22/2/2026
To go beyond definitions and use cases, this article focuses on the practical side: testing structured data to validate Schema.org markup and avoid errors that prevent eligibility for rich results. If you need the full framework covering formats, prioritisation, SEO and GEO structured data, refer to our comprehensive guide on structured data.
How to Test Structured Data to Validate Schema.org Markup
A markup test is designed to confirm two complementary things: (1) your code follows the Schema.org vocabulary (types, properties, values) and (2) Google can use it to make the page eligible for specific rich results. The most reliable approach is a chained check:
- Validate Schema.org compliance with the official validator, to eliminate structural and syntax errors.
- Check Google eligibility with the Rich Results Test, to see what Google detects and what blocks a rich result.
- Monitor at site level in Google Search Console, under "Enhancements", to spot regressions after deployment.
This reflects how Google and Schema.org tools are intended to be used: "standard" validation on one side, then "Google-specific" eligibility checks on the other, followed by ongoing monitoring via Search Console.
What You Are Actually Checking When You Test Structured Data
Schema.org Validation vs Eligibility for Rich Results
A Schema.org validator answers: "Is my markup correct according to the standard?" Google's test answers: "Can Google use this markup for a given rich result display?"
- Schema.org compliance: recognised types and properties, consistent structure, correctly formatted values (URL, date, currency, and so on), regardless of format (JSON-LD, microdata, RDFa).
- Google eligibility: required fields for a rich result are present, the type matches Google's supported features, and the markup aligns with visible content. Google can refuse eligibility even when the code is technically "valid".
Worth noting: Google regularly states that structured data does not automatically improve rankings, but it helps with comprehension and can enable enhanced displays when its criteria are met (see Search Central documentation referenced in our main article).
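To make the distinction concrete, here is a hypothetical Product block that passes Schema.org validation yet is generally not eligible for Google's product rich results, because it carries none of offers, review or aggregateRating (all names and values below are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Wireless Headphones",
  "description": "Over-ear wireless headphones.",
  "image": "https://www.example.com/images/headphones.jpg"
}
</script>
```

A block like this typically produces no errors in the Schema.org validator, while the Rich Results Test flags the missing commercial information, which illustrates why the two checks answer different questions.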
URL vs Code Snippet: Which Mode Should You Use to Test Markup?
Testing tools almost always offer two modes, and they serve different purposes:
- Snippet test: ideal for quickly debugging JSON-LD before deployment, covering syntax, missing fields and structure. Think of it as the "pre-flight" stage.
- URL test: essential for confirming what Google can truly read on an accessible, live page, including rendered HTML, loaded resources and any client-side injection. This is the "production verification" stage.
In practice, fix issues in "snippet" mode first, then confirm everything using "URL" mode once deployed.
Prerequisites Before Using a Structured Data Testing Tool: Rendering, Crawling and Data Consistency
Before interpreting a structured data error, check three prerequisites, as they account for a large share of the gaps between tool output and reality:
- Rendering: the markup must exist in the rendered HTML, not only inside JavaScript that executes after the initial page load.
- Crawling: the URL must be crawlable and not blocked, and must not be contradicted by canonicals or noindex directives if you expect it to be tracked in Search Console.
- Consistency: marked-up values must match visible content exactly, including price, availability, author, dates and address. Mismatches frequently lead to lost eligibility and can even trigger display penalties.
Which Testing Tools Should You Use to Check Schema.org Markup?
Rich Results Test: How to Use Google's Tool to Assess Eligibility
Google's Rich Results Test (search.google.com/test/rich-results) helps you check whether Google can correctly read a page and whether it qualifies for rich results. You can test a URL or paste code directly. The tool displays detected structured data, errors, warnings and a preview so you can understand how Google interprets the page.
Testing a Public URL with the Rich Results Test: Steps and Key Watch-Outs
- Paste the URL, ideally the accessible canonical version.
- Run the test and wait for the analysis to complete.
- Open the details for each detected type, such as Article, Product or FAQ.
- Note what is eligible versus not eligible, and read the explanation attached to each issue.
A key watch-out: if the page relies heavily on JavaScript or conditional rendering, results may differ from a simple code paste. That is precisely the value of a URL test — it validates "what Google sees" on the page that is actually served.
Testing a JSON-LD Snippet: Deployment, Fixes and Pre-Release Checks
Use "code" mode when iterating on markup, for example a Product or a BlogPosting:
- Paste the <script type='application/ld+json'>… block, or a complete HTML snippet.
- Fix any missing required fields flagged by the tool straight away.
- Once the test is clean, deploy to an accessible environment and re-test the URL.
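As a starting point for "code" mode, a minimal BlogPosting snippet might look like the following sketch, where the headline, author, date and URL are placeholders to replace with your own values:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "How to Test Structured Data",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2026-02-22",
  "image": "https://www.example.com/cover.jpg"
}
</script>
```

Paste the whole block, including the script tags, so the tool parses it exactly as it would appear in your page's HTML.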
A useful habit: build this check into your development workflow rather than waiting until the end of a redesign or feature release.
Reading the Output: Errors, Warnings and Eligible Items
- Error: blocks eligibility for the associated enhancement, for example a missing required property.
- Warning: does not always prevent eligibility, but signals incomplete data or an unmet recommendation — a potential missed opportunity.
- Detected items: the list of Schema types found and, where applicable, a preview.
Practical interpretation: a page might be read as an Article; a recipe page may include multiple blocks such as the recipe itself, reviews, a video and an image carousel. The goal is not to "force" an appearance, but to confirm the detected type matches the content and that nothing blocks eligibility.
Rich Snippet Checkers and Rich Snippet Tests: What They Show and Their Limits
When people talk about "checking a rich snippet", they are usually trying to answer two questions: "Am I eligible?" and "What is preventing the display?" The Rich Results Test is well suited to this, but it has a fundamental limitation: even with perfect markup, Google never guarantees that a rich result will actually appear. The engine decides based on relevance, quality and context, including device, browsing history and location.
Schema Markup Validator: A Schema Validator for Standards Compliance
The Schema Markup Validator (validator.schema.org) validates markup against Schema.org standards, independently of Google's eligibility rules. It supports the main formats — JSON-LD, microdata and RDFa — and helps you confirm syntax and structure.
What the Tool Checks: Types, Properties, Values and Structure
- Types: your @type exists and matches the Schema.org vocabulary.
- Properties: fields are permitted for the declared type and nested correctly.
- Values: expected formats such as URL, number, text and date are consistent.
- Structure: objects, lists and relationships between entities, for example Product linking to Offer.
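These four checks map directly onto a block like the following hypothetical Product, where the validator confirms each @type, the nesting of the Offer under offers, and the formats of price and url (all values are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Desk Lamp",
  "offers": {
    "@type": "Offer",
    "price": "49.90",
    "priceCurrency": "EUR",
    "url": "https://www.example.com/desk-lamp"
  }
}
</script>
```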
Spotting Invalid, Missing or Wrongly Typed Fields with a Schema Checker
This validator is particularly useful for catching pure modelling errors that can be missed when only Google's tool is used: a property placed at the wrong level, a value typed as text where a URL is expected, or an inconsistent type. These issues often explain markup that is "detected" but unusable, or behaviour that varies between search engines.
Common Cases: Nested Objects, Arrays, Dates, Currencies and Value Consistency
- Nested objects: an Offer should be a coherent object, not a flat list of fields.
- Arrays: some fields accept lists, such as multiple images, while others require a single value.
- Dates: non-standard formats or empty fields are particularly common on Event markup.
- Currencies: inconsistencies between price and priceCurrency, or textual values such as "199,90 €" instead of a numeric value plus a currency code.
- Consistency: marked-up price or stock that differs from what users see on the page is a frequent reason for refused enhancements.
Schema.org Validator: Good Practice for a Successful Schema.org Test
- Test even when nothing has changed: standards evolve, and older markup can become incomplete or inconsistent over time.
- Test before going live to avoid regressions on high-volume templates.
- Stay faithful to visible content: avoid marking up anything that is not shown to users or that could be misleading, as this risks losing eligibility and triggering display penalties.
- Combine with Google's test: validate Schema.org compliance first, then check Google eligibility.
Google Search Console: Checking Structured Data at Site Scale
Search Console does not replace a page-level test — it is a monitoring tool. It reports markup issues detected during crawling, surfacing alerts, errors and warnings across the site.
Where to Find "Enhancements" Reports Related to Structured Data
In Google Search Console, open the "Enhancements" section (or "Rich results", depending on the interface and detected types). You will find reports organised by category — products, FAQ, breadcrumbs and so on — along with the impacted URLs.
Understanding Statuses: Valid, Valid with Warnings, Invalid
- Valid: Google detects the markup and reports no blocking issues.
- Valid with warnings: an enhancement may be possible, but recommended properties are missing or improvements are suggested.
- Invalid: blocking errors are present; the page or type cannot benefit from the associated feature.
Linking an Issue to Templates Rather Than Individual URLs
When Search Console flags dozens or hundreds of URLs, look for the root cause at template level: an empty CMS field, a rendering condition, a front-end change, or a mis-versioned Schema mapping rule. Fixing the template is almost always far more effective than patching URLs one by one.
After a Fix: Validation, Tracking and Processing Time
Once you have made a fix, start the validation process in Search Console to trigger resolution tracking. You can also request reindexing to speed up changes for critical pages. Bear in mind that the appearance or return of a rich result display can take time and is never guaranteed, even when the markup is correct.
Diagnosing and Fixing Structured Data Tag Errors
Detected Types and Compatible Rich Results: What This Means in Practice
A sound diagnosis begins by checking whether the detected type matches the page's primary topic. Using the wrong type — for example, marking up a general article as a recipe — can cause confusion, cascading errors and make enhancements impossible. Aim for simple, robust mapping: one relevant primary type, then secondary types only where the page genuinely contains that content, such as a real FAQ, a real video or real product offers.
Blocking Errors vs Warnings: How to Prioritise SEO Fixes
Always prioritise in this order:
- Blocking errors: invalid syntax, missing required properties, inconsistent types, values in the wrong format.
- Warnings: missing recommended properties, addressed first on the highest-impact pages in terms of traffic, conversion, local visibility and strategic importance.
This triage is essential on large sites: it prevents you from spending time on completeness when a structural blocker means the page is not eligible for enhancements in the first place.
Rich Snippet Previews: What You Can and Cannot Conclude About SERP Display
A preview in a testing tool shows how Google interprets the page and what could be displayed if an enhancement is granted. However, it does not prove the rich result will appear in the SERPs: Google may still choose not to show it based on relevance, perceived quality and context.
The Wrong Schema Type: A Method to Choose Without Over-Marking Up
Work through three simple questions:
- What is the main topic of the page for a human visitor — product, service, article or local page?
- Which information is genuinely visible and verifiable — price, stock, author, date, address, FAQ entries, steps?
- Which Schema.org type most accurately describes that content without trying to force an enhancement?
Avoid adding unnecessary markup: over-marking can be interpreted as structured data spam and may cause you to lose eligibility altogether.
Missing Required Properties: How to Spot and Complete Them
Testing tools typically identify the missing property and the affected item. To fix issues efficiently:
- Identify the data source — a CMS field, a front-end component or an editorial block.
- Add the property at the correct level, for example offers within Product rather than at the top level of the page.
- Ensure the value is visible on the page and in the correct format.
- Re-test the snippet, then the URL.
Typical blockers include missing event dates for Event, missing review aggregation for certain review enhancements, and incomplete contact details for LocalBusiness.
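For the Event case mentioned above, a complete block supplies the date in ISO 8601 format and a nested location; the sketch below uses hypothetical values throughout:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Event",
  "name": "Example SEO Meetup",
  "startDate": "2026-03-15T19:00:00+01:00",
  "location": {
    "@type": "Place",
    "name": "Example Hall",
    "address": "1 Example Street, Paris"
  }
}
</script>
```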
Data That Does Not Match Visible Content: Risks and Best Practice
A critical rule: never declare information in structured data that is not visibly present on the page. This includes non-existent reviews, a different price from the one displayed, availability that is not shown, or unverifiable marketing claims. This kind of mismatch frequently leads to lost eligibility for rich results and can be treated as misleading by Google.
Format Errors: Invalid JSON, Encoding, Quotation Marks, Commas and Units
The most basic errors are also the most common:
- Invalid JSON caused by a trailing comma, curly quotes or missing braces.
- Incorrect encoding or escaping in text content, particularly with special characters.
- Mixed units or formats, for example "199,90 €" instead of 199.90 plus EUR.
- Text provided where a URL is expected, such as the availability property, which should point to a Schema.org URL.
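In practice, a correctly formatted Offer separates the numeric price from the ISO 4217 currency code and uses a Schema.org enumeration URL for availability (the price here is illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Offer",
  "price": "199.90",
  "priceCurrency": "EUR",
  "availability": "https://schema.org/InStock"
}
</script>
```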
An operational tip: correct these in snippet mode first, then validate on the final URL after deployment.
Markup Conflicts: Duplicate JSON-LD, Microdata and Multiple Tags
Conflicts arise when multiple sources generate competing markup, such as a template, a module and an injection all running simultaneously. Common symptoms include multiple different Product entities on one page, contradictory values, or an inconsistent mix of Microdata and JSON-LD. In these situations, the goal is not to add more markup but to deduplicate and maintain a single, faithful and maintainable representation.
Microdata vs JSON-LD: Avoiding Inconsistencies Between HTML Attributes and Script Blocks
If you still use microdata attributes such as itemscope, itemtype and itemprop alongside JSON-LD, ensure both describe exactly the same entity. Otherwise, Google may hesitate between the two, pick one arbitrarily or flag inconsistencies. If you are in the process of migrating, keep any overlap period short and test key page templates systematically.
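During such an overlap period, a consistent minimal pairing looks like this sketch, where the microdata attributes and the JSON-LD block describe the same entity with identical values (the product name is a placeholder):

```html
<!-- Microdata: HTML attributes on the visible content -->
<div itemscope itemtype="https://schema.org/Product">
  <h1 itemprop="name">Example Desk Lamp</h1>
</div>

<!-- JSON-LD: the same entity, same name, in a script block -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Desk Lamp"
}
</script>
```

Any divergence between the two — a different name, price or type — is exactly the kind of contradiction that testing key templates should catch.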
Validation Workflow: Test Before and After Going Live
Quick-Check Checklist for a Critical Page
- The primary type matches the page's main topic.
- The markup is present in the rendered HTML of the final URL.
- No blocking errors appear in the Rich Results Test.
- The Schema.org validator reports no structure or typing errors.
- All structured values are visible on the page, including price, dates, reviews and contact details.
- No contradictory duplicates or competing blocks are present.
Sampling vs Exhaustive Testing: How to Decide
On large sites, opt for intelligent sampling: the pages that drive the most traffic, your main templates and representative variants such as mobile versus desktop, in-stock versus out-of-stock, or past versus upcoming events. Then rely on Search Console for site-wide monitoring. This approach avoids costly exhaustive testing while significantly reducing the risk of regressions going undetected.
Change Tracking: What to Document to Save Time Later
Document the mapping between CMS fields and Schema.org properties: the data source, the expected format, conditional rules for when a field is absent, and the owner — whether that is a template, a component or a module. When a warning surfaces, you can trace the cause far more quickly without having to repeat the entire audit from scratch.
When to Re-Run a Schema Markup Validator After a Template Update
Re-validate after any change that affects marked-up information: front-end redesigns, product or service template changes, adding a reviews module, modifying pricing or stock rules, internationalisation work, or automations that alter visible content. This is a core habit of a thorough technical SEO audit: prevent regressions rather than react to them.
GEO Angle: Impact on Visibility in Generative AI Answers
From Search to Assistants: Readability, Disambiguation and Trust
As search engines and AI assistants increasingly rely on discrete facts — prices, availability, named entities, contact details, process steps and FAQs — clean and consistent markup helps with disambiguation and reduces contradictions. This is particularly relevant as search shifts towards zero-click behaviour (Semrush, 2025, cites 60% of searches ending without a click, as referenced in our SEO statistics) and AI-generated answers take a growing role in user journeys.
What You Can Measure in Practice: Indexation, Observable Signals and Reports
In the short term, focus on measurable, observable signals rather than vague promises: (1) no errors in Search Console reports, (2) stable impressions, clicks and CTR on pages with active enhancements, (3) consistent entities — the same organisation name, contact details and conventions — across all key pages. To broaden your analysis towards AI visibility and GEO indicators, you can also consult our GEO statistics, which provide useful SERP trend context.
Tracking Structured Data Issues with Incremys (Google Search Console API)
Centralising Alerts and Prioritising Fixes Through a Data-Driven SEO Audit
To industrialise monitoring, Incremys leverages the Google Search Console API to centralise structured data errors and warnings, placing them within a broader prioritisation view that covers high-impact pages, affected templates and 360° SEO monitoring — including aggregated data from both Search Console and Google Analytics. This continuous analysis approach sits within the SEO 360° Audit module, without replacing the page-level checks that remain necessary during development.
FAQ: Testing, Tools and Interpreting Results
How Can You Test Your Structured Data Reliably?
Follow a simple chain: (1) validate standards compliance with validator.schema.org, (2) test Google eligibility with the Rich Results Test — snippet first, then URL — and (3) monitor "Enhancements" reports in Search Console to catch regressions early.
Which Testing Tool Should You Choose: Rich Results Test, Schema Markup Validator or Google Search Console?
They are complementary rather than interchangeable. The Schema.org validator checks standards compliance, the Rich Results Test checks Google eligibility and how the page is read, and Search Console handles site-wide monitoring including alerts, trends and post-fix validation.
How Do You Interpret an Error Versus a Warning in a Test?
An error is typically blocking for the targeted enhancement — a missing required field, an inconsistent type or an invalid format. A warning usually means a recommended field is absent or data completeness could be improved; prioritise these on your most strategically important pages.
Why Might a Validated Page Still Not Show a Rich Snippet?
Because testing tools validate eligibility and interpretation, not Google's final display decision. Google may choose not to show a rich result based on relevance, perceived quality, context such as device, location and browsing history, and competition within the SERP.
What Should You Do if Google Search Console Flags an Issue but the Rich Results Test Does Not?
Check (1) whether the issue relates to a different enhancement type from the one you tested, (2) whether the URL you tested is not the exact URL being crawled due to canonical settings or parameters, and (3) whether the markup varies depending on rendering, such as through an A/B test, personalisation or injection. Then fix the issue at template level and trigger validation in Search Console.
Do You Need to Test Every Page or Just Templates?
On large websites, prioritise template-based testing and intelligent sampling — covering key templates and high-traffic pages — then let Search Console surface issues at scale. Exhaustive page-by-page testing is only practical for small sites or a tightly defined critical scope.
How Do You Handle Conflicts Between JSON-LD and Microdata?
Avoid maintaining two different versions of the truth simultaneously. If JSON-LD and microdata coexist, enforce strict consistency of values and types across both. If contradictory duplicates are present, remove the redundant source — whether a module, a template or an injection — and re-test the URL.
How Often Should You Re-Run a Rich Snippet Test After a Content or Template Update?
Test every time a change affects marked-up information, including price, stock levels, dates, author, contact details or FAQ content, or whenever the way that information is rendered changes. Also establish a regular review routine, as Schema.org standards and Google's display requirements continue to evolve.
Do Structured Data Improvements Also Help with GEO Visibility in Generative AI Answers?
They do not guarantee a citation, but they can make information more readable and less ambiguous for automated systems including search engines, AI assistants and language models. As search continues to evolve — with zero-click results, AI overviews and large-scale content production — consistent, well-tested markup reduces contradictions and improves the quality of usable signals. To explore this further, consult our SEA statistics for acquisition context and our resources on schema SEO.
To keep improving your content across SEO, GEO and technical performance, explore all our analysis and resources on the Incremys Blog.