What it's for
- Discover and fix crawl budget issues that affect how search engines crawl and index your website
- Find documents that are inaccessible to users or are at risk of becoming isolated from the link graph
- Identify the pages that are most important for navigation within the internal site structure and therefore require good usability
- Optimize the internal navigation structure and paths to access documents for users and search engine bots
How it works
Structure optimization involves three stages that build on each other:
First, correct obvious errors and crawl budget problems. Next, optimize the website architecture. Finally, develop automated algorithms that boost important pages within the internal link graph.
Crawl budget optimization
Find broken links and unnecessary redirects
Identify pages and resources that consume crawl budget due to responding with an error or redirect HTTP status code. Correct broken links and reduce redirects by directly linking to the correct target URLs.
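This triage step can be sketched as a small helper that classifies each crawled link by its HTTP status code; the status values are assumed to come from your crawler's output:

```python
def classify_status(status):
    """Map an HTTP status code to a crawl-budget category."""
    if 200 <= status < 300:
        return "ok"
    if 300 <= status < 400:
        return "redirect"  # link should point directly at the final target
    return "error"         # broken link: fix or remove it

def links_to_fix(links):
    """Given (source, target, status) tuples, list links that waste crawl budget."""
    return [(src, tgt, classify_status(st))
            for src, tgt, st in links
            if classify_status(st) != "ok"]
```

Running `links_to_fix` over a crawl export yields the concrete list of link edits: repoint redirecting links and repair or remove erroring ones.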
Duplicate content audit
Discover URLs with duplicate content that might be penalized by search engines. Remove those pages to free up crawl budget, or differentiate their content so they can rank.
Find crawl budget problems caused by similar URLs, often generated by fault-tolerant URL handling or unsorted parameters in combination with faceted navigation.
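One common source of such duplicates is unsorted query parameters. A minimal sketch of detecting them, using only the Python standard library, canonicalizes each URL by sorting its parameters and then groups URLs that share a canonical form:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize(url):
    """Canonicalize a URL by sorting its query parameters, so that
    faceted-navigation variants like ?color=red&size=m and
    ?size=m&color=red collapse to one key."""
    parts = urlsplit(url)
    query = urlencode(sorted(parse_qsl(parts.query)))
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, ""))

def duplicate_groups(urls):
    """Group crawled URLs that normalize to the same canonical form."""
    groups = {}
    for url in urls:
        groups.setdefault(normalize(url), []).append(url)
    return {key: group for key, group in groups.items() if len(group) > 1}
```

Each group is a candidate for consolidation, e.g. via a canonical tag or a redirect to one surviving URL.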
Redistribute crawl budget by reducing the number of URLs that are non-indexable due to noindex meta tags or canonical tags pointing to different URLs. Remove those pages, or improve them and set them to index.
Identify thin content pages, such as listings with too few items, using Regex or XPath expressions, and improve them by adding elements (e.g. headlines, images or tables).
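As an illustration, a crude Regex-based thin-content heuristic could count structural elements in the HTML; the element types and the threshold below are illustrative assumptions, not fixed rules:

```python
import re

def content_score(html):
    """Count structural elements as a rough proxy for content richness.
    Headlines, images and tables are weighted equally here for simplicity."""
    headlines = len(re.findall(r"<h[1-6][ >]", html, re.I))
    images = len(re.findall(r"<img[ >]", html, re.I))
    tables = len(re.findall(r"<table[ >]", html, re.I))
    return headlines + images + tables

def is_thin(html, threshold=3):
    """Flag a page as thin content if it falls below the (assumed) threshold."""
    return content_score(html) < threshold
```

In practice you would tune the scored elements and threshold per page type rather than apply one global rule.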
Website architecture optimization
Reachability and level architecture
Analyze differences in the level architecture between users and bots by examining navigation paths. Find documents that are isolated or unreachable for users, e.g. due to missing links or excessive click depth.
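Click depth can be computed with a breadth-first search over the link graph: each page's level is its shortest click path from the start page, and pages missing from the result are unreachable. A minimal sketch, assuming the crawl yields (source, target) link pairs:

```python
from collections import deque

def crawl_depths(links, start):
    """Breadth-first search over (source, target) links.
    Returns {page: click depth from start}; pages absent from the
    result are orphaned or otherwise unreachable from `start`."""
    graph = {}
    for src, tgt in links:
        graph.setdefault(src, []).append(tgt)
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for nxt in graph.get(page, []):
            if nxt not in depths:
                depths[nxt] = depths[page] + 1
                queue.append(nxt)
    return depths
```

Running this once on the user-visible link graph and once on the bot-visible graph (e.g. including nofollow differences) exposes exactly the level discrepancies described above.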
PageRank, CheiRank and 2D-Rank
Identify authorities, hubs and key pages by calculating PageRank, CheiRank and 2D-Rank. Make data-driven decisions by comparing changes between different crawls for important pages and page types.
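A minimal sketch of these metrics: PageRank via power iteration, CheiRank as PageRank on the reversed link graph, and one simple way to combine the two into a single score. The product used for the combined score is an illustration; the exact 2D-Rank formula may differ:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank on (source, target) links."""
    nodes = {n for edge in links for n in edge}
    out = {n: [] for n in nodes}
    for src, tgt in links:
        out[src].append(tgt)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src in nodes:
            # Dangling pages distribute their rank evenly across all nodes.
            targets = out[src] or list(nodes)
            share = damping * rank[src] / len(targets)
            for tgt in targets:
                new[tgt] += share
        rank = new
    return rank

def two_d_rank(links):
    """Combine PageRank (incoming authority) with CheiRank
    (PageRank on the reversed graph, i.e. outgoing hub value).
    The product is one illustrative combination, not a fixed definition."""
    pr = pagerank(links)
    cr = pagerank([(tgt, src) for src, tgt in links])  # CheiRank
    return {n: pr[n] * cr[n] for n in pr}
```

Comparing these scores between two crawls shows which page types gained or lost internal authority after a change.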
Segmentation by page type
Collect and analyze data for page types (e.g. products, categories, pagination) by segmenting pages based on URL, status code, indexability or any property within the HTML matchable by string comparison, Regex or XPath expressions.
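URL-based segmentation can be sketched as an ordered list of rules where the first match wins; the segment names and patterns below are hypothetical examples for a shop-style URL scheme:

```python
import re

# Hypothetical segment rules: (name, URL pattern). Order matters; first match wins.
SEGMENTS = [
    ("product", re.compile(r"/p/\d+")),
    ("pagination", re.compile(r"[?&]page=\d+")),
    ("category", re.compile(r"/c/[\w-]+$")),
]

def segment(url):
    """Assign a page-type segment to a URL, falling back to 'other'."""
    for name, pattern in SEGMENTS:
        if pattern.search(url):
            return name
    return "other"
```

Aggregating crawl metrics per segment (status codes, indexability, ranks) then makes patterns visible that individual URLs hide.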
Simulate SEO changes
Rewrite URL patterns to exclude or change URLs during crawling, and simulate and evaluate a variety of changes (e.g. removing redirects, combining URLs, removing URLs) without implementing them first.
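Such a simulation can be expressed as rewrite rules applied to every URL before it enters the simulated link graph; the two rules below (stripping a session parameter, moving a section) are hypothetical examples:

```python
import re

# Hypothetical rewrite rules: (pattern, replacement), applied in order.
REWRITES = [
    (re.compile(r"[?&]sessionid=\w+"), ""),     # simulate removing a tracking parameter
    (re.compile(r"^/old/(.*)$"), r"/new/\1"),   # simulate moving a site section
]

def rewrite(url):
    """Apply all rewrite rules to a URL to simulate a structural change."""
    for pattern, repl in REWRITES:
        url = pattern.sub(repl, url)
    return url
```

Recomputing levels and ranks on the rewritten graph shows the effect of the change before any code ships.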
Boost important pages via API data
Pull data like links, PageRank, CheiRank and 2D-Rank from our API and use it to improve automated internal linking, e.g. in your recommendation engine or in teaser boxes, to push pages that could benefit from more links.
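A sketch of this workflow, assuming a hypothetical JSON endpoint (the URL, field names and selection thresholds below are illustrative, not the real API):

```python
import json
import urllib.request

API_URL = "https://api.example.com/pages"  # hypothetical endpoint

def pick_candidates(pages, min_pagerank=0.002, max_inlinks=10):
    """Illustrative selection rule: pages with solid PageRank but few
    incoming links are good candidates for extra internal links."""
    return [p["url"] for p in pages
            if p["pagerank"] >= min_pagerank and p["inlinks"] < max_inlinks]

def pages_needing_links(api_url=API_URL):
    """Fetch page metrics as JSON and select boost candidates."""
    with urllib.request.urlopen(api_url) as resp:
        return pick_candidates(json.load(resp))
```

The resulting URL list can feed a recommendation engine or teaser-box logic so that under-linked pages automatically receive more internal links.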