Getting Started
Learn how to install and run CrawlBeacon in under a minute.
This page ships as static HTML in the production build, so search engines see meaningful metadata, copy, and internal links before JavaScript runs.
What this page covers
Install and run the CrawlBeacon CLI to check your site's SEO health: validate meta tags, structured data, images, and robots.txt from your terminal or in a CI/CD pipeline.
- This page explains the getting-started workflow and where it fits in the CrawlBeacon toolchain.
- Examples, configuration notes, and related references make it easier to move from initial setup to automation.
- Links to the other documentation pages create a crawlable path through the product knowledge base.
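A first run of the workflow above might look something like the following. The package name, subcommand, and flags shown here are illustrative assumptions, not the documented interface; the CLI Reference page linked below covers the actual commands and output formats.

```shell
# Hypothetical quick start — package name, subcommand, and flags are assumptions.
npm install -g crawlbeacon                 # assumed: install the CLI globally via npm

# Assumed: run all analyzers (meta tags, structured data, images, robots.txt)
# against a live URL and print a human-readable report.
crawlbeacon check https://example.com

# Assumed: emit machine-readable JSON for parsing in a CI/CD pipeline.
crawlbeacon check https://example.com --json > seo-report.json
```

In a pipeline, the JSON output would typically be parsed or archived as a build artifact, with the command's exit code deciding whether the job passes.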
Why this page matters
CrawlBeacon documentation is organized around quick setup, analyzer reference material, and automation guidance for engineering teams.
Keeping these pages in static HTML gives crawlers distinct content and stronger internal links across the docs section.
Related CrawlBeacon resources
Analyzers Reference
Complete reference for all CrawlBeacon analyzers: meta tags, structured data, image SEO, and robots.txt. Every issue code, severity, and trigger explained.
CI/CD Integration
Add CrawlBeacon SEO checks to your CI/CD pipeline. GitHub Actions example, exit codes, JSON output for parsing, and programmatic API usage.
CLI Reference
Complete CrawlBeacon CLI reference. All commands, flags, output formats, scoring thresholds, and upcoming configuration file support.
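As a sketch of the CI/CD integration described above, a GitHub Actions job could be wired up roughly like this. The step layout follows standard GitHub Actions conventions, but the `crawlbeacon` command, its flags, and its exit-code behavior are assumptions for illustration; see the CI/CD Integration page for the documented setup.

```yaml
# Illustrative only — the crawlbeacon package name, flags, and exit-code
# semantics are assumptions, not the documented interface.
name: seo-check
on: [pull_request]

jobs:
  crawlbeacon:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g crawlbeacon    # assumed npm package name
      # Assumed: a nonzero exit code fails the job when the score
      # falls below the configured threshold.
      - run: crawlbeacon check https://example.com --json > seo-report.json
      - uses: actions/upload-artifact@v4
        with:
          name: seo-report
          path: seo-report.json
```

Failing the job on a nonzero exit code keeps SEO regressions from merging, while the uploaded JSON report preserves the full issue list for review.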