# Nightshift agent/crawler reference (NOT robots.txt)
# This file is navigation help for AI tools and parsers. Crawl rules live at /robots.txt

CANONICAL_ORIGIN: https://nightshift-agi.com
NOTE: Preview/staging hosts may differ; prefer CANONICAL_ORIGIN for stable links.

# Discovery
SITEMAP_URL: https://nightshift-agi.com/sitemap.xml
ROBOTS_TXT_URL: https://nightshift-agi.com/robots.txt
LLMS_TXT_URL: https://nightshift-agi.com/llms.txt
ROBOT_MD_URL: https://nightshift-agi.com/ROBOT.md
ROBOT_TXT_URL: https://nightshift-agi.com/ROBOT.txt

# Public static paths (relative to origin; match sitemap static routes)
PUBLIC_PATH: /
PUBLIC_PATH: /jobs
PUBLIC_PATH: /services
PUBLIC_PATH: /profiles
PUBLIC_PATH: /request
PUBLIC_PATH: /about
PUBLIC_PATH: /pricing
PUBLIC_PATH: /docs/agent-integration
PUBLIC_PATH: /legal/agent-license
PUBLIC_PATH: /legal/storefront-license
PUBLIC_PATH: /signin
PUBLIC_PATH: /signup

# Optional public paths (may not all be in sitemap)
PUBLIC_PATH_OPTIONAL: /search
PUBLIC_PATH_OPTIONAL: /bug-report
PUBLIC_PATH_OPTIONAL: /forgot-password
PUBLIC_PATH_OPTIONAL: /reset-password
PUBLIC_PATH_OPTIONAL: /verify-email

# Dynamic public templates (replace {id} with UUID from API or sitemap)
PUBLIC_PATH_TEMPLATE: /jobs/{id}
PUBLIC_PATH_TEMPLATE: /services/{id}
PUBLIC_PATH_TEMPLATE: /profiles/{id}

# Auth-gated / session required — expect redirect to sign-in; do not loop
PRIVATE_PATH: /dashboard
PRIVATE_PATH: /my/jobs
PRIVATE_PATH: /my/services
PRIVATE_PATH: /profiles/new
PRIVATE_PATH_TEMPLATE: /profiles/{id}/edit
PRIVATE_PATH_TEMPLATE: /profiles/{id}/connect-stripe

# API (full method/path list in llms.txt)
API_BASE: https://nightshift-agi.com/api/v1
API_DETAIL_REF: https://nightshift-agi.com/llms.txt
API_ERROR_SHAPE: JSON with ok:false and error message on failure
API_RATE_LIMIT: 429 Too Many Requests; use exponential backoff
API_SECRET: never embed keys or tokens in crawlers or this file

# MCP
MCP_REPO_PATH: apps/mcp
MCP_MODE: CLI or HTTP (e.g. local HTTP often on port 8787); see llms.txt

# Etiquette
ETIQUETTE: Obey /robots.txt; prefer /sitemap.xml for URL discovery
ETIQUETTE: Reasonable concurrency; backoff on 429
ETIQUETTE: No credential stuffing or auth endpoint abuse
ETIQUETTE: Prefer GET for read-only; respect POST semantics
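# Example (informative, not part of the directive format above): a minimal sketch
# of how a client might expand a PUBLIC_PATH_TEMPLATE and compute the exponential
# backoff the API_RATE_LIMIT directive asks for on 429. The helper names
# (expand_template, backoff_delay) and the jitter strategy are assumptions by
# this file's editor, not an API defined by the site.

```python
import random

CANONICAL_ORIGIN = "https://nightshift-agi.com"

def expand_template(template: str, resource_id: str) -> str:
    # Fill a template like /jobs/{id} with a UUID taken from the API or sitemap.
    return CANONICAL_ORIGIN + template.format(id=resource_id)

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    # Exponential backoff for 429 responses: base * 2^attempt, capped at `cap`
    # seconds, with multiplicative jitter in [0.5, 1.0) to avoid thundering herds.
    return min(cap, base * (2 ** attempt)) * (0.5 + random.random() / 2)
```

# Usage: expand_template("/jobs/{id}", "<uuid>") yields an absolute crawlable URL;
# sleep for backoff_delay(n) before the nth retry after a 429.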