One policy layer for every service. Zero code changes.
Declarative rate limiting, cost budgets, and abuse protection — managed as config, enforced at the edge. Your platform team defines the rules. Services just work.
What Fairvisor Does for Platform Engineers
Policy as Config
JSON policies, versioned in git or managed in the SaaS UI. One source of truth. Applied to all services via the edge — no per-service implementation. → Policy bundle docs
Git-Driven Policy Management
source_of_truth: git. Push policies to a repo. Fairvisor validates, builds bundles, and deploys to all edges. GitOps-native from day one. → Shadow mode rollout
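A minimal sketch of what a policy file in such a repo might contain. The schema, field names, and the shadow-mode flag below are assumptions for illustration, not the documented format — the policy bundle docs define the real schema.

```json
{
  "version": 1,
  "service": "payments-api",
  "mode": "shadow",
  "rules": [
    {
      "id": "tenant-rate-limit",
      "match": { "path": "/v1/*" },
      "limit": { "requests": 100, "window": "1m", "key": "tenant_id" }
    },
    {
      "id": "monthly-cost-budget",
      "budget": { "usd": 500, "period": "month", "key": "tenant_id" }
    }
  ]
}
```

Because the file is plain JSON in git, the PR diff is the policy change review.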
CLI for Developer Workflow
fairvisor init, validate, test --dry-run, connect, status, logs — full workflow from scaffolding to production. → CLI reference
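A typical session with those commands might look like the following. The commands are the ones named above; the comments describe plausible behavior — see the CLI reference for exact semantics and flags.

```shell
fairvisor init              # scaffold a policy repo
fairvisor validate          # lint policies before committing
fairvisor test --dry-run    # exercise policies without enforcing
fairvisor connect           # link the repo to the edge deployment
fairvisor status            # confirm the active bundle version
fairvisor logs              # tail recent decisions
```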
Two Deployment Modes
Reverse proxy (standalone, in front of your service) or decision service (sidecar, behind your existing ingress). Both work. Pick what fits your infra. → Deployment docs
Kubernetes-Native
/livez and /readyz health checks, Prometheus /metrics endpoint, Helm chart, StatefulSet or Deployment. Works with any service mesh. → Helm guide
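The probes and metrics endpoint wire into a standard Kubernetes Deployment. A sketch, assuming the container serves these paths on port 8080 — the port, image name, and scrape annotations are assumptions here; the Helm chart sets the real values.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fairvisor
spec:
  replicas: 2
  selector:
    matchLabels: { app: fairvisor }
  template:
    metadata:
      labels: { app: fairvisor }
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "8080"
    spec:
      containers:
        - name: fairvisor
          image: fairvisor/edge:latest   # image name is an assumption
          ports: [{ containerPort: 8080 }]
          livenessProbe:
            httpGet: { path: /livez, port: 8080 }
          readinessProbe:
            httpGet: { path: /readyz, port: 8080 }
```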
Policy Templates
Pre-built templates for LLM API protection, B2B SaaS tiered limits, and webhook protection. Instantiate, customize, deploy. → Golden bundles
Observability Built In
Prometheus metrics for every decision: fairvisor_decisions_total, fairvisor_loops_detected_total, fairvisor_circuit_breaker_trips_total, fairvisor_decision_duration_seconds. Plug into Grafana. Alert in PagerDuty. → Metrics reference
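These counters drop straight into standard Prometheus alerting rules. A sketch of a reject-spike alert, assuming a decision label on fairvisor_decisions_total distinguishes rejects — the label name and threshold are assumptions; check the metrics reference for the real label set.

```yaml
groups:
  - name: fairvisor
    rules:
      - alert: FairvisorRejectSpike
        # Rejects exceed 10% of all decisions for 5 minutes.
        expr: |
          sum(rate(fairvisor_decisions_total{decision="reject"}[5m]))
            / sum(rate(fairvisor_decisions_total[5m])) > 0.10
        for: 5m
        labels: { severity: page }
```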
Environment Promotion Controls
Promote the same policy set from dev → staging → prod with guardrails and diff visibility. Roll forward and rollback stay deterministic across environments.
Day-2 Operations
What running Fairvisor in production looks like after week one:
Policy changes
go through git push → validate → bundle → deploy. Same review process as code. Full diff history. Rollback to any version in seconds. → Operations hub
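Because every policy change is a commit, rollback is plain git. A self-contained sketch in a throwaway repo — file names and policy content are illustrative, and in a connected repo pushing the revert would trigger the validate → bundle → deploy pipeline:

```shell
# Sketch: GitOps rollback of a bad policy change, in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ops@example.com"
git config user.name "ops"

# v1: initial policy lands.
mkdir policies
echo '{"limit": 100}' > policies/api.json
git add . && git commit -qm "policy: limit 100"

# v2: an overly strict change ships.
echo '{"limit": 1}' > policies/api.json
git commit -qam "policy: limit 1 (too strict)"

# Rollback: revert the bad commit and the previous policy is restored.
git revert --no-edit HEAD
cat policies/api.json   # back to {"limit": 100}
```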
Incidents
start with decision tracing: use reject headers first, then debug session headers for policy/rule attribution. No blind log hunting. → Decision tracing
Emergency response
uses the kill-switch: freeze a tenant or route with a rapid, role-gated, audit-logged workflow. Propagation timing should be validated in your deployment runbook. → Kill-switch incident response
Runbooks
available for every operational scenario: reject spikes, budget exhaustion, bad bundle rollback, SaaS disconnect. → Runbooks | Policy lint checklist
Who This Is For
- Platform and infrastructure teams owning shared rate limiting across services
- Staff engineers standardizing API governance across microservices
- DevOps teams building internal developer platforms
- Engineering orgs where every team re-implements rate limiting differently
FAQ
How do teams deploy their own rate limits without involving ops?
Policies are JSON files in git. Teams write and review policies in the same repo with the same PR process as code. The platform team sets guardrails; product teams self-serve within them — no ticket, no ops dependency.
How does policy versioning work?
Every policy change creates a new immutable version. Full diff history. Rollback to any previous version in seconds via CLI or SaaS UI. You always know exactly what was enforced at any point in time — useful for both debugging and audits.
What deployment modes are supported?
Reverse proxy (standalone, Fairvisor in front of your service) or decision service (sidecar, behind your existing ingress via auth_request/ext_authz). Both modes share the same policy language, observability, and operational tooling. → Deployment docs
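In decision-service mode, the ingress delegates the allow/deny call per request. A sketch for nginx using the auth_request module — the port and the /decide endpoint path are assumptions; the deployment docs define the real wiring:

```nginx
location / {
    auth_request /_fairvisor;            # ask Fairvisor before proxying
    proxy_pass http://upstream_service;
}
location = /_fairvisor {
    internal;
    proxy_pass http://127.0.0.1:9090/decide;   # sidecar address is an assumption
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```

A 2xx from the subrequest lets traffic through; a 401/403 rejects it — standard auth_request semantics.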
How does shadow mode prevent bad policies from causing outages?
Shadow mode logs what would be throttled or rejected without enforcing anything. Run against real production traffic for a day or a week. Tune thresholds on real data. Promote to enforcement when you’re confident. → Shadow mode rollout
How do I integrate Fairvisor with our observability stack?
Prometheus metrics endpoint out of the box — plug into Grafana, alert in PagerDuty, ship to Datadog or Splunk. Standard /metrics endpoint, standard label names. No custom exporters. → Metrics reference
Is Fairvisor Kubernetes-native?
Yes. /livez and /readyz health checks, Prometheus /metrics, Helm chart, StatefulSet and Deployment support. Works with any service mesh including Istio and Linkerd. → Helm guide