Start building for free. Upgrade for higher usage limits, dedicated support, and flexible hosting options.
Free
No credit card required
Get started
10K events per month
Up to 5 users
30-day data retention
Unlimited indexed metrics
Full evaluation, observability, and prompt management suite
Let's chat
Ideal for scaling teams
Book a demo
Custom usage limits
Unlimited users
Choose between multi-tenant SaaS, dedicated cloud, or self-hosting in VPC
SSO & SAML
Dedicated support, SLA, and security reviews
An event refers to a single trace span, structured log, or metric label combination sent to our API as OTLP or JSON. It captures any relevant data from your system, including all context fields generated by your application's instrumentation.
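For illustration, a single trace-span event might look like the sketch below. The field names here are assumptions chosen for readability, not our exact API schema:

```python
# Hypothetical sketch of one trace-span event, expressed as a Python dict
# before being sent as JSON. Field names are illustrative, not a verified
# reference for the API schema.
event = {
    "event_type": "trace_span",        # could also be a structured log or metric
    "project": "my-rag-app",
    "span_name": "retrieve_documents",
    "start_time": "2024-05-01T12:00:00Z",
    "duration_ms": 182,
    "metadata": {                      # context fields from your instrumentation
        "user_id": "u_123",
        "model": "gpt-4o",
    },
}
```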
Automated Evaluators: An automated evaluator is a function (code or LLM) that unit tests any event or combination of events to produce a measurable score (and, for LLM evaluators, an explanation). Common examples include Context Relevance, Answer Faithfulness, ROUGE, and BERTScore, among others. We provide many common evaluators out-of-the-box and let you define custom evaluators within the platform.
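As a minimal sketch of what a custom code evaluator can look like, consider the function below. The signature and return format are assumptions for illustration, not our exact evaluator interface:

```python
# Minimal sketch of a custom code evaluator: a plain function that takes an
# event and returns a measurable score. The event field names and the
# expected signature are illustrative assumptions.
def context_relevance(event: dict) -> float:
    """Score how much of the user query is covered by the retrieved context."""
    query_terms = set(event["inputs"]["query"].lower().split())
    context_terms = set(event["outputs"]["retrieved_context"].lower().split())
    # Fraction of query terms that appear in the retrieved context, in [0, 1].
    return len(query_terms & context_terms) / max(len(query_terms), 1)
```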
Human Evaluators: We strongly encourage a hybrid evaluation approach, i.e. combining automated techniques with human oversight. This helps you account for bias in your evaluation criteria and better align your evaluators with your domain experts' scoring rubric. To enable this, you can define custom scoring rubrics in HoneyHive for domain experts to use when evaluating outputs.
All data is encrypted at rest and in transit. We are SOC 2 Type II, GDPR, and HIPAA compliant, conduct regular penetration tests through third-party auditors, and provide flexible hosting options to meet your security and compliance needs. Contact us to learn more.
Yes, you can self-host HoneyHive in your Virtual Private Cloud (VPC) on the Enterprise plan. We support self-hosting across AWS, Azure, and GCP via Kubernetes, and are happy to provide additional support for highly custom deployments. Contact us to learn more.
No, we do not proxy your requests via our servers. Instead, we store prompts as YAML configurations, which can be deployed and fetched in your application logic using the GET Configuration API or by setting up a custom GitHub Workflow.
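In practice, fetching a deployed prompt at runtime might look like the sketch below. The base URL, endpoint path, parameters, and response shape are assumptions based on the GET Configuration API described above, not a verified reference:

```python
import requests

HONEYHIVE_API = "https://api.honeyhive.ai"  # hypothetical base URL

def get_prompt_config(project: str, name: str, api_key: str) -> dict:
    """Fetch a deployed prompt configuration by project and name (sketch)."""
    resp = requests.get(
        f"{HONEYHIVE_API}/configurations",   # assumed endpoint path
        headers={"Authorization": f"Bearer {api_key}"},
        params={"project": project, "name": name},
    )
    resp.raise_for_status()
    # The YAML configuration (template, model parameters, etc.) served as JSON.
    return resp.json()
```

Because prompts are fetched rather than proxied, your LLM calls go directly from your application to your model provider, and no request payloads pass through our servers.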
You can log traces using our SDKs and API endpoints, or asynchronously via our batch ingestion endpoint. We offer native Python and TypeScript SDKs with OpenTelemetry support, and provide automatic integrations with popular frameworks like LangChain, LlamaIndex, CrewAI, Vercel AI SDK, and others.
If you're working in another language, you can send your OpenTelemetry traces to our OTel collector or manually instrument your application using our APIs.
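For example, pointing a standard OpenTelemetry exporter at a collector looks roughly like the sketch below (shown in Python; the same pattern applies in any language with an OTel SDK). The endpoint URL and auth header are placeholders, not documented values:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Route spans from the standard OTel SDK to the collector. The endpoint and
# header below are hypothetical placeholders for illustration.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://otel.honeyhive.example/v1/traces",  # placeholder
            headers={"authorization": "Bearer <HONEYHIVE_API_KEY>"},
        )
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-app")
with tracer.start_as_current_span("llm_call"):
    ...  # your application logic; the span is exported on completion
```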
Yes, we offer startup discounts for companies that have raised less than $5M in total funding. Contact us to learn more.