# tensorzero
**Repository Path**: mirrors/tensorzero
## Basic Information
- **Project Name**: tensorzero
- **Description**: TensorZero creates a feedback loop for optimizing LLM applications, turning production data into smarter, faster, and cheaper models
- **Primary Language**: Python
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: https://www.oschina.net/p/tensorzero
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 1
- **Created**: 2025-06-09
- **Last Updated**: 2025-06-12
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# TensorZero
Website · Docs · Twitter · Slack · Discord

Quick Start (5 min) · Comprehensive Tutorial · Deployment Guide · API Reference · Configuration Reference
| Question | Answer |
| --- | --- |
| **What is TensorZero?** | TensorZero is an open-source framework for building production-grade LLM applications. It unifies an LLM gateway, observability, optimization, evaluations, and experimentation. |
| **How is TensorZero different from other LLM frameworks?** | 1. TensorZero enables you to optimize complex LLM applications based on production metrics and human feedback. 2. TensorZero supports the needs of industrial-scale LLM applications: low latency, high throughput, type safety, self-hosted, GitOps, customizability, etc. 3. TensorZero unifies the entire LLMOps stack, creating compounding benefits. For example, LLM evaluations can be used for fine-tuning models alongside AI judges. |
| **Can I use TensorZero with ___?** | Yes. Every major programming language is supported. You can use TensorZero with our Python client, any OpenAI SDK, or our HTTP API. |
| **Is TensorZero production-ready?** | Yes. Here's a case study: *Automating Code Changelogs at a Large Bank with LLMs*. |
| **How much does TensorZero cost?** | Nothing. TensorZero is 100% self-hosted and open-source. There are no paid features. |
| **Who is building TensorZero?** | Our technical team includes a former Rust compiler maintainer, machine learning researchers (Stanford, CMU, Oxford, Columbia) with thousands of citations, and the chief product officer of a decacorn startup. We're backed by the same investors as leading open-source projects (e.g. ClickHouse, CockroachDB) and AI labs (e.g. OpenAI, Anthropic). |
| **How do I get started?** | You can adopt TensorZero incrementally. Our Quick Start goes from a vanilla OpenAI wrapper to a production-ready LLM application with observability and fine-tuning in just 5 minutes. |
**Model Providers.** The TensorZero Gateway natively supports all major model providers. Need something else? Your provider is most likely supported, because TensorZero integrates with any OpenAI-compatible API (e.g. Ollama).

**Features.** The TensorZero Gateway supports a range of advanced inference features. It is written in Rust 🦀 with performance in mind (<1 ms p99 latency overhead @ 10k QPS; see Benchmarks). You can run inference using the TensorZero client (recommended), the OpenAI client, or the HTTP API.
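For the HTTP API route, a standard-library-only sketch of an inference request is shown below. The `/inference` path, the request body shape, and the function name `my_function` are assumptions for illustration (a real function name comes from your gateway configuration); consult the API Reference for the actual schema.

```python
# Sketch: building an inference request for the TensorZero HTTP API using
# only the Python standard library.
# ASSUMPTIONS (not from this README): the gateway listens on localhost:3000
# and accepts POST /inference with a JSON body naming a configured function.
# "my_function" is a hypothetical function from your gateway configuration.
import json
import urllib.request

def build_request(question: str) -> urllib.request.Request:
    """Construct (but do not send) a POST request for one inference call."""
    payload = {
        "function_name": "my_function",  # hypothetical; defined in your config
        "input": {"messages": [{"role": "user", "content": question}]},
    }
    return urllib.request.Request(
        "http://localhost:3000/inference",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def send(req: urllib.request.Request) -> dict:
    """POST the request to a running gateway and return the parsed JSON reply."""
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

req = build_request("Summarize this changelog.")
```

`send(req)` only works against a live gateway; everything above it runs offline.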
TensorZero's optimization recipes include:

- Supervised Fine-tuning (UI)
- Preference Fine-tuning (DPO) (Jupyter Notebook)
- Best-of-N Sampling
- Mixture-of-N Sampling
- Dynamic In-Context Learning (DICL)
- Chain-of-Thought (CoT)
- MIPROv2
- DSPy Integration

TensorZero comes with several optimization recipes, but you can also easily create your own. This example shows how to optimize a TensorZero function using an arbitrary tool (here, DSPy, a popular library for automated prompt engineering).

**Observability:** Inference view · Function view

**Evaluation:** UI · CLI