
Braintrust AI Proxy

The Braintrust AI proxy is the easiest way to access the world's best AI models with a single API, including all of OpenAI's models, Anthropic models, LLaMa 2, Mistral, and others. The proxy:

  • Simplifies your code by providing a single API across AI providers.
  • Reduces your costs by automatically caching results and reusing them when possible.
  • Increases observability by automatically logging your requests. [Coming soon]

To read more about why we launched the AI proxy, check out our blog post announcing the feature.

This repository contains the code for the proxy — both the underlying implementation and wrappers that allow you to deploy it on Vercel, Cloudflare, AWS Lambda, or an Express server.

Just let me try it!

You can communicate with the proxy via the standard OpenAI drivers/API by simply setting the base URL to https://proxy.braintrustapi.com/v1. Try running the following script in your favorite language, twice; the second run should return from the cache almost instantly.

TypeScript

import { OpenAI } from "openai";
const client = new OpenAI({
  baseURL: "https://proxy.braintrustapi.com/v1",
  apiKey: process.env.OPENAI_API_KEY, // Can use Braintrust, Anthropic, etc. keys here too
});

async function main() {
  const start = performance.now();
  const response = await client.chat.completions.create({
    model: "gpt-3.5-turbo", // Can use claude-2, llama-2-13b-chat here too
    messages: [{ role: "user", content: "What is a proxy?" }],
    seed: 1, // A seed activates the proxy's cache
  });
  console.log(response.choices[0].message.content);
  console.log(`Took ${(performance.now() - start) / 1000}s`);
}

main();

Python

from openai import OpenAI
import os
import time

client = OpenAI(
  base_url="https://proxy.braintrustapi.com/v1",
  api_key=os.environ["OPENAI_API_KEY"], # Can use Braintrust, Anthropic, etc. keys here too
)

start = time.time()
response = client.chat.completions.create(
  model="gpt-3.5-turbo", # Can use claude-2, llama-2-13b-chat here too
  messages=[{"role": "user", "content": "What is a proxy?"}],
  seed=1, # A seed activates the proxy's cache
)
print(response.choices[0].message.content)
print(f"Took {time.time()-start}s")

cURL

time curl -i https://proxy.braintrustapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "What is a proxy?"
      }
    ],
    "seed": 1
  }' \
  -H "Authorization: Bearer $OPENAI_API_KEY" # Can use Braintrust, Anthropic, etc. keys here too
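The same request can also be made with plain fetch, without an SDK. Here is a minimal sketch (the endpoint and JSON body mirror the cURL example above; the live calls are an assumption in that they require an OPENAI_API_KEY or other supported provider key and network access). Timing two identical seeded requests is one way to observe the cache, since the second call can be served from it:

```typescript
// Sketch: call the proxy twice with the same seeded body and compare latency.
const PROXY_URL = "https://proxy.braintrustapi.com/v1/chat/completions";

// Build the request body used in the examples above.
// The `seed` field is what opts the request into the proxy's cache.
function buildBody(model: string, prompt: string, seed: number) {
  return { model, messages: [{ role: "user", content: prompt }], seed };
}

async function timedRequest(apiKey: string): Promise<number> {
  const start = performance.now();
  const res = await fetch(PROXY_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildBody("gpt-3.5-turbo", "What is a proxy?", 1)),
  });
  await res.json();
  return (performance.now() - start) / 1000;
}

async function main() {
  const key = process.env.OPENAI_API_KEY;
  if (!key) return; // No key available; skip the live calls.
  console.log(`First request took ${await timedRequest(key)}s`);
  // The second identical request is typically much faster: same body,
  // same seed, so the proxy can answer from its cache.
  console.log(`Second request took ${await timedRequest(key)}s`);
}

main();
```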

Docs

You can find the full documentation for using the proxy here. The proxy is hosted for you, with end-to-end encryption, at https://proxy.braintrustapi.com/v1. However, you can also deploy it yourself and customize its behavior.

For platform-specific deployment docs, see the READMEs in the corresponding folders.

Developing

To build the proxy, install pnpm and run:

pnpm install
pnpm build
