
FEAT: Support audio model #929

Merged: 13 commits into xorbitsai:main on Jan 25, 2024

Conversation

@codingl2k1 (Contributor) commented Jan 24, 2024

  • Currently, only the Whisper model is supported.
  • Provides the /v1/audio/transcriptions and /v1/audio/translations endpoints.
  • Compatible with the OpenAI API (see the example below; a raw-HTTP sketch follows it).

Example:

import openai

# Placeholder values: adjust to your deployment.
endpoint = "http://127.0.0.1:9997"                 # Xinference server endpoint
model_uid = "<uid-of-launched-whisper-model>"      # UID returned when launching the model
zh_cn_audio_path = "<path-to-a-chinese-audio-file>"

client = openai.Client(api_key="not empty", base_url=f"{endpoint}/v1")
with open(zh_cn_audio_path, "rb") as f:
    # Transcription returns text in the original (Chinese) language.
    completion = client.audio.transcriptions.create(model=model_uid, file=f)
    assert "列表" in completion.text
    assert "香港" in completion.text
    assert "航空" in completion.text

    # Translation returns English text.
    completion = client.audio.translations.create(model=model_uid, file=f)
    translation = completion.text.lower()
    assert "list" in translation
    assert "airlines" in translation
    assert "hong kong" in translation

@XprobeBot XprobeBot added this to the v0.8.2 milestone Jan 24, 2024
@codingl2k1 codingl2k1 marked this pull request as ready for review January 25, 2024 08:36
@aresnow1 aresnow1 merged commit 8069552 into xorbitsai:main Jan 25, 2024
11 of 12 checks passed