
[Feature request] Real time whisper transcription #405

Open
vjeux opened this issue Nov 19, 2023 · 10 comments · May be fixed by #545
Labels
enhancement New feature or request

Comments

vjeux commented Nov 19, 2023

Real time whisper transcription

Right now the demo works on a recording, but it transcribes in one shot. I'd love to be able to do it as I speak. Sadly, the interface seems to accept only a Float32Array (or arrays of them), with no way to keep feeding it Float32Arrays as we receive them from the audio source.

Would be great to be able to do it in a streaming fashion.

Reason for request

I want to build a tool to help with voice recording, and I'd like a real-time transcription overlaid on top of the existing one to get a sense of progress.

Thanks <3
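The requested streaming interface could be approximated today by buffering incoming frames client-side. A minimal sketch of that buffering, assuming the transcriber keeps accepting a single merged Float32Array (the `AudioAccumulator` name and the surrounding wiring are hypothetical; only the merge logic is shown):

```javascript
// Sketch: accumulate incoming Float32Array frames (e.g. from the Web Audio
// API) into one buffer that can be handed to the existing one-shot
// transcriber on demand. Hypothetical helper, not part of whisper-web.
class AudioAccumulator {
  constructor() {
    this.chunks = [];
    this.length = 0;
  }

  // Called for each Float32Array received from the audio source.
  push(chunk) {
    this.chunks.push(chunk);
    this.length += chunk.length;
  }

  // Merge everything received so far into a single Float32Array,
  // matching the input shape the pipeline already accepts.
  snapshot() {
    const merged = new Float32Array(this.length);
    let offset = 0;
    for (const chunk of this.chunks) {
      merged.set(chunk, offset);
      offset += chunk.length;
    }
    return merged;
  }
}
```

Calling `snapshot()` on a timer and re-running the transcriber over the growing buffer would give a crude "as I speak" experience, at the cost of redundant work on each pass.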

vjeux added the enhancement label Nov 19, 2023
xenova (Owner) commented Nov 23, 2023

Real-time transcription will hopefully be possible once WebGPU support is added, and we'll definitely revisit this (and update the demo) once it is. If someone in the community would like to try modifying the whisper-web source code (or provide a basic streaming implementation) that could be adapted once WebGPU is supported, that would be great! 😇

vjeux (Author) commented Nov 23, 2023

Curious why this is waiting for WebGPU. At least on my pre-M1 MacBook Pro, decoding is faster than the duration of the recording. What would be needed is a way to feed audio frames asynchronously instead of all at once.

xenova (Owner) commented Nov 23, 2023

The major bottleneck at the moment is the encoder, which can take a few seconds to process ~30 seconds of audio. Ideally, processing shorter audio sequences would take much less time; however, this is a hard constraint of the architecture: the initial transformation into log-mel spectrogram space produces 30-second chunks that are fed into the encoder. See here for more discussion on this.
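The fixed window described above can be illustrated with a standalone sketch (assumed constants, not the library's actual implementation): Whisper's feature extraction pads or truncates audio to 30 s at 16 kHz before the log-mel transform, so encoder cost is roughly constant no matter how little audio you supply.

```javascript
// Illustration of the hard 30-second window. Audio shorter than 30 s is
// zero-padded; longer audio is truncated. Hypothetical helper mirroring
// the behavior described above, not whisper-web's real code.
const SAMPLE_RATE = 16000;
const CHUNK_SECONDS = 30;
const N_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS; // 480,000 samples

function padOrTruncate(audio) {
  const out = new Float32Array(N_SAMPLES); // zero-filled by default
  out.set(audio.subarray(0, Math.min(audio.length, N_SAMPLES)));
  return out;
}
```

This is why feeding the encoder 2 seconds of speech costs about as much as feeding it 30: the input tensor is the same size either way.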

vjeux (Author) commented Nov 28, 2023

Sorry for the super late reply. That makes sense; thanks for the link to the discussions. Let me bring more visibility to this issue and see if someone is interested in contributing.

luwes commented Dec 7, 2023

It's not real time, but it might give someone some inspiration for chunked processing.
I created a custom video element that automatically generates captions from the source (MP4 only at the moment).
repo: https://github.com/luwes/ai-media-element
demo: https://luwes.github.io/ai-media-element/
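Chunked processing of a long recording can be sketched as splitting the audio into fixed-length windows with a small overlap ("stride"), so a word cut at a chunk boundary also appears at the start of the next chunk. The constants and generator below are illustrative assumptions, not luwes's actual implementation:

```javascript
// Hypothetical sketch of chunked processing with overlap. Each yielded
// chunk can be transcribed independently, and overlapping regions help
// stitch the results back together.
const SAMPLE_RATE = 16000;

function* chunks(audio, chunkSec = 30, strideSec = 5) {
  const size = chunkSec * SAMPLE_RATE;                // samples per chunk
  const step = (chunkSec - strideSec) * SAMPLE_RATE;  // advance per chunk
  for (let start = 0; start < audio.length; start += step) {
    yield audio.subarray(start, start + size); // clipped at the end
  }
}
```

For a 60-second recording with these defaults, this yields three overlapping chunks, the last one shorter than 30 seconds.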

arpu commented Dec 26, 2023

Does ONNX deprecate the WebGL backend?

xenova linked a pull request Jan 27, 2024 that will close this issue
avie41 commented Feb 19, 2024

Hi luwes, xenova,
Did you finally manage to implement real-time transcription with Whisper? Or do you think it is still too early, given the processing time the encoder requires at inference?

everythinginjs commented

Hi @xenova,
This is a must-have feature. Looking forward to any updates!

xenova (Owner) commented Sep 6, 2024

This is now possible with Transformers.js v3: https://x.com/xenovacom/status/1799110540700078422 🥳
Online demo: https://huggingface.co/spaces/Xenova/realtime-whisper-webgpu

(video attachment: whisper-realtime.mp4)

I'll close this issue once Transformers.js v3 is officially out and #545 is merged 🚀
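Conceptually, a realtime demo like the one linked above re-transcribes a sliding window of the most recent audio on an interval. A minimal sketch of just the windowing step, assuming the transcription itself is done elsewhere by a Transformers.js v3 pipeline (the `latestWindow` helper and constants are assumptions for illustration):

```javascript
// Keep only the most recent `seconds` of audio for re-transcription,
// so each pass stays within the model's fixed encoder window.
const SAMPLE_RATE = 16000;

function latestWindow(buffer, seconds) {
  const maxSamples = SAMPLE_RATE * seconds;
  return buffer.length <= maxSamples
    ? buffer
    : buffer.subarray(buffer.length - maxSamples);
}
```

Each tick, the app would call something like `transcriber(latestWindow(buffer, 30))`, discarding audio older than the window rather than re-encoding the whole session.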

vjeux (Author) commented Sep 6, 2024

(image attachment)
