Leave No Knowledge Behind During Knowledge Distillation: Towards Practical and Effective Knowledge Distillation for Code-Switching ASR Using Realistic Data
Paper: arXiv:2407.10603
Authors: Liang-Hsuan Tseng, Zih-Ching Chen, Wei-Shun Chang, Cheng-Kuang Lee, Tsung-Ren Huang, Hung-yi Lee
⚠️ Due to privacy and security concerns, this model has been temporarily taken offline. We apologize for the inconvenience.
For usage with transformers, please visit https://huggingface.co/andybi7676/cool-whisper-hf

To transcribe with faster-whisper:

from faster_whisper import WhisperModel
import soundfile as sf
model_card = "andybi7676/cool-whisper"
audio_fpath = "/your/path/to/audio.wav"
audio_info = sf.info(audio_fpath)
print(audio_info)  # print basic audio info (sample rate, duration) for debugging
model = WhisperModel(model_card, device="cuda", compute_type="float16")
segments, info = model.transcribe(audio_fpath, beam_size=5, language="zh", condition_on_previous_text=True)  # use "zh" for zh-en code-switching in cool-whisper
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
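If you prefer the Hugging Face transformers stack, the checkpoint at andybi7676/cool-whisper-hf can be loaded through the standard automatic-speech-recognition pipeline. The sketch below is an illustration only: it assumes the -hf repository stores a standard Whisper-format checkpoint and that a CUDA GPU is available; adjust device and dtype for your setup.

import torch
from transformers import pipeline

# Assumption: andybi7676/cool-whisper-hf follows the standard Whisper checkpoint layout.
asr = pipeline(
    "automatic-speech-recognition",
    model="andybi7676/cool-whisper-hf",
    torch_dtype=torch.float16,  # use torch.float32 on CPU
    device="cuda",              # or "cpu" / an integer device index
)

result = asr(
    "/your/path/to/audio.wav",
    return_timestamps=True,              # request segment-level timestamps
    generate_kwargs={"language": "zh"},  # "zh" for zh-en code-switching, as above
)
for chunk in result["chunks"]:
    print(chunk["timestamp"], chunk["text"])

Both snippets should produce comparable segment-level transcripts; faster-whisper (CTranslate2) is generally the lighter option for inference, while the transformers pipeline integrates more easily with the rest of the Hugging Face ecosystem.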