
Wav2Vec2-Conformer-Large-960h with Rotary Position Embeddings

Wav2Vec2 Conformer is a pretrained model with rotary position embeddings, fine-tuned on 960 hours of Librispeech speech audio sampled at 16kHz. When using the model, make sure that your speech input is also sampled at 16kHz.
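
If your audio is stored at a different sampling rate, one option is to resample it on the fly with the datasets library. This is a minimal sketch, assuming your data is loaded as a datasets dataset with an "audio" column (the dataset name below is just the dummy set used later in this card):

from datasets import load_dataset, Audio

# cast the "audio" column so examples are decoded and resampled to 16kHz on access
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))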

Paper: fairseq S2T: Fast Speech-to-Text Modeling with fairseq

Authors: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino

The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the official paper.

The original model can be found at https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

Usage

To transcribe audio files, the model can be used as a standalone acoustic model as follows:

from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC
from datasets import load_dataset
import torch

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

# load dummy dataset and read sound files
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# preprocess: convert the raw 16kHz waveform into model input values
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values

# retrieve logits
logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
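
Alternatively, here is a sketch using the high-level pipeline API, which wraps the processor and model shown above (the file path is a placeholder; decoding a local file assumes ffmpeg is available):

from transformers import pipeline

# the ASR pipeline handles preprocessing, inference and decoding in one call
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-conformer-rope-large-960h-ft")
print(asr("path/to/audio.wav")["text"])  # placeholder path to a 16kHz audio file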

Evaluation

This code snippet shows how to evaluate facebook/wav2vec2-conformer-rope-large-960h-ft on LibriSpeech's "clean" and "other" test data.

from datasets import load_dataset
from transformers import Wav2Vec2ConformerForCTC, Wav2Vec2Processor
import torch
from jiwer import wer


librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

def map_to_pred(batch):
    inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest")
    input_values = inputs.input_values.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")
    
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    # batch_decode returns a list; keep the single decoded string for this example
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription[0]
    return batch

result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))

Result (WER):

"clean" "other"
1.96 3.98