Indexed in EI / SCOPUS / CSCD

Chinese Core Journal

Citation: WANG Meng, ZHANG Pengyuan. Short-time acoustic scene recognition method using multi-scale feature fusion[J]. ACTA ACUSTICA, 2022, 47(6): 717-726. DOI: 10.15949/j.cnki.0371-0025.2022.06.002


Short-time acoustic scene recognition method using multi-scale feature fusion


Abstract: To address the poor recognition performance in the short-time acoustic scene recognition task, an acoustic scene recognition method using multi-scale feature fusion is proposed. First, the method takes the sum and the difference of the stereo audio's left and right channels as input, and uses a long frame length during framing so that the extracted frame-level features contain sufficient audio information. The features are then fed frame by frame into a one-dimensional convolutional neural network with multi-scale feature fusion, which makes full use of the shallow, middle, and deep embeddings at different scales in the network. Finally, all frame-level soft labels are aggregated to obtain the scene classification result for the short-time audio. Experimental results show that the method achieves 79.02% accuracy on the Detection and Classification of Acoustic Scenes and Events (DCASE) 2021 short-time acoustic scene dataset, the best performance reported on this dataset to date.
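The input preparation and decision stages described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`mid_side`, `frame_signal`, `classify_clip`) and the frame/hop sizes are assumptions, and the multi-scale 1-D CNN itself is abstracted behind a generic per-frame classifier that returns soft labels.

```python
import numpy as np

def mid_side(stereo):
    # Sum and difference of the left and right channels,
    # as described in the abstract; stereo has shape (n_samples, 2).
    left, right = stereo[:, 0], stereo[:, 1]
    return np.stack([left + right, left - right], axis=0)

def frame_signal(x, frame_len, hop_len):
    # Split a 1-D signal into overlapping frames; a long frame_len
    # keeps enough audio information in each frame-level feature.
    n_frames = 1 + max(0, (len(x) - frame_len) // hop_len)
    return np.stack([x[i * hop_len : i * hop_len + frame_len]
                     for i in range(n_frames)])

def classify_clip(frames, frame_classifier):
    # frame_classifier stands in for the multi-scale 1-D CNN: it maps one
    # frame to a vector of class probabilities (soft labels). The clip-level
    # decision averages all frame-level soft labels, then takes the argmax.
    probs = np.stack([frame_classifier(f) for f in frames])
    return int(np.argmax(probs.mean(axis=0)))
```

A typical use would run `mid_side` on the stereo waveform, frame each of the two resulting channels, score every frame with the trained network, and call `classify_clip` to fuse the soft labels into a single scene label.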
