<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Speech Processing | Tony Zhang</title><link>https://tony233.netlify.app/tag/speech-processing/</link><atom:link href="https://tony233.netlify.app/tag/speech-processing/index.xml" rel="self" type="application/rss+xml"/><description>Speech Processing</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Sun, 18 Jun 2023 03:12:37 +0000</lastBuildDate><image><url>https://tony233.netlify.app/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url><title>Speech Processing</title><link>https://tony233.netlify.app/tag/speech-processing/</link></image><item><title>MERLIon CCS Challenge: An English-Mandarin code-switching child-directed speech corpus for language identification and diarization</title><link>https://tony233.netlify.app/project/merlion-ccs-challenge-a-english-mandarin-code-switching-child-directed-speech-corpus-for-language-identification-and-diarization/</link><pubDate>Sun, 18 Jun 2023 03:12:37 +0000</pubDate><guid>https://tony233.netlify.app/project/merlion-ccs-challenge-a-english-mandarin-code-switching-child-directed-speech-corpus-for-language-identification-and-diarization/</guid><description>&lt;p>To enhance the reliability and robustness of language identification (LID) and language diarization (LD) systems for heterogeneous populations and scenarios, speech processing models need to be trained on datasets that feature diverse language registers and speech patterns. 
We present the MERLIon CCS challenge, featuring a first-of-its-kind Zoom video-call dataset of parent-child shared book reading: over 30 hours across more than 300 recordings, annotated by multilingual transcribers using a high-fidelity linguistic transcription protocol.&lt;/p></description></item><item><title>PQLM - Multilingual Decentralized Portable Quantum Language Model</title><link>https://tony233.netlify.app/project/pqlm-multilingual-decentralized-portable-quantum-language-model/</link><pubDate>Mon, 07 Nov 2022 18:40:34 +0000</pubDate><guid>https://tony233.netlify.app/project/pqlm-multilingual-decentralized-portable-quantum-language-model/</guid><description>&lt;p>With careful manipulation, malicious agents can reverse-engineer private information encoded in pre-trained language models. These security concerns motivate the development of quantum pre-training. In this work, we propose a highly portable quantum language model (PQLM) that can easily transmit information to downstream tasks on classical machines. The framework consists of a cloud PQLM built with random Variational Quantum Classifiers (VQC) and local models for downstream applications. We demonstrate the ad hoc portability of the quantum model by extracting only the word embeddings and effectively applying them to downstream tasks on classical machines. Our PQLM exhibits performance comparable to its classical counterpart on both intrinsic evaluation (loss, perplexity) and extrinsic evaluation (multilingual sentiment analysis accuracy) metrics. We also perform ablation studies on the factors affecting PQLM performance to analyze model stability. 
Our work establishes a theoretical foundation for a portable quantum pre-trained language model that could be trained on private data and made available for public use with privacy-protection guarantees.&lt;/p></description></item><item><title>End-to-End Lyrics Recognition with Self-supervised Learning</title><link>https://tony233.netlify.app/project/end-to-end-lyrics-recognition-with-self-supervised-learning/</link><pubDate>Mon, 07 Nov 2022 18:23:45 +0000</pubDate><guid>https://tony233.netlify.app/project/end-to-end-lyrics-recognition-with-self-supervised-learning/</guid><description>&lt;p>Lyrics recognition is an important task in music processing. Although traditional algorithms such as the hybrid HMM-TDNN model achieve good performance, studies applying end-to-end models and self-supervised learning (SSL) remain limited. In this paper, we first establish an end-to-end baseline for lyrics recognition and then explore the performance of SSL models on the lyrics recognition task. We evaluate a variety of upstream SSL models with different training methods (masked reconstruction, masked prediction, autoregressive reconstruction, and contrastive learning). Our end-to-end self-supervised models, evaluated on the DAMP music dataset, outperform the previous state-of-the-art (SOTA) system by 5.23% on the dev set and 2.4% on the test set, even without a language model trained on a large corpus. Moreover, we investigate the effect of background music on the performance of self-supervised learning models and conclude that the SSL models cannot extract features efficiently in the presence of background music. Finally, we study the out-of-domain generalization ability of the SSL features, considering that those models were not trained on music datasets.&lt;/p></description></item></channel></rss>