<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Projects | Tony Zhang</title><link>https://tony233.netlify.app/project/</link><atom:link href="https://tony233.netlify.app/project/index.xml" rel="self" type="application/rss+xml"/><description>Projects</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Sun, 18 Jun 2023 03:12:37 +0000</lastBuildDate><image><url>https://tony233.netlify.app/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url><title>Projects</title><link>https://tony233.netlify.app/project/</link></image><item><title>MERLIon CCS Challenge: A English-Mandarin code-switching child-directed speech corpus for language identification and diarization</title><link>https://tony233.netlify.app/project/merlion-ccs-challenge-a-english-mandarin-code-switching-child-directed-speech-corpus-for-language-identification-and-diarization/</link><pubDate>Sun, 18 Jun 2023 03:12:37 +0000</pubDate><guid>https://tony233.netlify.app/project/merlion-ccs-challenge-a-english-mandarin-code-switching-child-directed-speech-corpus-for-language-identification-and-diarization/</guid><description>&lt;p>To enhance the reliability and robustness of language identification (LID) and language diarization (LD) systems for heterogeneous populations and scenarios, there is a need for speech processing models to be trained on datasets that feature diverse language registers and speech patterns. 
We present the MERLIon CCS Challenge, featuring a first-of-its-kind Zoom video-call dataset of parent-child shared book reading, comprising over 300 recordings totaling more than 30 hours, annotated by multilingual transcribers using a high-fidelity linguistic transcription protocol.&lt;/p></description></item><item><title>Twin-S: A Digital Twin for Skull-base Surgery</title><link>https://tony233.netlify.app/project/twin-s-a-digital-twin-for-skull-base-surgery/</link><pubDate>Wed, 23 Nov 2022 20:50:03 +0000</pubDate><guid>https://tony233.netlify.app/project/twin-s-a-digital-twin-for-skull-base-surgery/</guid><description>&lt;p>Purpose: Digital twins are virtual interactive models of the real world, exhibiting identical behavior and properties. In surgical applications, computational analysis from digital twins can be used, for example, to enhance situational awareness. Methods: We present a digital twin framework for skull-base surgeries, named Twin-S, which can be seamlessly integrated into various image-guided interventions. Twin-S combines high-precision optical tracking and real-time simulation. We rely on rigorous calibration routines to ensure that the digital twin representation precisely mimics all real-world processes. Twin-S models and tracks the critical components of skull-base surgery, including the surgical tool, patient anatomy, and surgical camera. Significantly, Twin-S updates and reflects real-world drilling of the anatomical model at frame rate. Results: We extensively evaluate the accuracy of Twin-S, which achieves an average error of 1.39 mm during the drilling process. We further illustrate how segmentation masks derived from the continuously updated digital twin can augment the surgical microscope view in a mixed reality setting, where bone requiring ablation is highlighted to provide surgeons with additional situational awareness. Conclusion: We present Twin-S, a digital twin environment for skull-base surgery. 
Twin-S tracks and updates the virtual model in real time given measurements from modern tracking technologies. Future research on complementing optical tracking with higher-precision vision-based approaches may further increase the accuracy of Twin-S.&lt;/p></description></item><item><title>PQLM - Multilingual Decentralized Portable Quantum Language Model</title><link>https://tony233.netlify.app/project/pqlm-multilingual-decentralized-portable-quantum-language-model/</link><pubDate>Mon, 07 Nov 2022 18:40:34 +0000</pubDate><guid>https://tony233.netlify.app/project/pqlm-multilingual-decentralized-portable-quantum-language-model/</guid><description>&lt;p>With careful manipulation, malicious agents can reverse-engineer private information encoded in pre-trained language models. These security concerns motivate the development of quantum pre-training. In this work, we propose a highly portable quantum language model (PQLM) that can easily transmit information to downstream tasks on classical machines. The framework consists of a cloud PQLM built with random Variational Quantum Classifiers (VQCs) and local models for downstream applications. We demonstrate the ad hoc portability of the quantum model by extracting only the word embeddings and effectively applying them to downstream tasks on classical machines. Our PQLM achieves performance comparable to its classical counterpart on both intrinsic (loss, perplexity) and extrinsic (multilingual sentiment analysis accuracy) evaluation metrics. We also perform ablation studies on the factors affecting PQLM performance to analyze model stability. 
Our work establishes a theoretical foundation for a portable quantum pre-trained language model that could be trained on private data and made available for public use with privacy protection guarantees.&lt;/p></description></item><item><title>A New Approach to Extract Fetal Electrocardiogram Using Affine Combination of Adaptive Filters</title><link>https://tony233.netlify.app/project/a-new-approach-to-extract-fetal-electrocardiogram-using-affine-combination-of-adaptive-filters/</link><pubDate>Mon, 07 Nov 2022 18:33:43 +0000</pubDate><guid>https://tony233.netlify.app/project/a-new-approach-to-extract-fetal-electrocardiogram-using-affine-combination-of-adaptive-filters/</guid><description>&lt;p>Detecting abnormal fetal heartbeats during pregnancy is important for monitoring the health of the fetus. While adult ECG analysis has advanced considerably in modern medicine, noninvasive fetal electrocardiography (FECG) remains a great challenge. In this paper, we introduce a new method based on affine combinations of adaptive filters to extract FECG signals. An affine combination of multiple filters can fit the reference signal more precisely and thus obtain more accurate FECGs. We propose methods to combine the Least Mean Square (LMS) and Recursive Least Squares (RLS) filters and find that the Combined Recursive Least Squares (CRLS) filter achieves the best performance among all proposed combinations. In addition, we find that CRLS is more advantageous for extracting FECG from abdominal electrocardiograms (AECG) with a small signal-to-noise ratio (SNR). Compared with the state-of-the-art MSF-ANC method, CRLS shows improved performance. 
The sensitivity, accuracy, and F1 score are improved by 3.58%, 2.39%, and 1.36%, respectively.&lt;/p></description></item><item><title>End-to-End Lyrics Recognition with Self-supervised Learning</title><link>https://tony233.netlify.app/project/end-to-end-lyrics-recognition-with-self-supervised-learning/</link><pubDate>Mon, 07 Nov 2022 18:23:45 +0000</pubDate><guid>https://tony233.netlify.app/project/end-to-end-lyrics-recognition-with-self-supervised-learning/</guid><description>&lt;p>Lyrics recognition is an important task in music processing. Although traditional approaches such as the hybrid HMM-TDNN model achieve good performance, studies applying end-to-end models and self-supervised learning (SSL) are limited. In this paper, we first establish an end-to-end baseline for lyrics recognition and then explore the performance of SSL models on the lyrics recognition task. We evaluate a variety of upstream SSL models with different training methods (masked reconstruction, masked prediction, autoregressive reconstruction, and contrastive learning). Our end-to-end self-supervised models, evaluated on the DAMP music dataset, outperform the previous state-of-the-art (SOTA) system by 5.23% on the dev set and 2.4% on the test set, even without a language model trained on a large corpus. Moreover, we investigate the effect of background music on the performance of self-supervised learning models and conclude that SSL models cannot extract features efficiently in the presence of background music. Finally, we study the out-of-domain generalization ability of the SSL features, given that these models were not trained on music datasets.&lt;/p></description></item></channel></rss>