Accelerated and Deep Expectation-Maximization Method for Quantized Linear Regression

Speaker: Mingjie Shao (邵明杰), Excellent Young Associate Research Fellow
Time: 11:00–11:30, April 9, 2025   Venue: Room N204, South Building, Academy of Mathematics and Systems Science

【Abstract】This talk considers parameter estimation from quantized data, with a particular focus on quantized linear regression (QLR). QLR arises in various domains, including signal processing, data analysis, and wireless communication. Our primary objective is to study maximum-likelihood (ML) estimation for QLR and the expectation-maximization (EM) algorithm used to solve it. We first investigate the convergence rate of the EM algorithm for the QLR problem. By establishing a link between EM and the proximal gradient method, we gain insight into the convergence behavior; in particular, we show how system parameters influence the rate at which EM converges. This understanding paves the way for novel accelerated and/or inexact EM schemes, and we present convergence-rate results that validate their efficacy. Furthermore, we introduce a deep EM algorithm, which employs an efficient structured deep neural network built on the principles of EM. By integrating deep learning into the EM framework, we aim to improve both estimation performance and computational efficiency. Simulation results demonstrate that the proposed algorithms are more efficient than the standard EM algorithm.
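To make the setting concrete, below is a minimal sketch of a plain EM iteration for one special case of QLR: 1-bit quantization with known Gaussian noise, y = sign(Ax + v), v ~ N(0, sigma^2 I). The talk treats general quantized measurements and accelerated/deep variants; the function names, problem sizes, and constants here are illustrative assumptions, not the speaker's implementation.

import numpy as np
from scipy.stats import norm

def em_1bit_qlr(A, y, sigma, num_iters=100, x0=None):
    """Plain EM sketch for the assumed model y = sign(A x + v), v ~ N(0, sigma^2 I)."""
    _, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    A_pinv = np.linalg.pinv(A)               # reused by every M-step
    for _ in range(num_iters):
        mu = A @ x                            # current mean of the latent z = A x + v
        t = y * mu / sigma
        # E-step: conditional mean of the truncated Gaussian z_i given the sign y_i,
        #   E[z_i | y_i] = mu_i + y_i * sigma * phi(t_i) / Phi(t_i)
        z_hat = mu + y * sigma * norm.pdf(t) / np.clip(norm.cdf(t), 1e-12, None)
        # M-step: least-squares fit of x to the imputed measurements
        x = A_pinv @ z_hat
    return x

# Toy usage (sizes and noise level are illustrative)
rng = np.random.default_rng(0)
m, n, sigma = 500, 20, 0.5
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = np.sign(A @ x_true + sigma * rng.standard_normal(m))
x_hat = em_1bit_qlr(A, y, sigma)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

In this special case each EM step reduces to a truncated-Gaussian mean imputation followed by a least-squares fit, which makes the per-iteration cost easy to see; the accelerated and deep EM schemes discussed in the talk build on this basic iteration.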


【Speaker Bio】Mingjie Shao is an Excellent Young Associate Research Fellow at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences. His research interests include statistical signal processing and machine learning, optimization methods, and wireless communications. He has published more than 40 papers in top signal processing journals such as IEEE TSP and IEEE JSTSP and in flagship conferences such as IEEE ICASSP, and several of his papers have been listed among the journals' "Top 50 Popular Articles."