Prof. Geoffrey Ye Li
Title: Deep Learning in Wireless Communications
Abstract: It has been demonstrated recently that deep learning (DL) has great potential to break the bottlenecks of conventional communication systems. In this talk, we present our recent work on DL for wireless communications, including physical layer processing and resource allocation.
DL can improve the performance of each individual (traditional) block in a conventional communication system or jointly optimize the whole transmitter or receiver. We can therefore categorize the applications of DL in physical layer communications into those with and without block-processing structures. For DL-based communication systems with block structures, we present joint channel estimation and signal detection based on a fully connected deep neural network, as well as model-driven DL for signal detection. For those without block structures, we describe our recent endeavors in developing end-to-end learning communication systems with the help of deep reinforcement learning (DRL) and generative adversarial networks (GANs).
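As a rough, hypothetical illustration of the first idea (not the speakers' actual architecture), the sketch below trains a tiny fully connected network to detect a BPSK symbol directly from one pilot observation and one data observation, folding channel estimation and detection into a single learned block. The scalar real channel, noise level, and network size are all invented for this demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a scalar real fading channel h, one pilot and one data symbol.
# y_pilot = h + noise,  y_data = h * x + noise,  x in {-1, +1} (BPSK).
# A fully connected net maps [y_pilot, y_data] -> x_hat, i.e. it performs
# channel estimation and detection jointly, without an explicit h-estimate.
N = 2000
h = rng.standard_normal(N)
x = rng.choice([-1.0, 1.0], size=N)
sigma = 0.3
y_pilot = h + sigma * rng.standard_normal(N)
y_data = h * x + sigma * rng.standard_normal(N)
X = np.stack([y_pilot, y_data], axis=1)   # (N, 2) network input

# One-hidden-layer MLP trained with plain full-batch gradient descent.
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(300):
    H = np.tanh(X @ W1 + b1)              # hidden activations
    out = (H @ W2 + b2).ravel()           # soft symbol estimate
    err = out - x
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the MSE loss.
    g_out = (2.0 / N) * err[:, None]
    gW2 = H.T @ g_out; gb2 = g_out.sum(0)
    gH = g_out @ W2.T * (1.0 - H ** 2)
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The network never forms an explicit channel estimate; it learns the joint mapping from raw observations to symbols, which is the essence of replacing the estimation-plus-detection chain with one DNN.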
Judicious resource (spectrum, power, etc.) allocation can significantly improve the efficiency of wireless networks. The traditional wisdom is to explicitly formulate resource allocation as an optimization problem and then exploit mathematical programming to solve it to a certain level of optimality. Deep learning represents a promising alternative due to its remarkable power to leverage data for problem solving: it can help solve optimization problems for resource allocation or be used for resource allocation directly. We will first present our research results on using deep learning to reduce the complexity of mixed-integer non-linear programming (MINLP). We will then discuss how to use deep reinforcement learning directly for wireless resource allocation, with application to vehicular networks.
Bio: Dr. Geoffrey Ye Li is the Chair Professor in Wireless Systems in the Department of EEE at Imperial College London. Before joining Imperial College London in 2020, he was a professor at the Georgia Institute of Technology, GA, USA, for 20 years and a Principal Technical Staff Member with AT&T (Bell) Labs – Research in New Jersey, USA, for around 5 years. He currently focuses on machine learning and statistical signal processing for wireless communications. His research topics over the past two decades include machine learning for wireless signal detection and resource allocation, cognitive radios, cross-layer optimisation for spectrum- and energy-efficient wireless networks, OFDM and MIMO techniques for wireless systems, and blind signal processing.
Dr. Geoffrey Ye Li was elected an IEEE Fellow in 2005 for his contributions to signal processing for wireless communications. He has won several prestigious awards from the IEEE Signal Processing Society (Donald G. Fink Overview Paper Award in 2017), the IEEE Vehicular Technology Society (James Evans Avant Garde Award in 2013 and Jack Neubauer Memorial Award in 2014), and the IEEE Communications Society (Stephen O. Rice Prize Paper Award in 2013, Award for Advances in Communication in 2017, and Edwin Howard Armstrong Achievement Award in 2019). He also received the 2015 Distinguished ECE Faculty Achievement Award from Georgia Tech. He has been recognised as a Highly Cited Researcher by Thomson Reuters almost every year.
Prof. Ahmed Alkhateeb
Title: Deep Learning for MIMO Systems in 5G and Beyond: Enabling Scalability, Mobility, and Reliability
Abstract: Millimeter-wave (mmWave) and massive MIMO are key enabling technologies for 5G and beyond. Scaling up the number of antennas, however, is subject to critical challenges such as the large training overhead associated with estimating the channels and the high sensitivity to blockages. These challenges make it difficult for mmWave and massive MIMO systems to support applications such as virtual/augmented reality and vehicular communications, which have high mobility and strict reliability constraints. The first part of this talk will present how deep learning provides a promising solution to these problems through the concept of channel mapping, which has several interesting applications, such as predicting the downlink channels directly from the uplink channels in FDD massive MIMO or predicting the mmWave beams using the sub-6GHz channels. Further, this solution can enhance system reliability through, for example, future blockage prediction and proactive hand-off. The second part of the talk will explore the potential of leveraging machine/deep learning tools for building environment- and hardware-aware beamforming and measurement codebooks. Developing codebooks that adapt to the environment geometry, hardware, and user distribution can (i) reduce the beam training overhead by focusing on the important directions in space, (ii) relax the calibration requirements for large antenna arrays, and (iii) improve the beamforming performance in scenarios with non-line-of-sight links or non-stationary channels, among other interesting gains.
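To make the channel-mapping concept concrete, here is a small, self-contained sketch (an assumption-laden toy, not the speaker's dataset or model): a hypothetical single-path channel at two normalized carrier frequencies is determined by the same user position, so a mapping from the observed channel to the unobserved one exists and can be learned from data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-path channels at two normalized carrier frequencies:
# the user position d determines both the "uplink" and the "downlink"
# channel, so an uplink-to-downlink mapping exists and is learnable.
f_up, f_dn = 0.4, 0.5
d = rng.uniform(0.0, 1.0, size=3000)
h_up = np.exp(2j * np.pi * f_up * d)     # observed channel
h_dn = np.exp(2j * np.pi * f_dn * d)     # channel to predict
X = np.stack([h_up.real, h_up.imag], axis=1)
Y = np.stack([h_dn.real, h_dn.imag], axis=1)

# Small fully connected network trained by full-batch gradient descent.
W1 = 0.5 * rng.standard_normal((2, 32)); b1 = np.zeros(32)
W2 = 0.5 * rng.standard_normal((32, 2)); b2 = np.zeros(2)
lr, losses = 0.1, []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2                       # predicted downlink channel
    E = P - Y
    losses.append(float(np.mean(E ** 2)))
    G = (2.0 / len(X)) * E                # backprop of the MSE loss
    gW2 = H.T @ G; gb2 = G.sum(0)
    gH = G @ W2.T * (1.0 - H ** 2)
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"mapping loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The same logic underlies the applications named above: whenever two quantities (uplink/downlink channels, sub-6GHz channels and mmWave beams) are both functions of the propagation environment, one can in principle be predicted from the other.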
Bio: Ahmed Alkhateeb received his B.S. degree (distinction with honor) and M.S. degree in Electrical Engineering from Cairo University, Egypt, in 2008 and 2012, and his Ph.D. degree in Electrical Engineering from The University of Texas at Austin, USA, in August 2016, under the supervision of Prof. Robert W. Heath. From September 2016 to December 2017, he was a Wireless Communications Researcher at the Connectivity Lab, Facebook, in Menlo Park, CA. He joined Arizona State University (ASU) in Spring 2018, where he is currently an Assistant Professor in the School of Electrical, Computer and Energy Engineering. He has held R&D internships at FutureWei Technologies (Huawei) in Chicago, IL, and Samsung Research America (SRA) in Dallas, TX. His research interests are in the broad areas of wireless communications, communication theory, signal processing, machine learning, and applied math. Dr. Alkhateeb is the recipient of the 2012 MCD Fellowship from The University of Texas at Austin and the 2016 IEEE Signal Processing Society Young Author Best Paper Award for his work on hybrid precoding and channel estimation in millimeter wave communication systems.
Prof. Sundeep Rangan
Abstract: Communication in the millimeter wave (mmWave) bands offers the potential for massive data rates at low latencies, but is fraught with challenges, including high device power consumption, complex propagation, rapid channel dynamics, and the need to support beam tracking. At the same time, the computational and storage capabilities of network nodes and devices are rapidly increasing, making available a huge amount of site-specific or device-specific data that can be exploited by machine learning methods. This talk will review some recent work on data-driven ML methods for mmWave communication, including neural-network generative channel models and LSTM-based beam tracking. We will also discuss ongoing work on using mmWave to enable mobile AI in computational offloading for robotics and mobile visual perception systems.
Bio: Dr. Rangan received the B.A.Sc. at the University of Waterloo, Canada, and the M.Sc. and Ph.D. at the University of California, Berkeley, all in Electrical Engineering. He has held postdoctoral appointments at the University of Michigan, Ann Arbor, and Bell Labs. In 2000, he co-founded (with four others) Flarion Technologies, a spin-off of Bell Labs that developed Flash-OFDM, the first cellular OFDM data system and a precursor to 4G cellular systems including LTE and WiMAX. In 2006, Flarion was acquired by Qualcomm Technologies. Dr. Rangan was a Director of Engineering at Qualcomm involved in OFDM infrastructure products. He joined the ECE department at NYU Tandon (formerly NYU Polytechnic) in 2010. He is a Fellow of the IEEE and the Associate Director of NYU WIRELESS, an industry-academic research center on next-generation wireless systems.
Prof. Walid Saad
Date: Friday, March 12, 2021
Time: 9:00 AM (CST; UTC -6)
Title: Brainstorming Generative Adversarial Networks (BGANs): Framework and Application to Wireless Networks
Abstract: Due to major communication, privacy, and scalability challenges stemming from the emergence of large-scale Internet of Things services, machine learning is witnessing a major departure from traditional centralized cloud architectures toward a distributed machine learning (ML) paradigm in which data is dispersed and processed across multiple edge devices. A prime example of this emerging distributed ML paradigm is Google’s renowned federated learning framework. Despite the tremendous recent interest in distributed ML, remarkably, prior work in the area remains largely focused on the development of distributed ML algorithms for inference and classification tasks. In contrast, in this talk, we introduce the novel framework of brainstorming generative adversarial networks (BGANs), which constitutes one of the first implementations of distributed, multi-agent GAN models that do not rely on a centralized parameter server. We show how BGAN allows multiple agents to gain information from one another, in a fully distributed manner, without sharing their real datasets but by “brainstorming” their generated data samples. We then demonstrate the higher accuracy and scalability of BGAN compared to the state of the art through extensive experiments. We then illustrate how BGAN can be used for analyzing a simple but meaningful millimeter wave channel modeling problem in wireless networks that use unmanned aerial vehicles (UAVs). If time permits, we will also discuss some other applications of the general concept of a GAN in the context of low-latency wireless resource allocation problems.
Bio: Walid Saad received his Ph.D. degree from the University of Oslo in 2010. Currently, he is an Associate Professor at the Department of Electrical and Computer Engineering at Virginia Tech, where he leads the Network Science, Wireless, and Security (NetSciWiS) laboratory within the Wireless@VT research group. His research interests include wireless networks, machine learning, game theory, cybersecurity, unmanned aerial vehicles, and cyber-physical systems. Dr. Saad is a Fellow of the IEEE and an IEEE Distinguished Lecturer. He is also the recipient of the NSF CAREER award in 2013, the AFOSR summer faculty fellowship in 2014, and the Young Investigator Award from the Office of Naval Research (ONR) in 2015. He is the author or co-author of papers that received six conference best paper awards, at WiOpt in 2009, ICIMP in 2010, IEEE WCNC in 2012, IEEE PIMRC in 2015, IEEE SmartGridComm in 2015, and EuCNC in 2017. He is the recipient of the 2015 Fred W. Ellersick Prize from the IEEE Communications Society, the 2017 IEEE ComSoc Best Young Professional in Academia award, and the 2018 IEEE ComSoc Radio Communications Committee Early Achievement Award. From 2015 to 2017, Dr. Saad was the Stephen O. Lane Junior Faculty Fellow at Virginia Tech and, in 2017, he was named College of Engineering Faculty Fellow. He currently serves as an editor for the IEEE Transactions on Wireless Communications, IEEE Transactions on Communications, IEEE Transactions on Mobile Computing, and IEEE Transactions on Information Forensics and Security.
Dr. Jakob Hoydis
Title: Recent Results on End-to-end Learning for the Physical and Medium Access Layers
Abstract: In this talk, I will discuss two recent papers from our department on the topic of machine learning for wireless communications. The first deals with end-to-end learning over a SISO frequency-selective fading channel and shows that a fully convolutional neural receiver, together with learned geometric shaping at the transmitter, enables pilotless transmissions with a coded-BER performance similar to that of a pilot-based baseline. Such an approach could hence be an interesting component for beyond-5G communication systems, as it removes the need for demodulation reference signals (DMRSs) and the associated control overhead. The second paper explores the idea of protocol learning for the MAC layer and, in particular, joint learning of optimal signalling and wireless channel access. We use multi-agent reinforcement learning (MARL) techniques to study whether radios can learn to use a pre-given signalling scheme to develop a channel-access policy, as an intermediate step towards evolving their own. The papers are available at https://arxiv.org/abs/2009.05261 and https://arxiv.org/abs/2007.09948.
Bio: Jakob Hoydis is currently head of a research department at Nokia Bell Labs, France, focusing on radio systems and artificial intelligence. Prior to this, he was co-founder and CTO of the social network SPRAED and worked for Alcatel-Lucent Bell Labs in Stuttgart, Germany. He received the diploma degree (Dipl.-Ing.) in electrical engineering and information technology from RWTH Aachen University, Germany, and the Ph.D. degree from Supélec, Gif-sur-Yvette, France, in 2008 and 2012, respectively. His research interests are in the areas of machine learning, cloud computing, SDR, large random matrix theory, information theory, signal processing, and their applications to wireless communications. He is the recipient of the 2019 VTG IDE Johann-Philipp-Reis Prize, the 2019 IEEE SEE Glavieux Prize, the 2018 IEEE Marconi Prize Paper Award, the 2015 IEEE Leonard G. Abraham Prize, the IEEE WCNC 2014 best paper award, the 2013 VDE ITG Förderpreis Award, and the 2012 Publication Prize of the Supélec Foundation. He has received the 2018 Nokia AI Innovation Award, as well as the 2018 and 2019 Nokia France Top Inventor Awards. He is a co-author of the textbook “Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency” (2017). He is currently chair of the IEEE ComSoc Emerging Technology Initiative on Machine Learning, an Editor of the IEEE Transactions on Wireless Communications, and an Area Editor of the IEEE Journal on Selected Areas in Communications Series on Machine Learning in Communications and Networks.
Prof. Cong Shen
Date: Thursday, November 5, 2020
Time: 11:00 AM (CST; UTC -6)
Title: Flying under the radar: federated learning over noisy channels
Abstract: Does Federated Learning (FL) work when both uplink and downlink communications have errors? How much communication “noise” can FL handle, and what is its impact on learning performance? We attempt to address these practically important questions by explicitly incorporating both uplink and downlink noisy channels in the FL pipeline. We present a convergence analysis of FL over simultaneous uplink and downlink noisy channels, and characterize sufficient conditions for FL to maintain the same convergence rate scaling as in the ideal case of no communication error. The analysis reveals that, in order to maintain the O(1/T) convergence rate of FedAvg with perfect communications, the uplink and downlink signal-to-noise ratios (SNRs) should be controlled so that they scale as O(t^2), where t is the index of the communication round. This result is very general, and we show two examples of communication designs that reflect this principle: (1) uplink and downlink model quantization; (2) power control for analog aggregation. Lastly, if time permits, I will also talk about our recent work on handling device heterogeneity by optimizing the server aggregation.
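A minimal simulation of this principle (a toy quadratic FL problem, not the paper's actual setup) is sketched below: noisy model exchanges whose noise power decays as 1/t^2, i.e., an SNR growing as O(t^2), still let FedAvg approach the global optimum.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy FedAvg on K quadratic local objectives f_k(w) = 0.5*||w - c_k||^2.
# Gaussian noise is added to both the downlink (server -> device) and the
# uplink (device -> server) model exchange; its power is reduced every round
# so the SNR grows like O(t^2), mirroring the condition stated in the talk.
K, dim, rounds, lr = 10, 5, 200, 0.1
C = rng.standard_normal((K, dim))      # local optima; global optimum = mean
w = np.zeros(dim)
w_star = C.mean(axis=0)
for t in range(1, rounds + 1):
    noise_std = 1.0 / t                # noise power ~ 1/t^2  =>  SNR ~ t^2
    updates = []
    for k in range(K):
        wk = w + noise_std * rng.standard_normal(dim)   # noisy downlink
        for _ in range(5):                              # local SGD steps
            wk -= lr * (wk - C[k])
        updates.append(wk + noise_std * rng.standard_normal(dim))  # noisy uplink
    w = np.mean(updates, axis=0)       # server aggregation (FedAvg)

err = float(np.linalg.norm(w - w_star))
print(f"distance to global optimum: {err:.4f}")
```

If `noise_std` is instead held constant, the iterates hover at a noise floor rather than converging, which is exactly the failure mode the SNR-scaling condition rules out.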
Bio: Cong Shen received his B.S. and M.S. degrees, in 2002 and 2004 respectively, from the Department of Electronic Engineering, Tsinghua University, China. He obtained the Ph.D. degree from the Electrical Engineering Department, University of California, Los Angeles (UCLA), in 2009. Prior to joining the Electrical and Computer Engineering Department at the University of Virginia, Dr. Shen was a professor in the School of Information Science and Technology at the University of Science and Technology of China (USTC). He also has extensive industry experience, having worked for Qualcomm Research, SpiderCloud Wireless, Silvus Technologies, and Xsense.ai in various full-time and consulting roles. His general research interests are in the areas of communication theory, wireless communications, and machine learning. He was the recipient of the “Excellent Paper Award” at the 9th International Conference on Ubiquitous and Future Networks (ICUFN 2017). Currently, he serves as an editor for the IEEE Transactions on Wireless Communications and the IEEE Wireless Communications Letters.
Bio: Prof. Deniz Gündüz received the B.S. degree in electrical and electronics engineering from METU, Ankara, Turkey, in 2002, and the M.S. and Ph.D. degrees in electrical engineering from NYU Polytechnic School of Engineering (formerly Polytechnic University), Brooklyn, NY, in 2004 and 2007, respectively. He is currently a Reader (Associate Professor) in the Electrical and Electronic Engineering Department of Imperial College London, where he leads the Information Processing and Communications Lab. He is also the Deputy Head of the Intelligent Systems and Networks Group. Previously, he was a Research Associate at CTTC, Barcelona, Spain, a Consulting Assistant Professor at the Department of Electrical Engineering, Stanford University, and a postdoctoral Research Associate at the Department of Electrical Engineering, Princeton University. He also held a visiting research collaborator position at Princeton University from October 2009 until November 2011. Dr. Gündüz is the recipient of the 2017 Early Achievement Award of the IEEE Communications Society – Communication Theory Technical Committee (CTTC), a Starting Grant of the European Research Council (ERC) in 2015, the 2014 IEEE Communications Society Best Young Researcher Award for the Europe, Middle East, and Africa Region, and the 2008 Alexander Hessel Award of the Electrical and Computer Engineering Department of New York University Polytechnic School of Engineering for the best Ph.D. dissertation. He is also a recipient of Best Paper Awards at the 2016 IEEE Wireless Communications and Networking Conference (WCNC) and the 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), and of Best Student Paper Awards at the 2018 IEEE WCNC and the 2007 International Symposium on Information Theory (ISIT).
Title: Resource Management in Wireless Networks through the Lens of Information Theory and Machine Learning
Abstract: In this talk, we present algorithms for radio resource management (RRM) in ultra-dense wireless networks, where a group of transmit points (TPs) intend to serve multiple user equipment devices (UEs) using the same wireless resource. We start with a centralized RRM algorithm, which is derived based on the information-theoretic optimality condition for treating interference as noise. We then introduce a scalable distributed RRM approach using multi-agent deep reinforcement learning (RL). We equip each TP in the network with a deep RL agent, which receives partial delayed observations from its own associated UEs, while also exchanging observations with its neighboring agents. Based on these observations, each TP decides on which user to serve and what transmit power level to use at each scheduling interval. We finally discuss how graph neural network (GNN) architectures can be leveraged to exploit the underlying network topology in order to learn power control policies in an unsupervised manner.
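The last step, learning power control in an unsupervised manner, can be illustrated with a dependency-free toy (the link count, gains, and finite-difference training are assumptions of this sketch, not the talk's GNN method): transmit powers are optimized by ascending the sum-rate objective itself rather than by fitting labels.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy unsupervised power control for K mutually interfering links:
# maximize the sum-rate directly (no labeled solutions needed). Gradients
# are taken by finite differences to keep the sketch dependency-free.
K, noise = 4, 0.1
G = rng.uniform(0.05, 0.3, size=(K, K))             # cross-link gains
np.fill_diagonal(G, rng.uniform(0.8, 1.2, size=K))  # direct-link gains

def sum_rate(theta):
    p = 1.0 / (1.0 + np.exp(-theta))       # powers in (0, 1) via sigmoid
    signal = np.diag(G) * p
    interference = G @ p - signal          # received power from other links
    return float(np.sum(np.log2(1.0 + signal / (noise + interference))))

theta = np.zeros(K)
start = sum_rate(theta)
for _ in range(300):
    grad, eps = np.zeros(K), 1e-4
    for i in range(K):                     # finite-difference gradient
        e = np.zeros(K); e[i] = eps
        grad[i] = (sum_rate(theta + e) - sum_rate(theta - e)) / (2 * eps)
    theta += 0.05 * grad                   # gradient ascent on the sum-rate
end = sum_rate(theta)
print(f"sum-rate: {start:.2f} -> {end:.2f} bits/s/Hz")
```

In the learning-based approaches discussed in the talk, a neural network (e.g., a GNN over the interference graph) replaces the raw `theta` vector and backpropagation replaces finite differences, but the unsupervised objective is the same.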
Bio: Navid Naderializadeh is a Research Scientist in the Information & Systems Sciences Lab at HRL Laboratories in Malibu, CA. Prior to that, he was a Research Scientist at Intel Labs in Santa Clara, CA. He received his PhD in Electrical Engineering from the University of Southern California, Los Angeles, CA, in 2016 and his MSc in Electrical and Computer Engineering from Cornell University, Ithaca, NY, in 2014, both under Prof. Salman Avestimehr. His research interests include the development and analysis of model-based and learning-based radio resource allocation algorithms for 5G and beyond. Dr. Naderializadeh ranked first in the Iranian Nationwide University entrance exam in 2007. He was the recipient of the Jacobs Scholarship in 2011. He was selected as a 2015-16 Ming Hsieh Institute Ph.D. Scholar. He was also a finalist in the Shannon Centennial Student Competition at Nokia Bell Labs in 2016.
Sebastian Cammerer
Title: Trainable Communication Systems: From Theory to Practice (and back again)
Abstract: We revisit the fundamental problem of physical layer communications, namely reproducing at one point a message selected at another point, to finally arrive at a trainable system that inherently learns to communicate and adapts to any channel environment. As such, we realize a data-driven system design, based on deep learning algorithms, leading to a universal framework that allows end-to-end optimization of the whole data-link without the need for prior mathematical modeling and analysis. A trainable communication system inherently tolerates, and even exploits, effects that are difficult to model, such as hardware imperfections and channel uncertainties. We show that such systems not only enjoy competitive, and in some cases even superior, performance, but also facilitate a simplified design flow due to their conceptual elegance and, hence, may trigger a paradigm shift in how we design future communication systems. We thus pose the seemingly simple, naive, yet in fact rather complicated and attractive research question: Can we learn to communicate?
The goal of this talk is to provide an introduction to the rapidly growing field of end-to-end learning of physical layer communications. For this, we reinterpret transceiver signal-processing blocks (e.g., quantization, coding, modulation, detection) as neural networks and show that this idea enables data-driven communication systems that perpetually learn and adapt to (m)any environment(s). Further, we show that the practical realization of end-to-end training of communication systems is fundamentally limited by the inaccessibility of the channel gradient. To overcome this major burden, the idea of using generative adversarial networks (GANs) that learn to mimic the actual channel behavior has recently been proposed in the literature. In contrast to handcrafted classical channel modeling, which can never fully capture the real world, GANs promise, in principle, the ability to learn any physical impairment, enabled by the data-driven learning algorithm. We verify the concept of GAN-based autoencoder training in actual over-the-air (OTA) measurements.
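The role of the learned channel model can be distilled into a deliberately simplified sketch: here a least-squares linear fit stands in for the GAN, but the logic is the same: probe the black-box channel, fit a differentiable surrogate, and push the transmitter's gradients through the surrogate instead of the real channel. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Why a learned channel surrogate helps: the transmitter cannot backpropagate
# through the real channel (treated here as a black box), but it can fit a
# differentiable model to (input, output) samples and train through that.
# A GAN plays the surrogate role in the literature; a least-squares linear
# model stands in for it in this sketch.
def real_channel(x):                       # black box: gain + bias + noise
    return 0.7 * x - 0.2 + 0.05 * rng.standard_normal(np.shape(x))

# Step 1: probe the black-box channel and fit a differentiable surrogate.
x_probe = rng.uniform(-2, 2, size=500)
y_probe = real_channel(x_probe)
A = np.stack([x_probe, np.ones_like(x_probe)], axis=1)
a, b = np.linalg.lstsq(A, y_probe, rcond=None)[0]   # surrogate: y ~ a*x + b

# Step 2: train a transmitter value t so the received signal hits a target,
# using gradients through the surrogate (never through the real channel).
target, t = 1.0, 0.0
for _ in range(200):
    y_hat = a * t + b                      # surrogate prediction
    t -= 0.1 * 2 * (y_hat - target) * a    # gradient step through surrogate
print(f"learned transmit value: {t:.3f}")
```

The quality of the surrogate bounds the quality of the transmitter update, which is why a GAN, able to capture far richer impairments than this linear fit, is the tool of choice in the work described above.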
In the second part of this talk, we show how training of autoencoder-based communication systems on the bit-wise mutual information allows seamless integration with practical bit metric decoding receivers, as well as joint optimization of constellation shaping and labeling. Additionally, we present a fully differentiable neural iterative demapping and decoding structure which achieves significant gains on additive white Gaussian noise channels. Going one step further, we show that careful code design can lead to further performance improvements.
Bio: Sebastian Cammerer received the B.Sc. and M.Sc. degrees (Hons.) in electrical engineering and information technology from the University of Stuttgart, Germany, in 2013 and 2015, respectively, where he is currently pursuing the Ph.D. degree. During his years of study, he worked as a Research Assistant at multiple institutes of the University of Stuttgart. Since 2015, he has been a member of the research staff at the Institute of Telecommunications, University of Stuttgart. His main research topics are channel coding and machine learning for communications. Furthermore, his research interests are in the areas of modulation, parallelized computing for signal processing, and information theory. He is a recipient of the Best Publication Award of the University of Stuttgart 2019, the Anton- und Klara Röser Preis 2016, the Rohde&Schwarz Best Bachelor Award 2015, and the VDE-Preis 2016 for his master thesis.
Prof. Osvaldo Simeone
Title: Learning to learn to communicate
Abstract: The application of supervised learning techniques to the design of the physical layer of a communication link is often impaired by the limited amount of pilot data available for each device, while the use of unsupervised learning is typically limited by the need to carry out a large number of training iterations. In this talk, meta-learning, or learning-to-learn, is introduced as a tool to alleviate these problems. The talk will consider an Internet-of-Things (IoT) scenario in which devices transmit sporadically using short packets with few pilot symbols over a fading channel. The number of pilots is generally insufficient to obtain an accurate estimate of the end-to-end channel, which includes the effects of fading and of the transmission-side distortion. To tackle this problem, pilots from previous IoT transmissions are used as meta-training data in order to train a demodulator that is able to quickly adapt to new end-to-end channel conditions from few pilots. Various state-of-the-art meta-learning schemes are adapted to the problem at hand and evaluated, including MAML, FOMAML, REPTILE, and CAVIA. Both offline and online solutions are developed.
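A compact first-order MAML (FOMAML) sketch for this setting can illustrate the mechanics; the task distribution, noise level, learning rates, and the linear demodulator standing in for a neural one are all assumptions of this toy. Meta-training finds an initialization from which a single gradient step on just four pilots yields a good demodulator.

```python
import numpy as np

rng = np.random.default_rng(5)

# FOMAML sketch for few-pilot demodulation. Each "task" is a channel with a
# phase rotation phi (clustered, so a good shared initialization exists);
# the demodulator is linear in (Re y, Im y) and must recover a BPSK symbol
# after one adaptation step on very few pilots.
def make_task():
    phi = rng.uniform(-np.pi / 6, np.pi / 6)
    def sample(n):
        x = rng.choice([-1.0, 1.0], size=n)
        noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        y = x * np.exp(1j * phi) + noise
        return np.stack([y.real, y.imag], axis=1), x
    return sample

def loss_and_grad(w, X, x):
    err = X @ w - x
    return float(np.mean(err ** 2)), 2 * X.T @ err / len(x)

w = np.zeros(2)                      # meta-parameters of the demodulator
alpha, beta = 0.1, 0.05              # inner / outer learning rates
query_losses = []
for _ in range(2000):
    sample = make_task()
    Xs, xs = sample(4)               # 4 pilots for adaptation
    Xq, xq = sample(32)              # query set for the meta-update
    _, g = loss_and_grad(w, Xs, xs)
    w_adapt = w - alpha * g          # one inner adaptation step
    lq, gq = loss_and_grad(w_adapt, Xq, xq)
    query_losses.append(lq)
    w = w - beta * gq                # first-order meta-update (FOMAML)

early = sum(query_losses[:100]) / 100
late = sum(query_losses[-100:]) / 100
print(f"avg post-adaptation loss: {early:.3f} -> {late:.3f}")
```

Full MAML would differentiate through the inner step; FOMAML drops that second-order term and applies the query gradient directly, which is one of the variants compared in the talk.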
Bio: Osvaldo Simeone is a Professor of Information Engineering with the Centre for Telecommunications Research at the Department of Engineering of King’s College London, where he directs the King’s Communications, Learning and Information Processing lab. He received an M.Sc. degree (with honors) and a Ph.D. degree in information engineering from Politecnico di Milano, Milan, Italy, in 2001 and 2005, respectively. From 2006 to 2017, he was a faculty member of the Electrical and Computer Engineering (ECE) Department at the New Jersey Institute of Technology (NJIT), where he was affiliated with the Center for Wireless Information Processing (CWiP). His research interests include information theory, machine learning, wireless communications, and neuromorphic computing. Dr. Simeone is a co-recipient of the 2019 IEEE Communications Society Best Tutorial Paper Award, the 2018 IEEE Signal Processing Best Paper Award, the 2017 JCN Best Paper Award, the 2015 IEEE Communications Society Best Tutorial Paper Award, and the Best Paper Awards of IEEE SPAWC 2007 and IEEE WRECOM 2007. He was awarded a Consolidator Grant by the European Research Council (ERC) in 2016. His research has been supported by the U.S. NSF, the ERC, the Vienna Science and Technology Fund, as well as by a number of industrial collaborations. He currently serves on the editorial board of the IEEE Signal Processing Magazine and is the vice-chair of the Signal Processing for Communications and Networking Technical Committee of the IEEE Signal Processing Society. He was a Distinguished Lecturer of the IEEE Information Theory Society in 2017 and 2018. Dr. Simeone is a co-author of two monographs, two edited books published by Cambridge University Press, and more than one hundred research journal papers. He is a Fellow of the IET and of the IEEE.