The University of Texas at Austin

2017 Fall Seminar

December 5, 2017, Filed Under: 2017 Fall Seminar, Seminars

Glaring Gaps in Neurally-Inspired Computing

Speaker: Mikko Lipasti, University of Wisconsin Madison

Date: December 5, 2017

November 28, 2017, Filed Under: 2017 Fall Seminar, Seminars

Computer Systems for Neuroscience

Speaker: Abhishek Bhattacharjee, Rutgers University

Date: November 28, 2017

Time: 3:30 pm

Location: POB 2.402

Title: Computer Systems for Neuroscience


Abstract

Computer systems are vital to advancing our understanding of the brain. From embedded chips in brain implants to server systems running large-scale brain modeling frameworks, computer systems help shed light on the link between low-level neuronal activity and the brain’s behavioral and cognitive operation. This talk will examine the challenges facing such systems. We will discuss the extreme energy constraints of hardware used in brain implants, and the challenges posed by the computational and data requirements of large-scale brain modeling software. To address these problems, we will discuss recent results from my lab on augmenting hardware to operate harmoniously with software and even with the underlying biology of these systems. For example, we will show that perceptron-based hardware branch predictors can be co-opted to predict neuronal spiking activity and can guide power management on brain implants. Further, we will show that the virtual memory layer is a performance bottleneck in server systems for brain modeling software, but that intelligent coordination with the OS layer can counteract many of the memory management problems these systems face. Overall, this talk offers techniques that can continue to aid the development of neuroscientific tools.
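As a rough illustration of the branch-predictor analogy, the sketch below applies the classic perceptron predictor update rule to spike prediction, with recent activity (+1 = spike, −1 = no spike) playing the role of branch history. The function names, threshold, and toy spiking pattern are illustrative assumptions, not details of the implant hardware discussed in the talk.

```python
# Sketch: the perceptron branch-predictor update rule, repurposed to predict
# neuronal spiking from recent activity history. The data and thresholds are
# toy illustrations, not details of the implant hardware from the talk.

def predict(weights, history):
    """Signed sum: bias plus dot product of weights with +/-1 history bits."""
    return weights[0] + sum(w * h for w, h in zip(weights[1:], history))

def train(weights, history, spiked, threshold=10):
    """Perceptron rule: update on a misprediction or a low-confidence output."""
    y = predict(weights, history)
    if (y >= 0) != spiked or abs(y) <= threshold:
        t = 1 if spiked else -1
        weights[0] += t
        for i, h in enumerate(history):
            weights[i + 1] += t * h

# Toy pattern: the neuron spikes exactly when it spiked two steps ago.
weights = [0] * 5
history = [1, -1, 1, -1]           # most recent activity first
for _ in range(200):
    spiked = history[1] == 1       # ground truth for this toy pattern
    train(weights, history, spiked)
    history = [1 if spiked else -1] + history[:-1]
```

After training, the predictor’s output is strongly positive when the two-steps-ago bit is set and strongly negative otherwise, mirroring how a perceptron branch predictor gains confidence in a biased branch.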


Speaker Biography

Abhishek Bhattacharjee is an Associate Professor of Computer Science at Rutgers University. He is also a 2017 CV Starr Fellow at the Princeton Neuroscience Institute. His research interests are at the hardware/software interface. Some of the research results from his lab are in widespread commercial use and are implemented in AMD’s latest line of processors and the Linux OS. Abhishek is a recipient of the NSF CAREER award, research awards from Google and VMware, and the Chancellor’s Award for Faculty Excellence in Research at Rutgers.

October 31, 2017, Filed Under: 2017 Fall Seminar, Seminars

Teaching Deployed Data Centers New Tricks

Speaker: Derek Chiou, Microsoft

Date: October 31, 2017    

Time: 3:45 pm

Location: Avaya Auditorium, POB 2.302

Title: Teaching Deployed Data Centers New Tricks


Abstract

The cloud is an area of intense competition and rapid innovation. Cloud companies are highly incentivized to provide useful, performant, and differentiated services rapidly and cost-effectively. In this talk, I will describe Microsoft’s approach to enabling such services using strategically placed reconfigurable logic, discuss how its introduction can fundamentally change our data center architecture, and show some specific uses and their benefits.


Speaker Biography

Derek Chiou is a Partner Architect at Microsoft, where he leads the Azure Cloud Silicon team working on FPGAs and ASICs for data center applications and infrastructure, and a researcher in the Electrical and Computer Engineering Department at The University of Texas at Austin. Until 2016, he was an associate professor at UT. His research areas are novel uses of FPGAs, high-performance computer simulation, rapid system design, computer architecture, parallel computing, Internet router architecture, and network processors. Before going to UT, Dr. Chiou was a system architect and led the performance modeling team at Avici Systems, a manufacturer of terabit core routers. Dr. Chiou received his Ph.D., S.M., and S.B. degrees in Electrical Engineering and Computer Science from MIT.

October 31, 2017, Filed Under: 2017 Fall Seminar, Seminars

Maximizing Server Efficiency: from microarchitecture to machine-learning accelerators

Speaker: Mike Ferdman, Stony Brook University

Date: October 31, 2017

Time: 2:30 pm

Location: POB 2.402

Title: Maximizing Server Efficiency: from microarchitecture to machine-learning accelerators


Abstract

Deep convolutional neural networks (CNNs) are rapidly becoming the dominant approach to computer vision and a major component of many other pervasive machine learning tasks, such as speech recognition, natural language processing, and fraud detection. As a result, accelerators for efficiently evaluating CNNs are rapidly growing in popularity. Our work in this area focuses on two key challenges: minimizing off-chip data transfer and maximizing the utilization of the computation units. In this talk, I will present an overview of my research on understanding and improving the efficiency of server systems, and dive deeper into our recent results on FPGA-based server accelerators for machine learning.
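To make the off-chip data transfer challenge concrete, here is a back-of-the-envelope traffic model (my own illustration, not a result from the talk): when an accelerator computes the output channels of a convolution layer in tiles, the input feature maps may be re-read from off-chip memory once per tile, so larger tiles reduce total traffic. All parameter names and the example layer shape are assumptions.

```python
# Illustrative (not from the talk): rough off-chip traffic model for one
# convolution layer whose output channels are computed in tiles, with on-chip
# buffers holding one tile's weights and partial outputs.

def offchip_bytes(H, W, C_in, C_out, K, tile_out, bytes_per=4):
    """Estimate bytes moved off-chip: inputs re-read once per output-channel
    tile, weights read once in total, outputs written once."""
    n_tiles = -(-C_out // tile_out)                  # ceiling division
    inputs = n_tiles * H * W * C_in * bytes_per      # re-read per tile
    weights = C_out * C_in * K * K * bytes_per       # each weight read once
    outputs = H * W * C_out * bytes_per              # each output written once
    return inputs + weights + outputs

# Larger output-channel tiles mean fewer passes over the input feature maps.
small_tiles = offchip_bytes(56, 56, 64, 256, K=3, tile_out=16)
large_tiles = offchip_bytes(56, 56, 64, 256, K=3, tile_out=128)
```

Under this simple model, doubling the tile size roughly halves the input re-read traffic, which is the kind of trade-off (buffer capacity versus off-chip bandwidth) that accelerator dataflow design must balance.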


Speaker Biography

Mike Ferdman is an Assistant Professor of Computer Science at Stony Brook University, where he co-directs the Computer Architecture Stony Brook (COMPAS) Lab. His research interests are in the area of computer architecture, with particular emphasis on the server computing stack. His current projects center on FPGA accelerators for machine learning, emerging memory technologies, and speculative micro-architectural techniques. Mike received a BS in Computer Science, and BS, MS, and PhD in Electrical and Computer Engineering from Carnegie Mellon University.

