Confirmed Speakers

Distinguished Speakers

Stephen Boyd

Samsung Professor
School of Engineering
Stanford University, USA
IEEE Fellow, Member of the National Academy of Engineering of USA
Slides

Title: Convex Optimization: From embedded real-time to large-scale distributed

Abstract:  Convex optimization has emerged as a useful tool for applications that include data analysis and model fitting, resource allocation, engineering design, network design and optimization, finance, and control and signal processing.

After an overview, the talk will focus on two extremes: real-time embedded convex optimization, and distributed convex optimization. Code generation can be used to produce extremely efficient and reliable solvers for small problems that can execute in milliseconds or microseconds and are ideal for embedding in real-time systems. At the other extreme, we describe methods for large-scale distributed optimization, which coordinate many solvers to solve enormous problems.
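As an illustration of the distributed extreme described in the abstract, the following is a minimal consensus-ADMM sketch in which several "agents", each holding a private quadratic objective, coordinate through a shared consensus variable to find the common minimizer. This is a toy model under simplifying assumptions, not Boyd's generated solvers; all names are made up.

```python
import numpy as np

def consensus_admm(local_targets, rho=1.0, iters=100):
    """Consensus ADMM: agent i privately minimizes 0.5*(x - a_i)^2;
    together the agents agree on the global minimizer, the mean of a_i."""
    a = np.asarray(local_targets, dtype=float)
    x = np.zeros_like(a)   # local copies, one per agent
    u = np.zeros_like(a)   # scaled dual variables
    z = 0.0                # shared consensus variable
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # local prox steps (parallelizable)
        z = np.mean(x + u)                     # gather: average to reach consensus
        u = u + x - z                          # dual updates
    return z

print(consensus_admm([1.0, 2.0, 6.0]))  # converges to the mean, approximately 3.0
```

Each x-update touches only one agent's data, which is what lets many solvers run in parallel and coordinate only through the cheap averaging step.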

Bio:  Stephen P. Boyd is the Samsung Professor of Engineering and Professor of Electrical Engineering in the Information Systems Laboratory at Stanford University. He has courtesy appointments in the Department of Management Science and Engineering and the Department of Computer Science, and is a member of the Institute for Computational and Mathematical Engineering. His current research focus is on convex optimization applications in control, signal processing, and circuit design.

Professor Boyd received an AB degree in Mathematics, summa cum laude, from Harvard University in 1980, and a PhD in EECS from U. C. Berkeley in 1985. In 1985 he joined the faculty of Stanford's Electrical Engineering Department. He has held visiting professor positions at Katholieke University (Leuven), McGill University (Montreal), Ecole Polytechnique Federale (Lausanne), Tsinghua University (Beijing), Universite Paul Sabatier (Toulouse), Royal Institute of Technology (Stockholm), Kyoto University, Harbin Institute of Technology, NYU, MIT, UC Berkeley, and CUHK-Shenzhen. He holds an honorary doctorate from the Royal Institute of Technology (KTH), Stockholm.

Inderjit S. Dhillon

Gottesman Family Centennial Professor
Departments of Computer Science & Mathematics
University of Texas at Austin, USA
IEEE Fellow, SIAM Fellow, ACM Fellow
Slides

Title: Proximal Newton Methods for Large-Scale Machine Learning

Abstract:  A common task in many large-scale machine learning problems is to optimize a sum of two functions: a smooth and convex loss function plus a non-smooth regularizer. To solve such large-scale problems, the predominant methods of choice have been various first-order methods, such as proximal gradient algorithms. For example, several first-order methods have been proposed for the sparse inverse covariance estimation problem, which requires optimization of a smooth log-determinant convex objective with a non-smooth L1-regularizer. In this talk, I will present a second-order method called the proximal Newton method. To increase the efficiency of this second-order method, we exploit structure in the problem, in particular its geometry, by restricting the search for the Newton direction to a smaller "subspace". To make the exposition concrete, I will discuss in depth the inverse covariance estimation problem, where we have developed a memory-efficient method that greatly outperforms first-order methods and can solve million-dimensional problems (with a trillion parameters) on one computer. The proximal Newton method can be profitably applied to other modern machine learning problems, such as low-rank matrix completion, multi-task learning, and robust PCA.

Joint work with C. J. Hsieh, M. Sustik, P. Ravikumar, R. Poldrack, S. Becker and P. Olsen.
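To make the smooth-plus-nonsmooth setting concrete, here is a minimal sketch of the first-order proximal gradient (ISTA) baseline that the talk contrasts with proximal Newton, applied to a tiny L1-regularized least-squares problem. This is only an illustration of the problem class, not the speaker's method; all names are made up.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox operator of t*||.||_1, the non-smooth part of the objective."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, iters=500):
    """ISTA for the lasso: min 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = A.T @ (A @ x - b)               # gradient of the smooth loss
        x = soft_threshold(x - grad / L, lam / L)  # gradient step, then prox
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = A @ np.array([2.0, 0.0])                   # true signal is sparse
x_hat = proximal_gradient(A, b, lam=0.1)       # recovers a sparse solution
```

A proximal Newton method would replace the gradient step with a scaled step using (approximate) second-order information, solving a lasso-like subproblem per iteration.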

Bio:  Inderjit Dhillon is the Gottesman Family Centennial Professor of Computer Science and Mathematics at UT Austin, where he is also the Director of the ICES Center for Big Data Analytics. His main research interests are in big data, machine learning, network analysis, linear algebra, and optimization. He received his B.Tech. degree from IIT Bombay and his Ph.D. from UC Berkeley. Inderjit has received several prestigious awards, including the ICES Distinguished Research Award, the SIAM Outstanding Paper Prize, the Moncrief Grand Challenge Award, the SIAM Linear Algebra Prize, the University Research Excellence Award, and the NSF CAREER Award. He has published over 120 journal and conference papers, and has served on the editorial boards of the Journal of Machine Learning Research, the IEEE Transactions on Pattern Analysis and Machine Intelligence, Foundations and Trends in Machine Learning, and the SIAM Journal on Matrix Analysis and Applications. Inderjit is an IEEE Fellow, a SIAM Fellow, and an ACM Fellow.

Weinan E

Professor
Department of Mathematics and Program in Applied and Computational Mathematics
Princeton University, USA
SIAM Fellow, Member of the Chinese Academy of Sciences

Title: The Structure of Unstructured Data

Abstract:  TBA

Bio:  Weinan E received his Ph.D. from UCLA in 1989. After being a visiting member at the Courant Institute of NYU and the Institute for Advanced Study in Princeton, he joined the faculty at NYU in 1994. He is now a professor of mathematics at Princeton University, a position he has held since 1999. He is also a Changjiang professor at Peking University and an associated faculty member of the Department of Operations Research and Financial Engineering at Princeton University.

Weinan E's research interest is in multiscale and stochastic modeling. His work covers issues that include the mathematical foundation of stochastic and multiscale modeling, the design and analysis of algorithms, and applications to problems in various disciplines of science and engineering, with particular emphasis on the modeling of rare events, materials science, and fluid dynamics. In the last few years, his interest has shifted to multiscale models of unstructured data and the associated algorithms.

Weinan E is the recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), the Feng Kang Prize, the SIAM R. E. Kleinman Prize, the SIAM von Karman Prize, and the ICIAM Collatz Prize. He is a member of the Chinese Academy of Sciences, a fellow of the American Mathematical Society, a SIAM fellow, and a fellow of the Institute of Physics.

Wen Gao

Professor
Department of Computer Science and Technology
Peking University, China
IEEE Fellow, ACM Fellow, Member of Chinese Academy of Engineering
Slides

Title: Multimedia Big Data Processing for Intelligent Cities

Abstract:  Multimedia big data is a key foundation of intelligent cities; without a well-designed multimedia big data system, it is impossible to reach that goal. There are at least three parts to an intelligent city: the sensing network, the multimedia data center, and the decision procedure. The sensing network is a system that collects multimedia data in real time, such as a surveillance camera network which captures traffic images and video, an Electronic Toll Collection network which captures car traffic information, a Metro Card network which captures people traffic information, a Medicare Card network which captures information about the ill population, etc. The multimedia big data center is an infrastructure for multimedia big data storage and processing, with two major features: making the huge volume of multimedia data converge into big data for search and analysis, and giving solutions or suggestions to decision makers by machine learning. The decision procedure is the most important part of an intelligent city: a procedure for decision makers to make a decision solving the current problem or to make a good plan for improving the city environment in the future. In this talk, all three parts of intelligent cities will be discussed, and technologies related to the first two parts will be reviewed.

Bio:  Wen Gao is Professor in Computer Science at Peking University and Vice President of the National Natural Science Foundation of China. Before joining Peking University in 2006, he was Professor at the Institute of Computing Technology, Chinese Academy of Sciences (1996-2005) and Professor at Harbin Institute of Technology (1991-1995). Wen Gao received his PhD in Electronic Engineering from the University of Tokyo in 1991.

Prof. Wen Gao works in the areas of multimedia and computer vision, including video coding, video analysis, multimedia retrieval, face recognition, multimodal interfaces, and virtual reality. He has published six books and over 700 technical articles in refereed journals and proceedings in the above areas. He has earned many awards, including six National Awards in Science and Technology Achievements. He was featured by IEEE Spectrum in June 2005 as one of the "Ten To Watch" among China's leading technologists. He is a fellow of IEEE, a fellow of ACM, and a member of the Chinese Academy of Engineering.

Babak Hassibi

Gordon M. Binder/Amgen Professor
Electrical Engineering
California Institute of Technology, USA
IEEE Fellow
Slides

Title: Recovering Structured Signals via non-Smooth Convex Optimization: Precise Analysis using Comparison Lemmas

Abstract:  In the past couple of decades, non-smooth convex optimization has emerged as a powerful tool for the recovery of structured signals (sparse, low rank, etc.) from (possibly) noisy measurements in a variety of applications in statistics, signal processing, machine learning, etc. I will describe a fairly general theory for how to determine the performance (minimum number of measurements, mean-square-error, etc.) of such methods for certain measurement ensembles (Gaussian, Haar, etc.). The genesis of the theory can be traced back to an inconspicuous 1962 lemma of Slepian (on comparing Gaussian processes).
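As a toy instance of the structured-recovery problem this theory analyzes, the sketch below recovers a sparse vector from Gaussian measurements via non-smooth convex optimization (basis pursuit: minimize the L1 norm subject to the measurements), written as a linear program with scipy. The dimensions are deliberately chosen well inside the recovery regime; this is an illustration, not the speaker's analysis.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 subject to Ax = y, via the LP split x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                         # L1 norm = sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                  # A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, k, m = 20, 2, 15                            # ambient dim, sparsity, measurements
x0 = np.zeros(n); x0[:k] = [3.0, -2.0]         # sparse ground truth
A = rng.standard_normal((m, n))                # Gaussian measurement ensemble
x_hat = basis_pursuit(A, A @ x0)               # exact recovery (m well above threshold)
```

The comparison-lemma theory in the talk predicts precisely how small m can be before recovery like this fails.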

Bio:  Babak Hassibi is the Gordon M. Binder/Amgen Professor of Electrical Engineering at the California Institute of Technology. He has been at Caltech since 2001 and was Executive Officer for Electrical Engineering from 2008 to 2015. From 1998 to 2001 he was a Member of the Technical Staff at the Mathematical Sciences Research Center at Bell Laboratories, Murray Hill, NJ, and prior to that he obtained his PhD in electrical engineering from Stanford University. His research interests span different aspects of communications, signal processing and control. Among other awards, he is a recipient of the David and Lucille Packard Foundation Fellowship, and the Presidential Early Career Award for Scientists and Engineers (PECASE).

Katsushi Ikeuchi

Professor
Institute of Industrial Science
University of Tokyo, Japan
IEEE Fellow
Slides

Title: e-Heritage Project

Abstract:  Tangible heritage, such as temples and statues, is disappearing day by day due to human and natural disasters. Intangible heritage, such as folk dances, local songs, and dialects, faces the same fate due to the lack of inheritors and the mixing of cultures. We have been developing methods to preserve such tangible and intangible heritage in digital form. This project, which we refer to as e-Heritage, aims not only to record heritage, but also to analyze the recorded data for better understanding, and to display the data in new forms for promotion and education.

The first half of the talk covers our efforts to handle tangible heritage in Italy, Cambodia, and Japan. We will explain what hardware and software issues have arisen and how to overcome them by designing new sensors using recent computer vision technologies, as well as how to process these data using computer graphics technologies. We will also explain how to use such data for archaeological analysis and review new findings. Finally, we will discuss a new way to display such digital data using mixed reality systems, i.e., head-mounted displays on site, connected to cloud computers.

The second half of the talk covers how to preserve intangible heritage, in particular Japanese and Taiwanese folk dances. I will introduce how to display a Japanese folk dance on a humanoid robot. Here, we follow the learning-from-observation paradigm, in which a robot learns how to dance by observing human dance. Due to the physical differences between a human and a robot, the robot cannot mimic all human actions. Instead, the robot first extracts the important actions of a dance, referred to as key poses, exactly mimics only those key poses, and then interpolates the interval trajectories as closely as possible within the limits of the robot's capabilities. Later, I will talk about how to apply similar techniques to Taiwanese folk dances. Here, I concentrate on the analysis of the key poses and how such key poses relate to their social institutions.

Bio:  Dr. Katsushi Ikeuchi is a professor at the University of Tokyo. He received a Ph.D. degree in Information Engineering from the University of Tokyo in 1978. After working at the MIT AI Lab for two years, the Electrotechnical Laboratory, Japan, for five years, and the Robotics Institute of Carnegie Mellon University for ten years, he joined the University of Tokyo in 1996. His research interests span computer vision, robotics, and computer graphics. He has received several awards, including the IEEE Marr Award, the IEEE RAS "most active distinguished lecturer" award, the IEEE-PAMI Distinguished Researcher Award, and the Okawa Prize, as well as the Shiju Houshou (the Medal of Honor with Purple Ribbon) from the Emperor of Japan. He is a fellow of IEEE, IEICE, IPSJ, and RSJ.

Invited Speakers

Zhaojun Bai

Professor
Department of Computer Science
University of California, Davis, USA
Slides

Title: Rayleigh Quotient-type Optimizations and Eigenproblems

Abstract:  Many modern data analysis techniques lead to optimization problems involving Rayleigh-quotient (RQ) and RQ-type functions, such as robust classification to handle uncertainty and constrained normalized cut to incorporate prior information. While there exist many powerful (non)convex optimization and semi-definite programming algorithms to solve these problems, their underlying linear algebra characteristics are often ignored, which limits their applicability to realistic large-scale problems. In this talk, we will discuss the ubiquity of RQ and RQ-type optimizations and show that many of them can be reformulated as linear or nonlinear eigenvalue problems. The use of modern numerical linear algebra solvers for these eigenvalue problems will also be discussed.
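The core reformulation the abstract mentions can be shown in a few lines: maximizing a generalized Rayleigh quotient x'Ax / x'Bx is exactly the generalized symmetric eigenproblem Ax = λBx, so a numerical linear algebra solver replaces a nonconvex optimization. A minimal sketch with made-up matrices:

```python
import numpy as np
from scipy.linalg import eigh

# Symmetric A and positive-definite B define the Rayleigh quotient x'Ax / x'Bx.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])

# Maximizing the RQ is the generalized eigenproblem A x = lambda B x:
vals, vecs = eigh(A, B)               # eigenvalues in ascending order
x_star = vecs[:, -1]                  # eigenvector of the largest eigenvalue
rq = x_star @ A @ x_star / (x_star @ B @ x_star)
assert np.isclose(rq, vals[-1])       # maximal RQ value equals the top eigenvalue
```

No direction can do better: for any x, x'Ax / x'Bx is bounded by the largest generalized eigenvalue, which is what makes eigenvalue solvers the right tool here.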

Bio:  Zhaojun Bai is a Professor in the Department of Computer Science and the Department of Mathematics, University of California, Davis. His research interests are in the areas of scientific computing, linear algebra algorithm design and analysis, and mathematical software engineering. He has participated in a number of large-scale synergistic computational science and engineering projects, such as LAPACK for solving the most common problems in numerical linear algebra. He currently serves on the editorial boards of several journals in his research field, including the Journal of Computational Mathematics, ACM Transactions on Mathematical Software, and Science China Mathematics. Previously, he served as an Associate Editor of the SIAM Journal on Matrix Analysis and Applications, Vice Chair for Algorithms of the 26th IEEE International Parallel and Distributed Processing Symposium, and in numerous other professional positions.

Mark A. Davenport

Assistant Professor
School of Electrical & Computer Engineering
Georgia Institute of Technology, USA
Slides

Title: Localization Via Paired Comparisons

Abstract:  Suppose that we wish to estimate a low-dimensional vector x from a set of binary pairwise comparisons of the form “x is closer to p than to q” for various choices of vectors p and q. The problem of estimating x from this type of observation arises in a variety of contexts, including nonmetric multidimensional scaling, “unfolding”, and ranking problems, often because it provides a powerful and flexible model of preference. In this talk I will describe theoretical bounds for how well we can expect to estimate x under a randomized model for p and q, as well as practical algorithms for doing so in a noisy setting. I will also show how we can significantly improve our performance by adaptively changing the distribution for choosing p and q. In doing so, I will draw enlightening connections between this problem and the literature on 1-bit compressive sensing.
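The connection to 1-bit measurements rests on a small algebraic fact: each comparison "x is closer to p than to q" is equivalent to a linear inequality, i.e., a halfspace constraint on x, since ||x-p||² < ||x-q||² rearranges to 2(q-p)·x < ||q||² - ||p||². A quick numerical check of this equivalence (an illustration only, not the speaker's algorithm):

```python
import numpy as np

def comparison(x, p, q):
    """True if x is closer to p than to q."""
    return np.linalg.norm(x - p) < np.linalg.norm(x - q)

def halfspace_form(x, p, q):
    """The same comparison expressed as a linear inequality in x:
       2(q - p).x < ||q||^2 - ||p||^2."""
    return 2 * (q - p) @ x < q @ q - p @ p

rng = np.random.default_rng(1)
x = rng.standard_normal(3)
for _ in range(100):
    p, q = rng.standard_normal(3), rng.standard_normal(3)
    assert comparison(x, p, q) == halfspace_form(x, p, q)
```

Each observed comparison thus acts like one bit of a linear measurement of x, which is why 1-bit compressive sensing machinery applies.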

Bio:  Mark A. Davenport is an Assistant Professor with the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA. Prior to this, he spent 2010–2012 as an NSF Mathematical Sciences Postdoctoral Research Fellow in the Department of Statistics at Stanford University and as a visitor with the Laboratoire Jacques-Louis Lions at the Université Pierre et Marie Curie. He received the B.S.E.E., M.S., and Ph.D. degrees in electrical and computer engineering in 2004, 2007, and 2010, all from Rice University. His research interests include compressive sensing, low-rank matrix recovery, nonlinear approximation, and the application of low-dimensional signal models in signal processing and machine learning. Prof. Davenport is a recipient of the National Science Foundation CAREER award as well as the Air Force Office of Scientific Research Young Investigator award.

Tony Jebara

Associate Professor
Department of Computer Science
Columbia University, USA
Slides

Title: Bethe Learning of Sparse High Tree-Width Graphical Models

Abstract:  Many machine learning tasks can be formulated in terms of predicting structured outputs. In frameworks such as the structured support vector machine (SVM-Struct) and the structured perceptron, discriminative functions are learned by iteratively applying efficient maximum a posteriori (MAP) decoding. However, maximum likelihood estimation (MLE) of probabilistic models over these same structured spaces requires computing partition functions, which is generally intractable when the models have large tree-width. We present a method for learning discrete exponential family models using the Bethe approximation to the MLE. Remarkably, this problem also reduces to tractable iterative (MAP) decoding. This connection emerges by combining the Bethe approximation with a Frank-Wolfe (FW) algorithm on a convex dual objective, which circumvents the intractable partition function. The result is a new single-loop algorithm, MLE-Struct, which is substantially more efficient than previous double-loop methods for approximate maximum likelihood estimation. Our algorithm outperforms existing methods in experiments involving image segmentation, matching problems from vision, and a new dataset of university roommate assignments. We also combine the approach with sparse structure learning to build a graphical model from several years of Bloomberg data. It captures financial and macro-economic variables and their responses to news and social media topics.
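The key ingredient named in the abstract, the Frank-Wolfe algorithm, replaces a hard projection with a cheap linear subproblem at every step; in MLE-Struct that linear subproblem is MAP decoding. A generic minimal sketch on a toy problem (minimizing a quadratic over the probability simplex, where the linear oracle just picks a vertex); this is an illustration of FW itself, not the speaker's algorithm:

```python
import numpy as np

def frank_wolfe_simplex(b, iters=200):
    """Frank-Wolfe: min 0.5*||x - b||^2 over the probability simplex.
    Each iteration solves only a *linear* subproblem (pick the best vertex),
    analogous to how MAP decoding serves as the linear oracle in MLE-Struct."""
    n = len(b)
    x = np.ones(n) / n                          # start at the simplex center
    for t in range(iters):
        grad = x - b                            # gradient of the objective
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0                # linear minimization oracle
        gamma = 2.0 / (t + 2.0)                 # standard FW step size
        x = (1 - gamma) * x + gamma * s         # move toward the chosen vertex
    return x

x = frank_wolfe_simplex(np.array([0.1, 0.2, 0.9]))  # approximate simplex projection
```

Because every iterate is a convex combination of vertices, the method stays feasible for free, which is what makes it attractive when the feasible set (here a simplex, there a marginal polytope) is only accessible through an oracle.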

Bio:  Tony Jebara is Associate Professor of Computer Science at Columbia University. He chairs the Center on Foundations of Data Science as well as directs the Columbia Machine Learning Laboratory. His research intersects computer science and statistics to develop new frameworks for learning from data with applications in social networks, spatio-temporal data, vision and text. Jebara has founded and advised several startups including Sense Networks (acquired by yp.com), Evidation Health, Agolo, Ufora, and Bookt (acquired by RealPage NASDAQ:RP). He has published over 100 peer-reviewed papers in conferences, workshops and journals including NIPS, ICML, UAI, COLT, JMLR, CVPR, ICCV, and AISTATS. He is the author of the book Machine Learning: Discriminative and Generative and co-inventor on multiple patents in vision, learning and spatio-temporal modeling. In 2004, Jebara was the recipient of the CAREER award from the National Science Foundation. His work was recognized with a best paper award at the 26th International Conference on Machine Learning, a best student paper award at the 20th International Conference on Machine Learning as well as an outstanding contribution award from the Pattern Recognition Society in 2001. Jebara's research has been featured on television (ABC, BBC, New York One, TechTV, etc.) as well as in the popular press (New York Times, Slashdot, Wired, Businessweek, IEEE Spectrum, etc.). He obtained his PhD in 2002 from MIT. Esquire magazine named him one of their Best and Brightest of 2008. Jebara has taught machine learning to well over 1000 students (through real physical classes). Jebara was a Program Chair for the 31st International Conference on Machine Learning (ICML) in 2014. Jebara was Action Editor for the Journal of Machine Learning Research from 2009 to 2013, Associate Editor of Machine Learning from 2007 to 2011 and Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence from 2010 to 2012.
In 2006, he co-founded the NYAS Machine Learning Symposium and has served on its steering committee since then.

Hang Li

Director
Noah's Ark Lab
Huawei Technologies, China
Slides

Title: AI Research at Huawei Technologies

Abstract:  To enable computers to listen, speak, see, and learn like humans, and even more, to build computers that analyze, infer, predict, and make decisions better than humans, is the ultimate dream of artificial intelligence. In the near future, we envision that with advanced AI technologies each person and each enterprise will have multiple intelligent computer assistants to help accomplish various tasks, with results that are as good as or even better than if they were performed by humans. Huawei Technologies, one of the major telecommunication equipment and service companies in the world, is also conducting research toward this noble goal of AI. In this talk, I will introduce some of the progress that Huawei, and particularly its Noah's Ark Lab, has made recently, including a platform for deep learning research, natural language processing using deep learning, and the construction and utilization of a large-scale knowledge base in the telecommunication domain.

Bio:  Hang Li is director of the Noah’s Ark Lab of Huawei Technologies. His research areas include natural language processing, information retrieval, statistical machine learning, and data mining. He graduated from Kyoto University in 1988 and earned his PhD from the University of Tokyo in 1998. He worked at NEC's lab in Japan from 1991 to 2001, and at Microsoft Research Asia from 2001 to 2012.

Dou Shen

Director
Baidu, China
Slides

Title: Data’s contribution to Search in Baidu

Abstract:  Search engines are born with big data. With hundreds of billions of indexed Web pages, and user interaction data (including browsing and clicks) from billions of active searches each day, Baidu has accumulated a huge volume of data, and the size keeps growing. In this talk, I will explain how we are using these data to improve the user search experience in multiple ways, including query suggestion, related search, knowledge graph, ranking algorithms, user interface, and so on.

Bio:  Dou Shen is now the Senior Director of the Baidu Search Department. Before joining Baidu, he was a Senior Director of CityGrid Media in Seattle, WA, USA, and co-founder of an advertising startup, Buzzlabs, which was acquired by CityGrid. He was also an applied researcher at Microsoft adCenter, Redmond, WA, USA. Dou got his PhD from the Hong Kong University of Science and Technology (HKUST). At HKUST, Dou led a team to win the 2005 ACM KDD Cup on search query categorization. Dou has been the Associate General Chair for SIGKDD 2012, Industry co-Chair of CIKM 2012, founding co-Chair of the ADKDD Workshop series at ACM SIGKDD, etc.

Rene Vidal

Associate Professor
Department of Biomedical Engineering
Johns Hopkins University, USA
IEEE Fellow
Slides

Xiaoyang Wang

Dean & Professor
School of Computer Science
Fudan University, China
Slides

Title: Data Management Issues in Big Data Analytics

Abstract:  In contrast to traditional data analytics, a distinctive feature of big data analytics is its desire to include a large variety of data types in one analysis job. This gives rise to several data management problems that have not been adequately dealt with in the past. This talk will summarize what these problems might be and describe a research agenda with technical considerations that aim to solve them. In fact, the problems boil down to a data integration problem, but they need careful study with respect to the volume and velocity properties of big data, as well as its rich semantics that may arise in an ad hoc manner.

Bio:  X. Sean Wang is Professor and Dean at the School of Computer Science, Fudan University, Shanghai, China. He received his PhD degree in Computer Science from the University of Southern California, Los Angeles, California, USA, in 1992. Before joining Fudan University in 2011, he was the Dorothean Chair Professor in Computer Science at the University of Vermont, Burlington, Vermont, USA, and from 2009 to 2011, he served as a Program Director at the National Science Foundation, USA, in the Division of Information and Intelligent Systems. He has published widely in the general area of databases and information security, and was a recipient of the US National Science Foundation Research Initiation and CAREER awards. His research interests include database systems, information security, data mining, and sensor data processing. In the research community, he served as the general chair of ICDE 2011 held in Washington DC, the general chair of CIKM 2014 in Shanghai, and in other roles at international conferences and journals, including PC co-chair of MDM 2013 and WISE 2012, PC member of past SIGMOD, CIKM, ICDE and many other conferences, current IEEE Technical Steering Committee member of ICDE, associate editor of Geoinformatica and the WWW journal, and past associate editor of TKDE and KAIS.

John Wright

Assistant Professor
Department of Electrical Engineering
Columbia University, USA
Slides

Title: Complete Dictionary Recovery over the Sphere

Abstract:  We consider the problem of recovering a complete (i.e., square and invertible) matrix A0 from Y ∈ Rn×p with Y = A0X0, provided X0 is sufficiently sparse. This recovery problem is central to the theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals and finds numerous applications in modern signal processing and machine learning. We give the first efficient algorithm that provably recovers A0 when X0 has O(n) nonzeros per column, under a suitable probability model for X0. In contrast, prior results based on efficient algorithms provide recovery guarantees only when X0 has O(√n) nonzeros per column.

Our algorithmic pipeline centers around solving a certain nonconvex optimization problem with a spherical constraint, and hence is naturally phrased in the language of manifold optimization. To show that this apparently hard problem is tractable, we first provide a geometric characterization of the high-dimensional objective landscape, which shows that with high probability there are no spurious local minima. This particular geometric structure allows us to design a Riemannian trust region algorithm over the sphere that provably converges to a global minimizer from an arbitrary initialization, despite the presence of saddle points. We describe applications of these results in signal processing and microscopy, and recent extensions to convolutional models and tensor decomposition problems.
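A toy numerical illustration of the idea behind the spherical objective: when the dictionary is orthogonal (a simplification of the complete case, which in general requires a preconditioning step), a direction q aligned with a dictionary atom turns q'Y into one sparse row of X0, so the L1-style objective ||q'Y||_1 is small exactly at the directions the algorithm seeks. All names and dimensions below are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 10, 2000
A0, _ = np.linalg.qr(rng.standard_normal((n, n)))              # orthogonal dictionary
X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) < 0.1)  # sparse coefficients
Y = A0 @ X0

def objective(q):
    """Average of |q^T Y|: small when the projection q^T Y is sparse."""
    return np.abs(q @ Y).mean()

q_good = A0[:, 0]                 # aligned with an atom: q_good @ Y is row 0 of X0
q_rand = rng.standard_normal(n)
q_rand /= np.linalg.norm(q_rand)  # a generic direction on the sphere

assert objective(q_good) < objective(q_rand)   # atoms are (near-)minimizers
```

The landscape result in the abstract says much more: over the whole sphere, these sparse-projection directions are essentially the only local minima, so a Riemannian solver cannot get stuck elsewhere.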

Bio:  John Wright is an Assistant Professor in the Electrical Engineering Department at Columbia University. He received his PhD in Electrical Engineering from the University of Illinois at Urbana-Champaign in 2009, and was with Microsoft Research from 2009 to 2011. His research is in the area of high-dimensional data analysis. In particular, his recent research has focused on developing algorithms for robustly recovering structured signal representations from incomplete and corrupted observations, and applying them to practical problems in imaging and vision. His work has received a number of awards and honors, including the 2009 Lemelson-Illinois Prize for Innovation for his work on face recognition, the 2009 UIUC Martin Award for Excellence in Graduate Research, a 2008-2010 Microsoft Research Fellowship, and the Best Paper Award from the Conference on Learning Theory (COLT) in 2012.

Eric Xing

Professor
School of Computer Science
Carnegie Mellon University, USA
Slides

Shuicheng Yan

Associate Professor
Department of Electrical and Computer Engineering
National University of Singapore, Singapore
Slides

Title: Block-diagonal Matrix Pursuit: Theories and Applications

Abstract:  An affinity matrix is ideally expected to be block-diagonal, so that the cluster/class information in the data can be explicitly identified. In this talk, we report the recent progress of our research on block-diagonal matrix pursuit. More specifically, this talk contains three parts:

1. In theory, we show that if the class subspaces are independent, then under certain conditions the representation matrices produced by many previous methods are block-diagonal.

2. A non-convex subspace clustering method with a hard block-diagonal constraint shall be introduced.

3. A convex subspace clustering method with a soft block-diagonal constraint shall be presented.
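A small sketch of why block-diagonal affinity matrices expose cluster information: the multiplicity of the zero eigenvalue of the graph Laplacian built from the affinity equals the number of diagonal blocks, i.e., the number of clusters. This is a standard spectral-graph fact used as an illustration, not the speaker's pursuit method.

```python
import numpy as np

# A block-diagonal affinity matrix: two clusters, of sizes 3 and 2.
W = np.zeros((5, 5))
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)        # no self-affinity

# Graph Laplacian L = D - W; its zero-eigenvalue multiplicity
# equals the number of diagonal blocks (connected components).
L = np.diag(W.sum(axis=1)) - W
eigvals = np.linalg.eigvalsh(L)
num_blocks = int(np.sum(eigvals < 1e-8))
print(num_blocks)  # 2
```

This is why pursuing a (hard or soft) block-diagonal structure in the affinity matrix directly serves subspace clustering: the spectrum then reads off the segmentation.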

Bio:  Dr. Yan Shuicheng is currently an Associate Professor in the Department of Electrical and Computer Engineering at the National University of Singapore, and the founding lead of the Learning and Vision Research Group (http://www.lv-nus.org). Dr. Yan's research areas include machine learning, computer vision, and multimedia, and he has authored/co-authored hundreds of technical papers over a wide range of research topics, with a Google Scholar citation count of over 17,000 and an H-index of 56. He is an ISI Highly Cited Researcher (2014) and an IAPR Fellow (2014). He has been serving as an associate editor of IEEE TKDE, TCSVT, and ACM Transactions on Intelligent Systems and Technology (ACM TIST). He received Best Paper Awards from ACM MM'13 (Best Paper and Best Student Paper), ACM MM'12 (Best Demo), PCM'11, ACM MM'10, ICME'10, and ICIMCS'09, the runner-up prize of ILSVRC'13, the winner prize of the ILSVRC'14 detection task, the winner prizes of the classification task in PASCAL VOC 2010-2012, the winner prize of the segmentation task in PASCAL VOC 2012, the honourable mention prize of the detection task in PASCAL VOC'10, the 2010 TCSVT Best Associate Editor (BAE) Award, the 2010 Young Faculty Research Award, the 2011 Singapore Young Scientist Award, and the 2012 NUS Young Researcher Award.

Wotao Yin

Professor
Department of Mathematics
University of California, Los Angeles, USA
Slides

Title: Asynchronous parallel fixed-point algorithms and their applications

Abstract:  We propose ARock, an asynchronous parallel algorithmic framework for finding a fixed point to a nonexpansive operator. In the framework, a set of agents (machines, processors, or cores) updates a sequence of randomly selected coordinates of the unknown variable in a parallel asynchronous fashion. As special cases of ARock, novel algorithms in linear algebra, convex optimization, machine learning, distributed and decentralized optimization are introduced. We show that if the nonexpansive operator has a fixed point, then with probability one the sequence of points generated by ARock converges to a fixed point. Stronger convergence properties such as linear convergence are obtained under stronger conditions. Very encouraging numerical performance of ARock is observed on solving linear equations, sparse logistic regression, and other large-scale problems.

This is joint work with Zhimin Peng, Yangyang Xu, and Ming Yan.
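To illustrate the flavor of ARock's updates, here is a serial stand-in for the asynchronous scheme: at each step one "agent" updates a single randomly selected coordinate toward the operator's value, and the iterates still converge to the fixed point. This toy sketch (with made-up names and a simple averaging operator) omits the genuinely asynchronous execution and delay analysis that the talk addresses.

```python
import numpy as np

def randomized_km(T, x0, eta=0.5, iters=5000, seed=0):
    """Randomized single-coordinate Krasnosel'skii-Mann iteration:
    each step, one agent updates one random coordinate
        x_i <- x_i + eta * (T(x)_i - x_i),
    a serial model of ARock's parallel asynchronous coordinate updates."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        i = rng.integers(len(x))
        x[i] += eta * (T(x)[i] - x[i])
    return x

c = np.array([1.0, -2.0, 3.0])
T = lambda x: 0.5 * (x + c)          # nonexpansive; unique fixed point x* = c
x = randomized_km(T, np.zeros(3))    # converges to c coordinate by coordinate
```

Because each update touches only one coordinate, different agents in the real framework can fire such updates concurrently without global synchronization, which is the source of ARock's speedup.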

Bio:  Wotao Yin is a professor in the Department of Mathematics at UCLA. His research interests lie in computational optimization and its applications in image processing, machine learning, and other inverse problems. He received his B.S. in mathematics from Nanjing University in 2001, and his M.S. and Ph.D. in operations research from Columbia University in 2003 and 2006, respectively. From 2006 to 2013, he was with Rice University. He won the NSF CAREER award in 2008 and an Alfred P. Sloan Research Fellowship in 2009. His recent work has been in optimization algorithms for large-scale and distributed signal processing and machine learning problems.

Jingyi Yu

Professor
Department of Computer and Information Sciences
University of Delaware, USA

Title: Big Data in Computational Imaging

Abstract:  With the availability of commodity computational cameras such as Lytro, Raytrix, and Pelican mobile cameras, it has become increasingly common to acquire large quantities of visual data (e.g., light fields) in place of a single image of the scene. In this talk, I will discuss several unique benefits of harvesting such new types of "big data". Specifically, I will demonstrate 1) how to study and emulate the human visual system from the data to conduct robust saliency detection and fast 3D reconstruction, and 2) how to exploit dense spatial and angular sampling to improve scene reconstruction in challenging scenarios such as heavy occlusions, reflections, and refractions.

Bio:  Jingyi Yu is a Professor in the Department of Computer and Information Sciences and the Department of Electrical and Computer Engineering at the University of Delaware. He received his B.S. from Caltech in 2000 and his Ph.D. from MIT in 2005. His research interests span a range of topics in computer vision and computer graphics, especially computational photography and imaging. He has published over 90 papers at highly refereed conferences and journals, including over 40 papers at the premiere conferences CVPR/ICCV/ECCV. Dr. Yu's research has been generously supported by the National Science Foundation (NSF), the National Institutes of Health (NIH), the Army Research Office (ARO), and the Air Force Office of Scientific Research (AFOSR). He is a recipient of the NSF CAREER Award and the AFOSR YIP Award. He has served as an area chair of ICCV '11 and '15 and ACCV '14, and he is currently an Associate Editor of IEEE TPAMI, Springer TVCJ, and Springer MVA.

Zheng Zhang

Professor
Department of Computer Science
New York University Shanghai, China
Slides

Title: Deep Learning: Platform, Algorithms and Applications

Abstract:  TBA

Bio:  TBA

Zhi-Hua Zhou

Professor
Department of Computer Science & Technology
Nanjing University, China
ACM Distinguished Scientist, IEEE Fellow, IAPR Fellow, CCF Fellow
Slides

Title: From AdaBoost to LDM

Abstract:  TBA

Bio:  TBA

Jun Zhu

Associate Professor
Department of Computer Science
Tsinghua University, China
Slides

Title: Adaptive Dropout Training for SVMs

Abstract:  Dropout and other feature noising schemes have shown promising results in controlling over-fitting by artificially corrupting the training data. Though extensive studies have been performed for generalized linear models, little work has been done for support vector machines (SVMs), one of the most successful approaches for supervised learning. In this talk, I will present dropout training for both linear SVMs and the nonlinear extension with latent representation learning. For linear SVMs, to deal with the intractable expectation of the non-smooth hinge loss under corrupting distributions, we develop an iteratively re-weighted least squares (IRLS) algorithm by exploring data augmentation techniques. Our algorithm iteratively minimizes the expectation of a re-weighted least-squares problem, where the re-weights are analytically updated. For nonlinear latent SVMs, we consider learning one layer of latent representations in SVMs and extend the data augmentation technique, in conjunction with a first-order Taylor expansion, to deal with the intractable expected non-smooth hinge loss and the nonlinearity of latent representations. Finally, we propose a scheme to adaptively update the dropout rates, which avoids hyper-parameter tuning. Our techniques can also be used to develop a new IRLS algorithm for the expected logistic loss under corrupting distributions. Our algorithms offer insights on the connection and difference between the hinge loss and the logistic loss in dropout training.

Bio:  Dr. Jun Zhu is an associate professor in the Department of Computer Science and Technology at Tsinghua University. He received his Ph.D. in Computer Science from Tsinghua in 2009. Before joining Tsinghua in 2011, he did post-doctoral research in the Machine Learning Department at Carnegie Mellon University. His research involves the theory and algorithms of latent variable models, Bayesian nonparametrics, and large-margin learning, and the application of machine learning in social network analysis and data mining.

Prof. Zhu has published over 60 peer-reviewed papers in prestigious conferences and journals, including ICML, NIPS, KDD, JMLR, and PAMI. He is an associate editor for IEEE Trans. on PAMI. He has served as area chair/senior PC member for ICML (2014, 2015), IJCAI (2013, 2015), UAI (2014, 2015), and NIPS (2013, 2015), and was a local co-chair of ICML 2014. He is a recipient of the CCF Distinguished PhD Thesis Award (2009), the Microsoft Fellowship (2007), the IEEE Intelligent Systems "AI's 10 to Watch" Award (2013), the NSFC Excellent Young Scholar Award (2013), and the CCF Young Scientist Award (2013). His work is supported by the "221 Basic Research Plan for Young Talents" at Tsinghua.