CVPR 2019 Tutorial on Learning Representations via Graph-structured Networks
Long Beach, CA, USA
Recent years have seen a dramatic rise in the adoption of convolutional neural networks (ConvNets) for a myriad of computer vision tasks. The structure of convolution has proven powerful at capturing correlations and abstracting concepts from image pixels across numerous tasks. However, ConvNets fall short in modeling several properties that matter as computer vision moves toward more difficult AI tasks: pairwise relations, global context, and the ability to process irregular data beyond spatial grids.
An effective direction is to reorganize the data to be processed into graphs suited to the task at hand, while constructing network modules that relate and propagate information across the visual elements within those graphs. We refer to networks with such propagation modules as graph-structured networks. In this tutorial, we will introduce a series of effective graph-structured networks, including non-local neural networks, spatial propagation networks, sparse high-dimensional CNNs, and scene graph networks. We will also discuss related open challenges that remain in many vision problems.
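To make the idea of a propagation module concrete, below is a minimal PyTorch sketch of an embedded-Gaussian non-local block, in the spirit of the non-local neural networks mentioned above. It treats every spatial position of a feature map as a graph node, computes pairwise affinities between all positions, and propagates features along those edges. The class name, channel sizes, and layer layout here are illustrative assumptions, not the tutorial authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonLocalBlock2d(nn.Module):
    """Sketch of an embedded-Gaussian non-local block over 2D feature maps."""

    def __init__(self, in_channels, inter_channels=None):
        super().__init__()
        self.inter_channels = inter_channels or max(in_channels // 2, 1)
        # 1x1 convolutions produce the query (theta), key (phi), and value (g) embeddings.
        self.theta = nn.Conv2d(in_channels, self.inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(in_channels, self.inter_channels, kernel_size=1)
        self.g = nn.Conv2d(in_channels, self.inter_channels, kernel_size=1)
        # Projects aggregated features back to the input width for the residual sum.
        self.out = nn.Conv2d(self.inter_channels, in_channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        theta = self.theta(x).view(b, self.inter_channels, n).permute(0, 2, 1)  # (b, n, c')
        phi = self.phi(x).view(b, self.inter_channels, n)                        # (b, c', n)
        g = self.g(x).view(b, self.inter_channels, n).permute(0, 2, 1)           # (b, n, c')
        # Pairwise affinities between all spatial positions: a fully connected graph.
        attn = F.softmax(torch.bmm(theta, phi), dim=-1)                          # (b, n, n)
        # Propagate features along graph edges, then reshape back to a feature map.
        y = torch.bmm(attn, g).permute(0, 2, 1).view(b, self.inter_channels, h, w)
        return x + self.out(y)  # residual connection preserves the original signal


# Usage: drop the block after a convolutional stage; output shape matches the input.
feat = torch.randn(2, 64, 32, 32)
block = NonLocalBlock2d(64)
out = block(feat)  # (2, 64, 32, 32)
```

The other modules covered in the tutorial (spatial propagation networks, sparse high-dimensional convolutions, scene graph networks) follow the same pattern of defining nodes and edges over visual elements, but differ in how the graph is built and how information flows along it.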
8:30 - 8:45. Opening.
8:45 - 9:25. Learnable Spatial Propagation Networks - Sifei Liu [slides]
9:25 - 10:05. Learning Graph Representations for Video Understanding - Xiaolong Wang [slides]
10:05 - 10:30. Break.
10:30 - 11:10. Scene Graph Generation and Its Application to Vision and Language Tasks - Jianwei Yang [slides]
11:10 - 11:50. Sparse High-Dimensional and Content-Adaptive Convolutions - Hang Su [slides]
Please contact Xiaolong Wang if you have questions. The webpage template is courtesy of the awesome Georgia.