INTRODUCTION

The 360° camera is a core building block of Virtual Reality (VR) and Augmented Reality (AR) technology that bridges the real and digital worlds. By capturing the entire visual world surrounding the camera at once, it lets us easily build virtual environments for VR/AR applications from the real world. With the rapid growth of VR/AR technology, the availability and popularity of 360° cameras are growing faster than ever. Many camera manufacturers have introduced new 360° camera models, both professional and consumer-level, in the past few years. At the same time, content sharing sites like YouTube and Facebook have enabled support for 360° images and videos, and content creators such as the news and movie industries have started to exploit and deliver the new media. People now create, share, and watch 360° content in their everyday lives just like any other media, and the amount of 360° content is increasing rapidly. Despite this popularity, the new medium remains relatively unexplored in many respects. The differences between 360° and traditional images introduce many new challenges and opportunities, and the research community has only just begun to explore them.

We believe that a workshop centered on 360° content can greatly boost research in the field, and that this is the right time for it. The rapidly growing amount of 360° content creates an unprecedented need for technologies to handle the new media, yet we still lack satisfactory solutions for presenting, processing, or even encoding this format. A major goal of this workshop is to bring together researchers who are working on, or interested in, 360°-related topics. We invite researchers from communities including computer vision, HCI, multimedia, computer graphics, and machine learning to join the workshop. It will provide a forum to discuss current progress in the field and foster collaboration, as well as a good introduction for researchers who want to start working in the area.

SUBMISSION

NOTE: The physical dimensions of the poster stands are 1950mm (width) x 950mm (height). Please refer to the ICCV presentation instructions.

NOTE: The oral and spotlight lists have been updated. We are pleased to have you presenting at the 360PI Workshop!

For the poster session, we invite extended abstracts of at most 4 pages (including references) describing relevant work that is unpublished, recently published, or presented at the main conference, allowing participants to share research ideas related to 360° vision. Abstracts should follow the ICCV format (cf. the main conference author guidelines). Papers will be selected based on relevance, significance and novelty of results, technical merit, and clarity of presentation. Submissions will be reviewed double-blind by invited reviewers, and the program committee will select two of them for oral presentation. Each accepted extended abstract will be presented in the workshop poster session by one of the authors; poster-stand dimensions are given in the note above.

All papers should be submitted through the CMT website: https://cmt3.research.microsoft.com/360PI2019/.

Topics should be related to 360° content, including but not limited to:

  • User attention / saliency prediction in 360° video
  • Improving 360° video display
  • 360° video stabilization
  • 360° video summarization
  • Learning visual recognition models on 360° content (e.g., object detection, semantic segmentation)
  • Learning CNNs for spherical data
  • Visual features for 360° imagery
  • Depth and surface normal prediction from 360° images
  • Indoor localization / mapping using 360° cameras
  • Robot navigation using 360° cameras
  • Telepresence using 360° cameras
  • Smart TV systems for 360° videos
  • Video editing tools for 360° videos
  • Projection models for 360° imagery (see the brief sketch after this list)
  • 360°-specific video compression
  • 360° video streaming
  • 3D reconstruction using 360° cameras
  • 360° camera models
  • Novel applications for 360° imagery
  • 360° image/video/audio datasets
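
Many of these topics share one piece of machinery: the mapping between equirectangular pixels and directions on the unit sphere, which is what sets 360° imagery apart from ordinary perspective images. As a minimal illustrative sketch (our own example, not tied to any submission; the function names and image size are assumptions), the standard equirectangular mapping can be written as:

    import numpy as np

    def equirect_to_sphere(u, v, width, height):
        # Map an equirectangular pixel (u, v) to spherical angles:
        # longitude in [-pi, pi) covers the full 360° horizontally,
        # latitude in [-pi/2, pi/2] covers 180° vertically.
        lon = (u / width) * 2.0 * np.pi - np.pi
        lat = np.pi / 2.0 - (v / height) * np.pi
        return lon, lat

    def sphere_to_ray(lon, lat):
        # Convert spherical angles to a unit 3D viewing ray.
        x = np.cos(lat) * np.sin(lon)
        y = np.sin(lat)
        z = np.cos(lat) * np.cos(lon)
        return np.array([x, y, z])

    # The center pixel of a 2:1 equirectangular image looks
    # straight ahead along +z.
    lon, lat = equirect_to_sphere(1024, 512, 2048, 1024)
    print(sphere_to_ray(lon, lat))  # ~[0. 0. 1.]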

IMPORTANT DATES

Paper submission deadline: August 2nd, 2019
Notification to authors: August 23rd, 2019
Camera-ready deadline: August 30th, 2019
Workshop date: October 27th, 2019 (morning)

WORKSHOP PROGRAM

The 360PI workshop is in Room 317A of the COEX convention center. See you there on October 27th!

October 27th, half day (AM) - Seoul, Korea, COEX Convention Center, Room 317A

Time                  Description
08:30 am - 08:40 am   Opening remarks
08:40 am - 09:10 am   Invited talk by Gunhee Kim: A Memory Network Approach for Temporal Summarization of 360° Videos
09:10 am - 09:40 am   Spotlight session (see the Spotlight List below; 4 minutes per team)
09:40 am - 10:00 am   Coffee break
10:00 am - 10:30 am   Invited talk by Hanbyul Joo: Measuring and Modeling Nonverbal Communication in the Panoptic Studio
10:30 am - 11:00 am   Invited talk by Josechu Guerrero: 3D from Single Omni View: Lines and Layouts
11:00 am - 11:30 am   Oral session (see the Oral List below; 15 minutes per team)
11:30 am - 11:40 am   Prize award
11:40 am - 12:10 pm   Poster session

ORAL LIST

Orientation-aware Semantic Segmentation on Icosahedron Spheres
  Chao Zhang (Toshiba Research Europe Ltd); Stephan Liwicki (Toshiba Research Europe Ltd); William Smith (University of York); Roberto Cipolla (University of Cambridge)

Where to Look Next: Unsupervised Active Visual Exploration on 360° Input
  Soroush Seifi (KU Leuven); Tinne Tuytelaars (KU Leuven)

SPOTLIGHT LIST

3D Orientation Estimation from Single Panoramas
  Yongjie Shi (Peking University); Xin Tong (Peking University); Jingsi Wen (Peking University); He Zhao (Peking University); Xianghua Ying (Peking University)

360SD-Net: 360° Stereo Depth Estimation with Learnable Cost Volume
  Ning-Hsu Wang (National Tsing Hua University); Bolivar E. Solarte (National Tsing Hua University); Yi-Hsuan Tsai (NEC Labs America); Wei-Chen Chiu (National Chiao Tung University); Min Sun (National Tsing Hua University)

Predicting 360-degree Visual Attention using Crowd-sourced Data
  Ching-Hui Chen (Google); Raviteja Vemulapalli (Google); Yukun Zhu (Google); Aseem Agarwala (Google)

FisheyeMODNet: Moving Object Detection on Surround-view Cameras for Autonomous Driving
  Marie Yahiaoui (Valeo); Hazem Rashed (Valeo); Letizia Mariotti (Valeo); Ganesh Sistu (Valeo Vision Systems); Ian Clancy (Valeo Vision Systems); Lucie Yahiaoui (Valeo); Varun Ravi Kumar (Valeo); Senthil Yogamani (Valeo Vision Systems)

A Dataset for Object Detection in 360° Indoor Equirectangular Images
  Shih-Han Chou (National Tsing Hua University); Cheng Sun (National Tsing Hua University); Wen-Yen Chang (National Tsing Hua University); Wan-Ting Hsu (National Tsing Hua University); Min Sun (National Tsing Hua University); Jianlong Fu (Microsoft Research)

BiFuse: Monocular 360° Depth Estimation via Bi-projection Fusion
  Fu-En Wang (National Tsing Hua University); Yu-Hsuan Yeh (National Tsing Hua University); Wei-Chen Chiu (National Chiao Tung University); Yi-Hsuan Tsai (NEC Labs America); Min Sun (National Tsing Hua University)

INVITED SPEAKERS

Gunhee Kim is a Professor at Seoul National University. His research interests lie in solving computer vision and web mining problems that emerge from large-scale image and video data shared online, by developing scalable and effective machine learning and optimization techniques. In particular, his recent work studies story-based summarization of 360° videos. He is a recipient of the 2014 ACM SIGKDD Doctoral Dissertation Award and the 2015 Naver New Faculty Award. His webpage is at: http://vision.snu.ac.kr/~gunhee/
Hanbyul Joo is a Research Scientist at Facebook AI Research (FAIR). His research focuses on measuring social signals in interpersonal communication in order to computationally model social behavior. At CMU, he developed a unique sensing system, the Panoptic Studio, designed to capture social interaction using more than 500 synchronized cameras. His research has been covered by various media outlets, including Discovery, Reuters, IEEE Spectrum, NBC News, Voice of America, The Verge, and WIRED. He is a recipient of the Samsung Scholarship and the CVPR 2018 Best Student Paper Award. His webpage is at: https://jhugestar.github.io/
Josechu Guerrero is a Full Professor at Universidad de Zaragoza, where he is the Deputy Director of the Department of Computer Science and Systems Engineering. His research interests are in computer vision, particularly 3D visual perception, photogrammetry, visual control, omnidirectional vision, robotics, and vision-based navigation. His recent research focuses on indoor room layout analysis and reconstruction from 360° images. His webpage is at: http://webdiis.unizar.es/~jguerrer/

ORGANIZING COMMITTEE

CONTACT US

Point of Contact:

Hou-Ning Hu

National Tsing Hua University

Email: eborboihuc@gmail.com