INTRODUCTION

The 360° camera is a core building block of Virtual Reality (VR) and Augmented Reality (AR) technology, bridging the real and digital worlds. By capturing the entire visual world surrounding the camera at once, it allows us to easily build virtual environments for VR/AR applications from the real world. With the rapid growth of VR/AR technology, the availability and popularity of 360° cameras are also growing faster than ever. Many camera manufacturers have introduced new 360° camera models, both professional and consumer-level, in the past few years. At the same time, content sharing sites such as YouTube and Facebook have enabled support for 360° images and videos, and content creators such as the news and film industries have started to exploit and deliver the new medium. People now create, share, and watch 360° content in their everyday lives just like any other media, and the amount of 360° content is increasing rapidly. Despite this popularity, the new medium remains relatively unexplored in many respects. The differences between 360° images and traditional images introduce many new challenges and opportunities, and the research community has only just begun to explore them.

We believe that a workshop for research centered on 360° content can greatly boost research in the field, and that this is the right time for it. The rapidly growing amount of 360° content creates an unprecedented need for technologies to handle the new medium, yet we still lack satisfactory solutions to present, process, or even encode the format. Researchers from various communities, including computer vision, HCI, multimedia, computer graphics, and machine learning, are working in this field independently, with overlapping directions. A major goal of this workshop is to bring together researchers who are working on or interested in 360°-related topics. The workshop will provide a forum to discuss current progress in the field and foster collaboration. It will also serve as a good introduction for researchers who are interested in the field and want to start their own research in it.

SUBMISSION

For the poster session, we invite extended abstract submissions of at most 4 pages, including references, describing relevant work that is unpublished, recently published, or presented in the main conference; the session allows participants to share research ideas related to 360° vision. Abstracts should follow the ECCV format (cf. the main conference author guidelines). Papers will be selected based on relevance, significance and novelty of results, technical merit, and clarity of presentation. Submissions will be reviewed single-blind by our program committee. One author of each accepted extended abstract is invited to present it in the poster session of the workshop.

All papers should be submitted through the CMT website: https://cmt3.research.microsoft.com/360PI2018/.

Topics should be related to 360° content, including but not limited to:

  • User attention / saliency prediction in 360° video
  • Improving 360° video display
  • 360° video stabilization
  • 360° video summarization
  • Learning visual recognition models (e.g., object detection, semantic segmentation) on 360° content
  • Learning CNNs for spherical data
  • Visual features for 360° imagery
  • Depth and surface normal prediction using 360° images
  • Indoor localization / mapping using 360° cameras
  • Robot navigation using 360° cameras
  • Telepresence using 360° cameras
  • Smart TV systems for 360° videos
  • Video editing tools for 360° videos
  • Projection models for 360° imagery (see the sketch after this list)
  • 360°-specific video compression
  • 360° video streaming
  • 3D reconstruction using 360° cameras
  • 360° camera models
  • Novel applications for 360° imagery
  • 360° image/video/audio datasets
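
Many of the topics above operate on the equirectangular projection, the de facto storage format for 360° imagery. As a point of reference only (not part of any submission requirement), below is a minimal illustrative sketch of the mapping from equirectangular pixel coordinates to directions on the unit sphere; the image size and the longitude/latitude conventions are our own assumptions.

    # Illustrative sketch only: mapping a pixel in an equirectangular 360°
    # image to a direction on the unit sphere. The image size and the
    # longitude/latitude conventions below are assumptions, not a standard.
    import math

    def equirect_to_sphere(u, v, width, height):
        """Map pixel (u, v) in a width x height equirectangular image to a
        unit vector (x, y, z); u spans longitude, v spans latitude."""
        lon = (u / width) * 2.0 * math.pi - math.pi   # longitude in [-pi, pi)
        lat = math.pi / 2.0 - (v / height) * math.pi  # latitude in [-pi/2, pi/2]
        x = math.cos(lat) * math.cos(lon)
        y = math.cos(lat) * math.sin(lon)
        z = math.sin(lat)
        return (x, y, z)

    # Example: the image center maps to the forward direction (1, 0, 0).
    print(equirect_to_sphere(1920, 960, 3840, 1920))

Topics such as spherical CNNs, saliency prediction, and 360°-specific compression all contend with the distortion this mapping introduces near the poles.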

IMPORTANT DATES

Paper submission deadline: July 27th, 2018 (CMT website)
Notification to Authors: August 6th, 2018
Workshop date: September 9th, 2018 (afternoon)

WORKSHOP PROGRAM

The final schedule will be announced here.
Date: September 9th, 2018 (half day, TBC), Munich, Germany. Venue: TBC.

Time           Session
13:20 - 13:30  Opening remarks
13:30 - 14:00  Invited talk 1
14:00 - 14:30  Invited talk 2
14:30 - 15:00  Coffee break
15:00 - 15:30  Invited talk 3
15:30 - 16:00  Invited talk 4
16:00 - 17:00  Poster session
17:00 - 17:30  Invited talk 5
17:30 - 17:40  Closing remarks

INVITED SPEAKERS

Aaron Hertzmann is a Principal Scientist at Adobe Research. He is an ACM Distinguished Scientist and IEEE Senior Member, and holds courtesy faculty appointments at the University of Washington and the University of Toronto. His research interests span computer graphics and computer vision, and his recent work on virtual reality user interfaces studies in-headset VR video editing and review. His personal webpage is at: http://www.dgp.toronto.edu/~hertzman/index.html
Hideki Koike is a Professor at the Tokyo Institute of Technology. His research spans human-computer interaction as well as computer security and reliability. His recent work studies human interaction with omnidirectional spherical displays, covering view stabilization, tracking, and projection applications. His webpage is at: https://sites.google.com/site/koike/Home/Profile
Shannon Chen is a Research Scientist on the 360 Media team at Facebook. He is a contributor to the open-source Transform360 project on GitHub and the inventor of the gravitational predictor (G-predictor) and pyramid projection in dynamic streaming. He now contributes to dynamic streaming for Oculus Video and Facebook 360 videos. His personal webpage is at: https://research.fb.com/people/chen-shannon/
Steve Seitz is a Professor at the University of Washington. His research focuses on computer vision and computer graphics. He was twice awarded the Marr Prize and has received an NSF CAREER Award, an ONR Young Investigator Award, and an Alfred P. Sloan Fellowship. He is also a Director at Google, where he led the development of the Google Jump camera and other VR projects. His webpage is at: https://homes.cs.washington.edu/~seitz/
Marc Pollefeys is a Full Professor and Head of the Institute for Visual Computing in the Department of Computer Science at ETH Zurich. He is known for his work on 3D computer vision, as well as robotics, graphics, and machine learning. He was the first to develop a software pipeline to automatically turn photographs into 3D models. His personal webpage is at: https://www.inf.ethz.ch/personal/marc.pollefeys/index.html

ORGANIZING COMMITTEE

CONTACT US

Point of Contact:

Yu Chuan Su

University of Texas at Austin

Email: ycsu@cs.utexas.edu