Abstract
Optical flow estimation in omnidirectional videos faces two significant issues: the lack of benchmark datasets and the challenge of adapting perspective video-based methods to the omnidirectional setting. This paper proposes the first perceptually natural-synthetic omnidirectional benchmark dataset with a 360° field of view, FLOW360, comprising 40 different videos and 4,000 video frames. We conduct a comprehensive characteristic analysis and compare our dataset with existing optical flow datasets, demonstrating its perceptual realism, uniqueness, and diversity. To accommodate the omnidirectional nature, we present a novel Siamese representation Learning framework for Omnidirectional Flow (SLOF). We train our network in a contrastive manner with a hybrid loss function that combines a contrastive loss and an optical flow loss. Extensive experiments verify the proposed framework's effectiveness and show up to a 40% performance improvement over state-of-the-art approaches. Our FLOW360 dataset and code are available at https://siamlof.github.io/.
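For readers who want a concrete picture of the hybrid objective, the following minimal PyTorch sketch combines a supervised end-point-error flow loss with a SimSiam-style negative-cosine contrastive term over two siamese views. The function names, the embedding inputs `p1, z1, p2, z2`, and the weight `lam` are illustrative assumptions; the abstract does not specify the exact formulation used in SLOF.

```python
import torch
import torch.nn.functional as F


def flow_epe_loss(pred_flow, gt_flow):
    # End-point error: mean L2 distance between predicted and
    # ground-truth flow vectors (B, 2, H, W), taken over the
    # channel dimension (u, v).
    return torch.norm(pred_flow - gt_flow, p=2, dim=1).mean()


def neg_cosine(p, z):
    # SimSiam-style negative cosine similarity with a
    # stop-gradient on the target embedding z (both (B, D)).
    return -F.cosine_similarity(p, z.detach(), dim=1).mean()


def hybrid_loss(pred_flow, gt_flow, p1, z1, p2, z2, lam=0.1):
    # Hybrid objective: flow loss plus a symmetric contrastive
    # term over the two siamese views. `lam` is a hypothetical
    # weighting, not a value reported in the paper.
    contrastive = 0.5 * (neg_cosine(p1, z2) + neg_cosine(p2, z1))
    return flow_epe_loss(pred_flow, gt_flow) + lam * contrastive
```

The symmetric, stop-gradient form of the contrastive term follows common siamese representation learning practice; any scheme that pairs the two views (e.g., differently rotated renderings of the same 360° frame) could be substituted.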
Acknowledgements
This research was partially supported by NSF grants CNS-1908658 and NeTS-2109982 and a gift donation from Cisco. This article solely reflects the opinions and conclusions of its authors and not those of the funding agencies.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Bhandari, K., Duan, B., Liu, G., Latapie, H., Zong, Z., Yan, Y. (2022). Learning Omnidirectional Flow in 360° Video via Siamese Representation. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13668. Springer, Cham. https://doi.org/10.1007/978-3-031-20074-8_32
Print ISBN: 978-3-031-20073-1
Online ISBN: 978-3-031-20074-8
eBook Packages: Computer Science (R0)