Efficient and scalable depthmap fusion

Enliang Zheng, Enrique Dunn, Rahul Raguram, Jan Michael Frahm

Research output: Contribution to conference › Paper

  • 6 Citations

Abstract

The estimation of a complete 3D model from a set of depthmaps is a data-intensive task aimed at mitigating measurement noise in the input data by leveraging the inherent redundancy of overlapping multi-view observations. In this paper we propose an efficient depthmap fusion approach that reduces the memory complexity associated with volumetric scene representations. By virtue of reducing the memory footprint, we are able to process an increased reconstruction volume at greater spatial resolution. Our approach also improves upon state-of-the-art fusion techniques by approaching the problem in an incremental online setting instead of batch-mode processing. In this way, we are able to handle an arbitrary number of input images at high pixel resolution and facilitate a streaming 3D processing pipeline. Experiments demonstrate the effectiveness of our proposal both for 3D modeling from internet-scale crowd-sourced data and for close-range 3D modeling from high-resolution video streams.
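
The abstract gives no implementation details, so the following is only a minimal sketch of the general idea of incremental (online) volumetric depthmap fusion: each incoming depthmap is back-projected and integrated into a sparse, hash-indexed voxel grid via running-average truncated signed distance updates, so that voxels are allocated only near observed surfaces and depthmaps can be processed one at a time. All names and constants here (fuse_depthmap, VOXEL_SIZE, TRUNCATION, stride) are illustrative assumptions, not the authors' formulation.

# Illustrative sketch only (not the authors' method): online fusion of
# depthmaps into a sparse, hash-indexed voxel grid using running-average
# truncated signed distance (TSDF) updates. Allocating voxels only inside
# a narrow band around observed surfaces is one common way to keep the
# memory footprint of a volumetric representation small.
import numpy as np

VOXEL_SIZE = 0.01    # assumed voxel edge length, in metres
TRUNCATION = 0.04    # assumed truncation band around the surface, in metres

volume = {}          # voxel index (i, j, k) -> (tsdf value, accumulated weight)

def fuse_depthmap(depth, K, cam_to_world, stride=4):
    """Incrementally integrate one H x W depthmap (metres), with 3x3 intrinsics K
    and 4x4 camera-to-world pose, into the global sparse volume."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(0, w, stride), np.arange(0, h, stride))
    d = depth[vs, us]
    valid = d > 0
    us, vs, d = us[valid], vs[valid], d[valid]

    # Back-project sampled pixels to camera coordinates, then to world coordinates.
    x = (us - cx) * d / fx
    y = (vs - cy) * d / fy
    pts = np.stack([x, y, d, np.ones_like(d)], axis=1)
    pts_world = (cam_to_world @ pts.T).T[:, :3]
    cam_center = cam_to_world[:3, 3]

    # Update voxels in a short band along each viewing ray around the surface point.
    for p in pts_world:
        ray = p - cam_center
        ray /= np.linalg.norm(ray)
        for offset in np.arange(-TRUNCATION, TRUNCATION + 1e-9, VOXEL_SIZE):
            q = p + ray * offset
            key = tuple(np.floor(q / VOXEL_SIZE).astype(int))
            tsdf = np.clip(-offset / TRUNCATION, -1.0, 1.0)  # positive in front of the surface
            value, weight = volume.get(key, (0.0, 0.0))
            volume[key] = ((value * weight + tsdf) / (weight + 1.0), weight + 1.0)

Under these assumptions, depthmaps would simply be streamed through fuse_depthmap one at a time, and a surface could be extracted afterwards from the occupied voxels (e.g. with marching cubes).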

Other

Other: 2012 23rd British Machine Vision Conference, BMVC 2012
Country: United Kingdom
City: Guildford, Surrey
Period: 9/3/12 - 9/7/12

All Science Journal Classification (ASJC) codes

  • Computer Vision and Pattern Recognition

Cite this

Zheng, E., Dunn, E., Raguram, R., & Frahm, J. M. (2012). Efficient and scalable depthmap fusion. Paper presented at 2012 23rd British Machine Vision Conference, BMVC 2012, Guildford, Surrey, United Kingdom. https://doi.org/10.5244/C.26.34