
Gradient 𝚫 Spaces Research Group

Department of Civil and Environmental Engineering

Stanford University


About Us

Welcome to the Gradient 𝚫 Spaces Research Group. The group is part of the Department of Civil and Environmental Engineering at Stanford University, under the Schools of Engineering and Sustainability. Our research and educational activities focus on developing quantitative, data-driven methods that learn from real-world visual data to generate, predict, and simulate new or renewed built environments that place people at the center.

Our mission is to create sustainable, inclusive, and adaptive built environments that support our current and future physical and digital needs. We are particularly interested in creating spaces that blend from the fully physical (real reality) to the fully digital (virtual reality) and anything in between, using mixed reality and multi-level design (i.e., of buildings, processes, UXs, etc.). We believe that by cross-pollinating the two domains we can achieve higher immersion, and we view these spaces as a step toward more equitable living conditions. Hence, we aim to develop methods that work in real-world settings on a global scale.

To achieve the above, we are building a cross- and interdisciplinary team that is diverse and well-rounded. Most importantly, we are driven by curiosity and learning, as is everything we do.

To learn more about this vision, you can read this short story illustrating that future and its impact on designers:
A Day in the Life of an Architect in the Gradient World

Research Highlights

Volumetric Semantically Consistent 3D Panoptic Mapping
Miao Yang, Iro Armeni, Marc Pollefeys, Daniel Barath
arXiv preprint
[pdf]

Q-REG: End-to-End Trainable Point Cloud Registration with Surface Curvature
Shengze Jin, Daniel Barath, Marc Pollefeys, Iro Armeni
3DV 2024
[pdf]

SGAligner: 3D Scene Alignment with Scene Graphs
Sayan Deb Sarkar, Ondrej Miksik, Marc Pollefeys, Daniel Barath, Iro Armeni
ICCV 2023
[pdf]   [website]   [code]   [benchmark]