The 3rd Workshop and Challenge on Recovering Partial Textured 3D Scans (SHARP)

Organizers

Winners will receive cash prizes worth EUR 8,000, sponsored by Artec 3D.

The workshop offers two modes of participation: a call for papers and a challenge:

The paper track invites submissions presenting novel findings in data-driven shape and texture processing. The topics of interest for the call for papers are listed below.

The theme of the challenge is the recovery of complete, high-quality 3D meshes from partial or noisy data. It comprises two challenges over three datasets:

  • In Challenge 1, the task is to recover a textured 3D scan from partial data. It comprises two tracks:
    • Track 1: recovery of textured human-body scans from partial data. This track uses the 3DBodyTex.v2 dataset, which contains 2,500 textured 3D scans. It is an extended version of the original 3DBodyTex.v1 dataset, first released at the International Conference on 3D Vision (3DV) 2018.
    • Track 2: recovery of textured object scans from partial data, i.e., recovering scans of generic objects using the 3DObjTex.v1 dataset from the online scan repository ViewShape. The dataset contains over 2,000 everyday objects of varying texture and geometric complexity.
  • In Challenge 2, the task is to recover fine object details with sharp edges from noisy, sparse scans in which the edges are smoothed. CC3D-PSE, a new version of the CC3D dataset introduced at the IEEE International Conference on Image Processing (ICIP) 2020, serves as the data for this challenge. It contains 50,000 pairs of CAD models and corresponding 3D scans, and every scan-CAD pair is annotated with sharp edges as parametric curves. Given a 3D scan with smoothed edges, participants must reconstruct the corresponding CAD model as a triangle mesh whose sharp edges are as close as possible to the ground-truth edges. This challenge also comprises two tracks:
    • Track 1: recovery of linear sharp edges. This track uses the subset of CC3D-PSE containing only linear sharp edges.
    • Track 2: recovery of sharp edges as linear, circular, and spline segments. This track uses the whole CC3D-PSE dataset.

This is the third edition of SHARP, following two successful editions held in conjunction with ECCV 2020 (European Conference on Computer Vision) and CVPR 2021.

Sponsors

Participate (Challenge)

Challenge
Recovering Partial Textured Scans

In this challenge, participants must accurately recover a complete textured 3D mesh from a partial 3D scan. It comprises two tracks:

Track 1: human-body scans
Track 2: object scans

Challenge
Recovering Sharp Edges

Given a 3D scan of an object with smoothed edges, reconstruct the corresponding CAD model as a triangle mesh whose sharp edges are as close as possible to the ground-truth edges.

Track 1: sharp lines
Track 2: sharp edges (circles, splines, lines)

Call for Papers (Paper Track)

  • Representation and evaluation of textured 3D data
  • Feature extraction from textured 3D scans
  • Generative models for textured 3D scans
  • Learning-based 3D reconstruction
  • Joint matching of texture and shape
  • Joint completion of texture and shape
  • Semantic 3D data reconstruction
  • Effective fusion of 3D and 2D data
  • Refinement of textured 3D data
  • Detection and refinement of 3D feature edges
  • High-level representations of 3D data
  • CAD modeling from unstructured 3D data

All accepted papers will be included in the CVPR 2022 proceedings. Papers will be peer-reviewed and must comply with the CVPR 2022 paper format and guidelines. Detailed formatting requirements for submissions are available on the submission page.

Schedule


Recovering Partial Textured Scans

Challenge 1

Track 1

Recovery of textured human-body scans from partial data. This track uses the 3DBodyTex.v2 dataset, which contains 2,500 textured 3D scans. It is an extended version of the original 3DBodyTex.v1 dataset, first released at the International Conference on 3D Vision (3DV) 2018.

Track 2

Recovery of textured object scans from partial data, i.e., recovering scans of generic objects using the 3DObjTex.v1 dataset from the online scan repository ViewShape. The dataset contains over 2,000 everyday objects of varying texture and geometric complexity.
  • Any custom procedure must be described in the submission and applied.
  • Submissions are quality-checked to control the level of defects.
  • The partial scans are generated synthetically.
  • For privacy, the face shape and texture must be blurred in all meshes, as in the 3DBodyTex data.
  • Faces and hands are excluded from the evaluation because the shape of the raw scans is unreliable in these regions.

What's new: in addition to the routines for generating synthetic partial data provided in the previous two editions (SHARP 2020 and SHARP 2021), routines for generating more realistic partial data will also be provided. Examples of partial scans for Track 1 and Track 2 are shown below.
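As a rough illustration of how a synthetic partial scan can be derived from a full mesh, the sketch below drops the faces furthest along a random viewing direction, mimicking single-viewpoint occlusion. This is a hypothetical stand-in, not the official generation routine; the function name `synthesize_partial_scan` and the `keep_ratio` parameter are assumptions for illustration only.

```python
import numpy as np

def synthesize_partial_scan(vertices, faces, keep_ratio=0.6, seed=0):
    """Hypothetical partial-scan synthesis: keep only the faces nearest
    the viewer along a random direction, mimicking self-occlusion."""
    rng = np.random.default_rng(seed)
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    centroids = vertices[faces].mean(axis=1)   # (F, 3) face centroids
    depth = centroids @ direction              # signed depth per face
    n_keep = max(1, int(keep_ratio * len(faces)))
    kept = faces[np.argsort(depth)[:n_keep]]   # closest faces survive
    used = np.unique(kept)                     # re-index so the mesh is self-contained
    remap = {v: i for i, v in enumerate(used)}
    new_faces = np.vectorize(remap.get)(kept)
    return vertices[used], new_faces

# Toy example: a tetrahedron reduced to half of its faces.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
pv, pf = synthesize_partial_scan(verts, faces, keep_ratio=0.5)
print(pf.shape[0])  # 2
```

The routines distributed with the challenge data should of course be preferred for actual submissions.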

Recovering Sharp Edges

Challenge 2

What's new: this challenge introduces the CC3D-PSE dataset, a new version of the CC3D dataset used in SHARP 2021. CC3D-PSE includes:

  • 50,000 pairs of 3D scans and triangle-mesh CAD models
  • sharp-edge annotations in the form of parametric curves, including linear, circular, and spline segments
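The official evaluation protocol is published with the challenge; as a rough proxy for comparing a recovered sharp edge with its ground-truth annotation, one can sample points along both parametric curves and compute a symmetric chamfer distance. The sketch below does this for linear segments only; the names `sample_segment` and `chamfer` are illustrative assumptions, not challenge code.

```python
import numpy as np

def sample_segment(p0, p1, n=50):
    """Sample n points uniformly along a linear edge segment."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1.0 - t) * p0 + t * p1

def chamfer(a, b):
    """Symmetric chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# An edge recovered exactly on top of the ground truth scores 0.
gt   = sample_segment(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
pred = sample_segment(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
print(chamfer(gt, pred))  # 0.0
```

Circular and spline segments would be handled the same way, only with a different sampling function per curve type.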

Track 1

Recovery of linear sharp edges. This track uses the subset of CC3D-PSE containing only linear sharp edges.

Track 2

Recovery of sharp edges as linear, circular, and spline segments. This track uses the whole CC3D-PSE dataset.

Submit Your Challenge Solution

Coming soon

Programme

SHARP will be held on 19 June 2022.
The workshop will follow a hybrid format.

13:30 – 13:35  Opening
13:35 – 13:50  Presentation of SHARP Challenges
13:50 – 14:40  Plenary Talk – Prof. Angela Dai
14:40 – 14:55  Coffee Break
14:55 – 15:15  Finalists 1: Points2ISTF – Implicit Shape and Texture Field from Partial Point Clouds – Jianchuan Chen
15:15 – 15:35  Finalists 2: 3D Textured Shape Recovery with Learned Geometric Priors – Lei Li
15:35 – 15:55  Finalists 3: Parametric Sharp Edges from 3D Scans – Anis Kacem
15:55 – 16:45  Plenary Talk – Prof. Tolga Birdal
16:45 – 16:55  Announcement of Results
16:55 – 17:10  Analysis of Results
17:10 – 17:30  Panel Discussion
17:30 – 17:35  Closing Remarks

Prof. Angela Dai

Technical University of Munich


Prof. Tolga Birdal

Imperial College London

Bio: Angela Dai is an Assistant Professor at the Technical University of Munich where she leads the 3D AI group. Prof. Dai’s research focuses on understanding how the 3D world around us can be modeled and semantically understood. Previously, she received her PhD in computer science from Stanford in 2018 and her BSE in computer science from Princeton in 2013. Her research has been recognized through a Eurographics Young Researcher Award, ZDB Junior Research Group Award, an ACM SIGGRAPH Outstanding Doctoral Dissertation Honorable Mention, as well as a Stanford Graduate Fellowship.

Bio: Tolga Birdal is an assistant professor in the Department of Computing of Imperial College London. Previously, he was a senior Postdoctoral Research Fellow at Stanford University within the Geometric Computing Group of Prof. Leonidas Guibas. Tolga defended his master's and Ph.D. theses at the Computer Vision Group of the Chair for Computer Aided Medical Procedures, Technical University of Munich, led by Prof. Nassir Navab. He was also a Doktorand at Siemens AG under the supervision of Dr. Slobodan Ilic, working on "Geometric Methods for 3D Reconstruction from Large Point Clouds". His current foci of interest involve geometric machine learning and 3D computer vision. His more theoretical work is aimed at investigating and interrogating limits in geometric computing and non-Euclidean inference, as well as principles of deep learning. Tolga has several publications at well-respected venues such as NeurIPS, CVPR, ICCV, ECCV, T-PAMI, ICRA, IROS, ICASSP and 3DV. Aside from his academic life, Tolga has co-founded multiple companies, including BeFunky, a widely used web-based image editing platform.

Towards Commodity 3D Content Creation

With high-quality imaging, and even depth imaging, now available in commodity sensors comes the potential to democratize 3D content creation. State-of-the-art reconstruction from commodity RGB and RGB-D sensors has achieved impressive tracking, but the resulting reconstructions remain far from usable in practical applications such as mixed reality or content creation, since they do not match the high quality of artist-modeled 3D graphics content: models remain incomplete, unsegmented, and with low-quality texturing. In this talk, we will address these challenges: I will present a self-supervised approach to learn effective geometric priors from limited real-world 3D data, then discuss object-level understanding from a single image, followed by realistic 3D texturing from real-world image observations. This will help to enable a closer step towards commodity 3D content creation.

Rigid & Non-Rigid Multi-Way Point Cloud Matching via Late Fusion

Correspondences fuel a variety of applications from texture-transfer to structure from motion. However, simultaneous registration or alignment of multiple, rigid, articulated or non-rigid partial point clouds is a notoriously difficult challenge in 3D computer vision. With the advances in 3D sensing, solving this problem becomes even more crucial than ever as the observations for visual perception hardly ever come as a single image or scan. In this talk, I will present an unfinished quest in pursuit of generalizable, robust, scalable and flexible methods, designed to solve this problem. The talk is composed of two sections diving into (i) MultiBodySync, specialized in multi-body & articulated generalizable 3D motion segmentation as well as estimation, and (ii) SyNoRim, aiming at jointly matching multiple non-rigid shapes by relating learned functions defined on the point clouds. Both of these methods utilize a family of recently matured graph optimization techniques called synchronization as differentiable modules to ensure multi-scan / multi-view consistency in the late stages of deep architectures. Our methods can work on a diverse set of datasets and are general in the sense that they can solve a larger class of problems than the existing methods.