Page 173 of Joint Austrian Computer Vision and Robotics Workshop 2020
Figure 3: Increasing the size of a texture patch (a) by scaling (b) or repetition (c).

…scaled uniformly but repeated. Knowledge of the materials used is needed to simulate this behavior. Elements like buttons or pockets also do not scale, or scale only under certain constraints (e.g., seams or zippers scale in one direction). Prints on a garment usually also scale independently of the pattern of the fabric. This behavior usually cannot be described by a set of global rules. Therefore, the proposed system provides a way to adjust the scaling behavior for each element independently.

3.1. Input

The method takes a 3D garment model created through photogrammetry and a size chart as its input. The garment model consists of a mesh and a mapped texture. A parametric body model, consisting of a pose and a shape description, is registered to the 3D garment model. The measurements of the grading table are associated with the parametric body model in the form of edge paths.

3.2. Semantic Region Segmentation

First, the garment mesh and its texture are input to a machine learning algorithm which assigns a semantic meaning to each texel (e.g., collar, seam, button). Moreover, the same algorithm labels background and mannequin texels for removal. The semantic meaning of the map can be transferred to the mesh's faces and vertices through the texture mapping (a label-transfer sketch follows Section 4).

3.3. Sizing the Mesh

The grading table describes how different elements like sleeves, collars, and legs scale between the sizes. Each measurement is associated with an edge path in the parametric body model. These paths are projected onto the garment's mesh. The actual scaling transformation is performed through Laplacian Mesh Processing [3]. Parts which should not scale obtain high regularization weights, and the edge path lengths act as the data terms of the desired transformation (see Figure 2; a minimal energy is sketched after Section 4).

Figure 4: Texture decomposition of (a) into illumination (b) and material (c).

3.4. Sizing the Texture

Simply scaling a garment's mesh and texture based on the grading table and the parametric body model is not enough, because fabrics are not stretched; rather, more of the fabric is used (Figure 3). This is achieved by repeating the texture instead of scaling it. The pattern repetition is aligned with the sewing/cutting lines of the garment, which are derived from the parametric body in the same way as the measurement paths. Finally, the texture needs to be preprocessed to separate the material's diffuse color from large-scale lighting effects, such as wrinkles, which should not be repeated. Figure 4 shows the decomposition (a tiling and decomposition sketch follows Section 4).

4. Conclusion

We have shown a method to generate additional sizes of a garment from a single scanned size and grading tables. The method helps retailers and manufacturers to efficiently capture their entire product range, e.g., for virtual fashion try-on. Moreover, this work demonstrates how to overcome a major limitation of photogrammetry: its inability to create 3D models of items which are not available for scanning.
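The following is a minimal sketch of the label-transfer step described in Section 3.2, assuming per-texel labels in a 2D map, per-vertex UV coordinates, and triangle faces. The nearest-texel lookup at each face's UV centroid, the V-axis flip, and the function name are illustrative assumptions, not the authors' implementation.

import numpy as np

def face_labels_from_texel_labels(label_map, uv, faces):
    """Assign each mesh face the semantic label of the texel at its UV centroid.

    label_map : (H, W) integer array of per-texel labels (collar, seam, button, ...).
    uv        : (V, 2) per-vertex texture coordinates in [0, 1].
    faces     : (F, 3) vertex indices per triangle.
    Returns a (F,) array of face labels.
    """
    h, w = label_map.shape
    centroids = uv[faces].mean(axis=1)  # (F, 2) UV centroid per face
    cols = np.clip(np.round(centroids[:, 0] * (w - 1)).astype(int), 0, w - 1)
    # Flip V because image rows usually run top-to-bottom (convention assumption).
    rows = np.clip(np.round((1.0 - centroids[:, 1]) * (h - 1)).astype(int), 0, h - 1)
    return label_map[rows, cols]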
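To make the sizing step of Section 3.3 concrete, here is a minimal Laplacian-editing energy in the spirit of [3]; the particular constraint sets, weights, and target positions are illustrative assumptions rather than the authors' exact formulation:

\min_{\mathbf{V}'} \;
\lVert \mathbf{L}\,\mathbf{V}' - \boldsymbol{\delta} \rVert^2
\;+\; \sum_{i \in \mathcal{D}} \lVert \mathbf{v}'_i - \mathbf{t}_i \rVert^2
\;+\; \sum_{j \in \mathcal{R}} w_j \,\lVert \mathbf{v}'_j - \mathbf{v}_j \rVert^2

Here \mathbf{L} is the mesh Laplacian and \boldsymbol{\delta} = \mathbf{L}\mathbf{V} are the differential coordinates of the scanned mesh. The set \mathcal{D} collects vertices on the projected measurement paths, with targets \mathbf{t}_i chosen so the paths reach the lengths prescribed by the grading table (the data term), while \mathcal{R} contains vertices of elements that should not scale, pulled back to their original positions \mathbf{v}_j by large regularization weights w_j.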
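As a complement to Section 3.4, the following self-contained sketch assumes the texture is an RGB NumPy array. It enlarges a patch by repetition rather than scaling (Figure 3) and splits a texture into a low-frequency illumination layer and a repeatable material layer (Figure 4) with a simple Retinex-style Gaussian filter, which stands in for whatever decomposition the authors actually use; all function names are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def enlarge_by_scaling(patch, factor):
    """Naive enlargement: stretches the weave and any print (as in Figure 3b)."""
    return zoom(patch, (factor, factor, 1), order=1)

def enlarge_by_repetition(patch, factor):
    """Enlargement by using 'more fabric': tile the patch and crop (as in Figure 3c)."""
    h, w, _ = patch.shape
    target_h, target_w = int(round(h * factor)), int(round(w * factor))
    reps_y = int(np.ceil(target_h / h))
    reps_x = int(np.ceil(target_w / w))
    return np.tile(patch, (reps_y, reps_x, 1))[:target_h, :target_w]

def split_illumination_material(texture, sigma=15.0):
    """Retinex-style split into low-frequency shading and repeatable material color."""
    eps = 1e-6
    luminance = texture.mean(axis=2, keepdims=True) + eps
    illumination = gaussian_filter(luminance, sigma=(sigma, sigma, 0))
    material = texture / illumination  # may exceed 1 in dark shading; clip if needed
    return illumination, material

In the actual pipeline the repetition would be aligned with the sewing/cutting lines of the garment rather than with the image axes, as described above.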
References

[1] R. Brouet, A. Sheffer, L. Boissieux, and M.-P. Cani. Design preserving garment transfer. ACM Trans. Graph., 31(4), July 2012.
[2] L. Liu, Z. Su, X. Fu, L. Liu, R. Wang, and X. Luo. A data-driven editing framework for automatic 3D garment modeling. Multimedia Tools Appl., 76(10):12597–12626, May 2017.
[3] O. Sorkine. Laplacian Mesh Processing. In Y. Chrysanthou and M. Magnor, editors, Eurographics 2005 - State of the Art Reports. The Eurographics Association, 2005.
[4] Y. Xu, S. Yang, W. Sun, L. Tan, K. Li, and H. Zhou. Virtual garment using joint landmark prediction and part segmentation. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pages 1247–1248, 2019.
Title
Joint Austrian Computer Vision and Robotics Workshop 2020
Publisher
Graz University of Technology
Place
Graz
Date
2020
Language
English
License
CC BY 4.0
ISBN
978-3-85125-752-6
Dimensions
21.0 x 29.7 cm
Pages
188
Categories
Computer Science
Technology