Garment sewing patterns are the design language behind clothing, yet their current vector-based digital representations weren’t built with machine learning in mind. Vector-based representation encodes a sewing pattern as a discrete set of panels, each defined as a sequence of lines and curves, stitching information between panels and the placement of each panel around a body.
However, this representation poses two major challenges for neural networks: discontinuity in the latent space between patterns with different topologies, and limited generalization to garment topologies unseen in the training data. In this work, we introduce GarmentImage, a unified raster-based sewing pattern representation that addresses these challenges. GarmentImage encodes a garment sewing pattern’s geometry, topology, and placement into multi-channel regular grids.
Machine learning models trained on GarmentImage achieve seamless transitions between patterns with different topologies and show better generalization than models trained on the vector-based representation. We demonstrate the effectiveness of GarmentImage across three applications: pattern exploration in latent space, text-based pattern editing, and image-to-pattern prediction. The results show that GarmentImage achieves superior performance on these applications using only simple convolutional networks.
GarmentImage integrates the discrete collection of 2D panels, the connectivity among panels, and the placement of panels into multi-channel 2D grids. Each grid cell contains an inside/outside flag indicating occupancy, four edges with associated edge types embedding stitching information, and a local deformation matrix capturing the panel geometry. The placement of a panel around the body is implicitly represented by the location of its associated cells on the 2D grid.
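To make the per-cell layout concrete, the sketch below builds a grid with one hypothetical channel arrangement: one occupancy channel, four edge-type channels, and a flattened 2x2 deformation matrix. The resolution, channel order, and helper names are assumptions for illustration, not the paper's actual encoding.

```python
import numpy as np

# Hypothetical channel layout (the paper's exact layout may differ):
#   channel 0     : inside/outside occupancy flag
#   channels 1-4  : edge-type label for each of the four cell edges
#   channels 5-8  : flattened 2x2 local deformation matrix [a, b, c, d]
GRID = 16                 # grid resolution (assumed)
N_CHANNELS = 1 + 4 + 4

def empty_garment_image(grid=GRID):
    """All cells outside, no stitches, identity deformation."""
    img = np.zeros((grid, grid, N_CHANNELS), dtype=np.float32)
    img[..., 5] = 1.0     # deformation = identity: [[1, 0], [0, 1]]
    img[..., 8] = 1.0
    return img

def mark_panel(img, r0, r1, c0, c1):
    """Flag a rectangular block of cells as belonging to a panel."""
    img[r0:r1, c0:c1, 0] = 1.0
    return img

img = mark_panel(empty_garment_image(), 2, 10, 3, 13)
print(int(img[..., 0].sum()))   # 80 occupied cells (8 rows x 10 columns)
```

In a real encoder the edge-type channels would record which cell edges lie on a panel boundary and which stitches they participate in, and the deformation channels would store the mapping from the regular grid cell to the panel's actual 2D geometry.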
A GarmentImage is automatically encoded from a sewing pattern in vector format and can be decoded back to the vector format.
We demonstrate that a VAE model trained on GarmentImage has a smoother latent space than a VAE model trained on the vector-based representation. This allows users to explore the latent space of garment sewing patterns more easily.
When inferring a pattern from an input such as an image, we can directly predict the GarmentImage as a whole and then procedurally reconstruct the discrete panel set, effectively sidestepping the challenge of discrete panel selection. Additionally, garments with similar visual appearance have similar GarmentImage representations regardless of topology. These properties improve the generalization ability of machine learning models trained on GarmentImage, leading to better pattern prediction performance on garments with unseen topologies.
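One way to picture the procedural reconstruction step is as connected-component labeling on the predicted occupancy channel: each connected region of occupied cells becomes one discrete panel. This is a simplified stand-in for the paper's decoder (which also rebuilds boundary curves and stitches from the edge and deformation channels); the function and variable names are illustrative.

```python
import numpy as np
from collections import deque

def extract_panels(occupancy):
    """Recover discrete panels from a boolean occupancy grid via
    4-connected flood fill. Returns a label map and the panel count."""
    h, w = occupancy.shape
    labels = np.zeros((h, w), dtype=int)
    n_panels = 0
    for r in range(h):
        for c in range(w):
            if occupancy[r, c] and not labels[r, c]:
                n_panels += 1
                labels[r, c] = n_panels
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and occupancy[ny, nx] and not labels[ny, nx]):
                            labels[ny, nx] = n_panels
                            queue.append((ny, nx))
    return labels, n_panels

# Two disjoint occupied regions decode to two panels.
occ = np.zeros((8, 8), dtype=bool)
occ[1:4, 1:4] = True    # first panel
occ[5:7, 4:8] = True    # second panel
labels, n = extract_panels(occ)
print(n)   # 2
```

Because the number of panels falls out of the grid itself, a model never has to predict a discrete panel count or topology class explicitly, which is the property the paragraph above attributes to the representation.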
@inproceedings{tatsukawa2025garmentimage,
title={GarmentImage: Raster Encoding of Garment Sewing Patterns with Diverse Topologies},
author={Tatsukawa, Yuki and Qi, Anran and Shen, I-Chao and Igarashi, Takeo},
booktitle={ACM SIGGRAPH 2025 Conference Proceedings},
year={2025},
}