By ZHOU Fangying
Despite technological limitations and concerns over job elimination, fashion designers are exploring generative AI.
When models walked down the runway wearing Shuting Qiu’s latest collection during Shanghai Fashion Week this spring, there was one important difference: the clothes were designed by AI, at least in part.
Qiu worked with AI startup Tiamat and its MorpherVLM model. To create the patterns, Morpher learned from her past designs and produced candidates for her to choose from and improve on.
Almost as good as a human
To learn any given artist’s style, MorpherVLM needs between 50 and 100 pieces of previous work. The training process takes about two weeks.
So far Morpher's role is limited to suggesting patterns and cuts. It is not yet capable of taking charge of the entire process from brainstorming to finishing.
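Tiamat has not published how Morpher is trained, but the 50-to-100-piece requirement maps onto a familiar pattern in open-source image models: adapting a general model to one designer’s style with a small, captioned archive. The sketch below is illustrative only; the folder layout, the caption files, and the use of LoRA adapters are assumptions, not a description of Morpher.

```python
# Illustrative only: Morpher's internals are not public. This sketches how a
# small designer archive (50-100 captioned images) might be organized to
# adapt an open image model with LoRA; the training loop itself is omitted.
from pathlib import Path

from peft import LoraConfig  # open-source adapter library, assumed here
from torch.utils.data import Dataset


class DesignerArchive(Dataset):
    """A designer's past work: one image per look, one caption file per image."""

    def __init__(self, root: str):
        self.images = sorted(Path(root).glob("*.jpg"))
        if not 50 <= len(self.images) <= 100:
            print(f"note: {len(self.images)} images; Tiamat cites 50-100 pieces")

    def __len__(self) -> int:
        return len(self.images)

    def __getitem__(self, idx: int) -> dict:
        path = self.images[idx]
        caption = path.with_suffix(".txt").read_text().strip()
        return {"image": str(path), "caption": caption}


# A low-rank adapter keeps fine-tuning cheap enough for a small archive; the
# target module names follow common diffusion-model attention layers.
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["to_q", "to_k", "to_v"])

archive = DesignerArchive("archive/shuting_qiu")  # hypothetical local folder
```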
Generative AI works with fashion designers in two main ways, says XU Muhan, head of products at Tiamat. In the first, it brainstorms the “big picture,” such as the color tone, the cut, and the patterns, after which the designer tweaks the details. In the second, the algorithm modifies a draft to the designer’s specifications: trying long sleeves instead of short, for example. Either way, the AI draws faster than any human assistant.
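Morpher itself is proprietary, but the two modes map onto open-source equivalents. The sketch below uses Hugging Face’s diffusers library as a stand-in: a text-to-image call for the “big picture” brief, and an instruction-guided edit for the “long sleeves instead of short” kind of revision. The checkpoints and prompts are placeholders, not anything Tiamat uses.

```python
# Illustrative only: Hugging Face diffusers stands in for a proprietary model
# like Morpher. Mode 1 drafts the "big picture" from a brief; mode 2 revises
# an existing draft to the designer's specification.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline, StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Mode 1: brainstorm color tone, cut, and pattern from a short text brief.
text2img = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1"  # placeholder checkpoint
).to(device)
draft = text2img(
    "fashion lookbook sketch, silk midi dress, warm coral tone, floral jacquard"
).images[0]

# Mode 2: modify the draft to spec, e.g. swap short sleeves for long ones.
editor = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix"
).to(device)
revision = editor(
    "make the sleeves long instead of short",
    image=draft,
    image_guidance_scale=1.5,
).images[0]
revision.save("revision.png")
```

In practice a designer would loop on the second mode, issuing one small instruction at a time, which is where the speed advantage over a human assistant shows up.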
Besides Tiamat, Fabrie and Style3D have attracted a lot of media attention and venture capital. Fabrie allows designers to make edits using text prompts, example photos, and even doodles: “use less orange,” “make this part less noisy,” “puffier outline.” It's just like talking to, or at least sending text messages to, a human colleague.
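Fabrie’s interface is not public, so the following sketch only illustrates the doodle-driven part of that description with open components: a ControlNet “scribble” model conditions generation on a rough drawing while a text prompt carries notes like “less orange” or “puffier outline.” The file names and model IDs are assumptions.

```python
# Illustrative only: not Fabrie's API. An open ControlNet "scribble" model
# shows how a doodle plus a short text note can steer an image edit.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

device = "cuda" if torch.cuda.is_available() else "cpu"

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-scribble")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # any SD 1.5 base works
    controlnet=controlnet,
).to(device)

doodle = load_image("doodle.png")  # hypothetical rough sketch from the designer
result = pipe(
    "puff-sleeve blouse, muted palette, less orange, clean puffier outline",
    image=doodle,  # the doodle constrains the silhouette the prompt describes
).images[0]
result.save("edited_design.png")
```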
Understanding the past, preparing for the future
WANG Miaozi, the owner of Poppy Wang, says AI is an intrinsic part of her process. Her team brainstorms with the image-generating app Midjourney, sketches and edits with Fabrie, and, before ordering factory samples, tests what the final product will look like in Style3D.
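None of the three tools in Wang’s workflow exposes a common programming interface (Midjourney, notably, has no official API), so any end-to-end automation is speculative. The stub below only sketches the shape of the pipeline; every function name is a placeholder for a manual step or a vendor-specific integration.

```python
# Purely hypothetical: the shape of the three-step workflow. The function
# names are placeholders, not real APIs for Midjourney, Fabrie, or Style3D.
from dataclasses import dataclass, field


@dataclass
class Draft:
    image_path: str
    notes: list[str] = field(default_factory=list)


def brainstorm_with_midjourney(brief: str) -> Draft:
    """Step 1: generate mood and reference imagery from a text brief."""
    raise NotImplementedError("manual step; no official API")


def edit_with_fabrie(draft: Draft, instruction: str) -> Draft:
    """Step 2: sketch and revise using prompts, example photos, or doodles."""
    raise NotImplementedError("placeholder for a vendor integration")


def preview_in_style3d(draft: Draft) -> str:
    """Step 3: render a digital sample before ordering a factory one."""
    raise NotImplementedError("placeholder for a vendor integration")


def design_round(brief: str, revisions: list[str]) -> str:
    draft = brainstorm_with_midjourney(brief)
    for note in revisions:
        draft = edit_with_fabrie(draft, note)
    return preview_in_style3d(draft)
```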
LIU Chen, founder of Style3D, says brands spend a lot of time and money ordering and reordering factory samples. With digital samples, the process can be 30 percent more efficient.
The criticism that pictures generated by Midjourney look “too AI” is hardly surprising: a model can only learn from its training corpus, and users tend to fall back on the same prompts without realizing it. It is also a reminder that humans will remain in charge for the foreseeable future.
Mature designers and large companies are actively exploring how to use AI more creatively. As CHEN Dabo, founder of Fabrie, puts it, training a model is not unlike hiring a designer who understands a brand’s past and has new ideas for its future.
No one enjoys labeling
Training technically takes two weeks, but in practice it can take up to a year to integrate AI into a label’s creative process. Given the hectic pace of fashion, a team may not bother to tweak and fine-tune a model. It doesn’t help that training entails tedious data labeling, which no one enjoys.
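What that labeling actually looks like varies by tool and is not spelled out here; as a purely illustrative example, a minimal record for one archived look might pair the image with a handful of structured attributes. The fields below are assumptions, not Tiamat’s or Fabrie’s schema.

```python
# Illustrative only: a minimal labeling record for one archived look. The
# fields are assumptions about what a style model might need, not any
# vendor's actual schema.
import json
from dataclasses import asdict, dataclass


@dataclass
class DesignLabel:
    image: str
    silhouette: str
    palette: list[str]
    fabric: str
    season: str


labels = [
    DesignLabel(
        image="archive/ss24_look_01.jpg",
        silhouette="A-line midi dress",
        palette=["coral", "cream"],
        fabric="silk jacquard",
        season="SS24",
    ),
    # ...one record per image, 50-100 per designer, each filled in by hand.
]

with open("labels.jsonl", "w") as f:
    for label in labels:
        f.write(json.dumps(asdict(label)) + "\n")
```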
Xu says his team is trying to improve Morpher. Chen says Fabrie is also racing to iterate. The more users there are, the richer the training data.
The challenge is to integrate user feedback while keeping pace with the technology itself. It’s a brave and beautiful new world out there.