Jan 22, 2025

AI Product Configurator for the Fashion Industry

Building a world-class AI product configurator for the fashion industry that scaled to thousands of customers.

3D renders for product configurators are quickly becoming obsolete with the advent of generative image models like Stable Diffusion and Flux. Our client wanted the highest quality product configurator on the market for their customers, so we built them exactly that.

BTW! If you're a business looking for help getting tangible value out of (or need help building) production AI systems, my co-founder and I would love to talk. Feel free to reach out: https://cal.com/exafloplabs/30min

Customized shoe in different scenes

The Challenge

  1. Complex Customization Requirements
    The configurator had to integrate with thousands of custom materials like alligator, raffia, oxford mesh, denim, etc. Not only did we need to accurately apply these materials to the product, but we also needed to allow users to apply custom colors to each of those materials. With BILLIONS of potential combinations, we had to use a custom generative AI solution.
  2. High-Fidelity Visuals, Faster
    While 3D rendering can produce output that is good enough, it still looks like a 3D render. A single render also takes a human days to weeks to create, so with billions of potential images, that approach was out of the question. We needed a pipeline that let users customize their product and get a photorealistic result in under a minute.
  3. Scalability and Performance
    Generating one good output is a cute project, but our client needed to scale to hundreds of thousands of users. We needed to build a GPU infrastructure that could accommodate spikes in traffic and handle the requests without exceeding our client’s monthly budget.
AI Product Configurator

Our Approach

  1. Leveraging Generative Models
    In terms of the product customization itself, we spent 3 weeks working on a ComfyUI workflow that produced consistent, high fidelity results. Users select segments of a 3d model of the product on the frontend (along with the material/color they picked for each segment), and we send the masks of those segments along with normal, depth, and UV maps to our ComfyUI workflow. Given these inputs, we generate the entire shoe in each material using Stable Diffusion, “cut” out the segment where each material should be applied, and then composite it all together using a pass through Flux. In order to get high-quality materials, we even trained LoRAs on each material to ensure maximum fidelity in the final result.
  2. Workflow Orchestration
    Once we had our custom ComfyUI workflow, we converted it into an API we could call whenever a user needed to generate an output. We started with serverless GPUs for inference but moved to dedicated servers for scale. Dedicated servers cut generation time by almost a minute per generation (no cold starts), and parallelizing our workflow reduced it further, to a final generation time of under 30 seconds.
  3. Iterative Prototyping & Feedback Loops
    Throughout development, we collaborated closely with our client: weekly demos kept them updated, and we made changes on the fly during the week whenever they needed something for their own demos to internal and external stakeholders and customers.
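The “cut out each segment and composite” step in our approach above can be sketched in plain Python. Note this is an illustrative sketch, not our production code: `composite_segments` and the grid-of-tuples image representation are stand-ins for the real image tensors that flow through the ComfyUI workflow.

```python
def composite_segments(base_render, material_renders, segment_masks):
    """Composite per-material renders into the base render.

    base_render:      H x W grid (list of lists) of RGB tuples
    material_renders: dict material name -> H x W grid of RGB tuples,
                      each a full-product render in that material
    segment_masks:    dict material name -> H x W grid of booleans,
                      True where that material's segment lives
    """
    # Copy so the input render is left untouched.
    out = [row[:] for row in base_render]
    for name, mask in segment_masks.items():
        render = material_renders[name]
        for y, mask_row in enumerate(mask):
            for x, inside in enumerate(mask_row):
                if inside:
                    # Take this pixel from the material's full render.
                    out[y][x] = render[y][x]
    return out
```

In production, the equivalent of this compositing happens on full-resolution images, followed by a Flux pass to blend seams between segments.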
ComfyUI Virtual Try-On Workflow
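The parallel dispatch described in the orchestration step can be sketched as follows; `run_workflow` here is a hypothetical stand-in for the HTTP call into our ComfyUI API, and the thread-pool fan-out is a simplification of the real GPU scheduling.

```python
from concurrent.futures import ThreadPoolExecutor


def generate_batch(requests, run_workflow, max_workers=4):
    """Dispatch one workflow call per request in parallel, preserving order.

    requests:     iterable of request payloads (one per generation)
    run_workflow: callable that submits a payload to the generation API
                  and blocks until the result is ready
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map returns results in the same order as the inputs.
        return list(pool.map(run_workflow, requests))
```

Because each call spends most of its time waiting on the GPU server, thread-based parallelism is enough to keep several requests in flight at once.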

The Solution

  1. Dynamic Product Visualization
    We built for two use cases: one for end-consumers, and one for internal designers at large retailers. For the consumer use case, merchandisers pre-select the POVs, materials, and even colors they’d like users to choose from when customizing their product. This pre-generation step allowed us to significantly reduce inference time at scale.

For the business use case (internal designers who need to visualize concepts to pitch to internal stakeholders), users need more customizability and are OK with longer processing times, so we built a system that let them choose any POV, color, and material (from a library of thousands) for their product.

In both use cases, we gave users the ability to generate their own background for the final result.
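The pre-generation step for the consumer use case can be sketched like this; `variant_key`, `pregenerate`, and the key scheme are illustrative, not the production naming:

```python
import hashlib
from itertools import product


def variant_key(pov, material, color):
    """Stable cache key for one pre-rendered variant (illustrative scheme)."""
    raw = f"{pov}|{material}|{color}".encode()
    return hashlib.sha256(raw).hexdigest()[:16]


def pregenerate(povs, materials, colors, render_fn):
    """Render every merchandiser-approved combination ahead of time.

    render_fn is a stand-in for a call into the generation pipeline;
    the returned dict maps variant keys to finished images so the
    configurator can serve them with a cache lookup instead of a
    GPU round-trip.
    """
    return {
        variant_key(p, m, c): render_fn(p, m, c)
        for p, m, c in product(povs, materials, colors)
    }
```

Constraining the combination space to what the merchandiser pre-selected is what makes exhaustive pre-rendering feasible, even though the unconstrained space has billions of combinations.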

  2. Edge Case Handling & Brand Consistency
    By training LoRAs on our client’s specific library of materials and brand guidelines, we ensured every generation adhered to the brand’s visual identity. Whether users experiment with unconventional color schemes or try unique add-ons, the resulting images always respect brand standards and product feasibility.

AI Product Configurator

The Result

  1. Increased Customer Engagement
    Users of high-end products expect high-quality customization options. 3D renders get you close, but we wanted to show customers the best possible result.
  2. Reduced Time-to-Purchase
    Because the output generations were so realistic and fast, customers were more confident about their choice. Early results show a 20% decrease in shopping cart abandonment.
  3. Operational Efficiency Gains
    By replacing cumbersome 3D rendering workflows with AI-generated product customizations, our client saved the days to weeks they would previously spend each time they needed a new 3D render.

  4. Scalability & Future-Proofing
    Our custom pipeline easily scales to hundreds of thousands of users and absorbs traffic spikes during product launches. It also handles brand-new product lines easily: just upload a 3D model of your product, and the whole pipeline works as expected!
Shoe on model

Look Ahead

Our AI product configurator has opened our client’s eyes to the possibilities of generative AI in their business. We’re now experimenting with shortening their design prototyping cycle by generating photorealistic images of garments from their designers’ sketches. This lets them quickly discard bad ideas and focus on high-conviction designs worth producing in real life before sending them to a manufacturer.

For us, this project demonstrated how generative image models are going to reshape prototyping. We’re thrilled to keep refining our solution and working closely with our clients to explore what this technology makes possible.

Sketch to image

Need our Help?

We've seen companies waste months and millions failing to build & productionize AI systems to drive tangible business outcomes. We help you think through complex AI projects using the latest tools and technology—and then we build & productionize them for you or alongside your team. Let's talk: https://cal.com/exafloplabs/30min

Helping you save time and money.