Conceptual design is the stage at the beginning of the design process where designers identify the initial ideas to develop. They refer to existing designs and materials for inspiration while striving to create something new rather than getting stuck in the past. It’s one of the highlights of the design process, where creativity flourishes, but it can also become a bottleneck for designers faced with the challenge of balancing the familiar with the unfamiliar.
That challenge got Voho Seo, a senior researcher at Kia Global Design, thinking about whether AI could help solve this problem for automotive industrial designers. “Creating the initial idea is the most painful part for designers,” he says, “and I thought it would be great to have AI help us generate a lot of images quickly.”
Kia Global Design had been researching related technologies since 2018, but the AI tools then available on the market did not yield satisfactory results. In 2020, the team approached Autodesk about collaborating on a customized tool. After years of preparation, the group, led by Seo, launched a joint research project with Autodesk called “Bridge Inspiration and Design.” From September 2022 to August 2023, researchers developed a prototype tool that incorporates generative AI into the concept design workflow.
The tool is purpose-built to mirror a designer’s actual workflow as closely as possible. Before coming up with a new design, industrial designers typically choose a keyword that becomes the concept for the design, then look for reference material, such as external images, that matches the keyword. This search for inspiration is part of the conceptual design process, which involves sketching a lot of images and quickly vetting ideas. The generative AI tool takes over the “sketching a lot of images” part of that process.
The way the tool works is simple. First, designers select the design keywords they want to emulate, such as “bold,” “dynamic,” “stylish,” “simple,” “sporty,” or “cutting-edge.” Then they upload their initial sketch to the tool and hit the “create design” button. The tool generates a set of variations based on the selected keywords and the uploaded sketch.
From there, designers can continue to fine-tune the images within the tool. In addition to the initial sketch, they can add concept images for inspiration and specify which parts of the image to reference and to what extent. Designers can also adjust parameters like the number of symmetries in the generated images. If a designer likes one of the generated images, they can have the tool create new image iterations based on it. This feature allows designers to actively interact with the tool and tweak the images to take them in the direction they want.
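The article doesn’t detail the prototype’s internals (the underlying model comes up later), but the interaction it describes, keywords and a sketch in, a batch of variations out, and a favorite fed back in for another round, maps closely onto a standard image-to-image diffusion loop. The sketch below is a rough approximation of that loop, assuming Hugging Face’s diffusers library and a public Stable Diffusion checkpoint; the file names and parameter values are illustrative, not those of the Kia/Autodesk tool.

```python
# Illustrative only: approximates the keyword + sketch workflow described above
# with a public image-to-image diffusion pipeline, not the actual Kia/Autodesk
# prototype. Assumes the Hugging Face diffusers library is installed.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Example public checkpoint; any image-to-image diffusion model would do.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

keywords = ["bold", "dynamic", "sporty"]        # the designer's chosen keywords
prompt = "concept car exterior design, " + ", ".join(keywords)
sketch = Image.open("initial_sketch.png").convert("RGB").resize((768, 512))

# "strength" controls how far the output may drift from the uploaded sketch,
# loosely analogous to deciding to what extent a reference is followed.
first_round = pipe(prompt=prompt, image=sketch, strength=0.55,
                   guidance_scale=7.5, num_images_per_prompt=4).images

# If the designer likes one of the results, feed it back in as the new
# starting image to generate another round of iterations in that direction.
favorite = first_round[2]
second_round = pipe(prompt=prompt, image=favorite, strength=0.35,
                    guidance_scale=7.5, num_images_per_prompt=4).images
for i, img in enumerate(second_round):
    img.save(f"iteration_{i}.png")
```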
“The significance is that designers can quickly generate images that capture the visual characteristics they want to reflect and use them as a reference to create a final product,” says Seo. “This process is very similar to how designers work in the real world.” In fact, even as a prototype, the tool has been well-received by designers, who say it’s easy to use and helps them be more productive.
Developing an effective tool required close collaboration between Kia Global Design, led by Seo, and Autodesk, which handled the technical aspects. Because the designers and researchers on the project came from different backgrounds, their first priority was to better understand one another and the designers’ workflow.
“We had weekly meetings with about 10 researchers at Autodesk Research Industry Futures led by Senior Principal Research Scientist Ye Wang to explain how the designers approach and solve problems and what their work process is,” says Seo. “We were asked many specific questions and explained the basic elements of design in detail to make it easier for researchers to understand.”
To get a better understanding of how different designers work, the Autodesk Research team also held in-depth interviews with designers from Kia Global Design, each lasting about an hour. All of this data was used to develop the tool.
“A designer’s job is very diverse,” says Seo. “Sometimes you have to sketch a lot quickly and try different things, and sometimes you have to be meticulous and precise. Because there is no right answer in design, determining the best option is very important. The ultimate goal is to improve the quality of decision-making by reducing the time it takes to physically produce an image.”
AI still has its limitations. Currently, there are only a few parts of an industrial designer’s job that AI can assist with, such as early conceptual design. Seo emphasizes that it’s important to understand the limits and roles of AI: to challenge the idea that AI is useless for design, as well as the converse assumption that AI can do everything. “There are bottlenecks in the design process that designers struggle with,” says Seo, “and while AI can’t do everything, it can help us be more productive by intervening in the middle of the process and unlocking them.”
A prime example is converting 2D images into 3D models, a process in which automation and AI could save significant amounts of time. The project undertaken by Kia Global Design and Autodesk Research originally included technology to automatically convert AI-generated 2D images into 3D models, but they decided to exclude it because the converted 3D models were not ready for practical use.
However, Seo believes that “with future investment and technological development, this technology could be feasible.” The basis for this outlook is the rapid development of AI technology. Challenges that were once considered limitations of AI are quickly being overcome as new methodologies emerge.
For example, Autodesk AI Lab recently unveiled Project Bernini, a technology that can create 3D models from 2D images, text, voxels, point clouds, and more. While the technology is still in the experimental stage, Autodesk is partnering with several companies to bring it to market.
As more large-scale, well-funded foundation models become available, the barrier to entry for generative AI keeps getting lower. “It used to be that you had to have a lot of data to train on before you could utilize AI, but now you can skip a lot of the effort of building from scratch because the foundation is already in place,” says Seo.
“We’ve used Autodesk’s BlankAI in a workshop before, and even though we didn’t train it with any data about Kia Motors, it generated images similar to a Kia car when we typed in the car’s model name as a prompt,” he says. “In this project, we didn’t train on our own data but used the open-source Versatile Diffusion model, and the results were practical enough. The ability to quickly adopt various AI tools and utilize them well at the right time will be crucial for designers.”
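Versatile Diffusion is released with pre-trained weights, which is what makes “skipping the effort of building from scratch” possible. As a hedged illustration rather than a description of Kia’s actual setup, the model’s dual-guided mode can be driven from Hugging Face’s diffusers library with a reference image and text keywords and no fine-tuning; the file name and parameter values below are placeholders.

```python
# A minimal sketch of running the open-source Versatile Diffusion model in its
# dual-guided mode (reference image + text keywords) with no custom training.
# This is an assumption-laden illustration, not the workflow Kia actually used.
import torch
from PIL import Image
from diffusers import VersatileDiffusionDualGuidedPipeline

# Publicly released pre-trained weights from the model's authors.
pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained(
    "shi-labs/versatile-diffusion", torch_dtype=torch.float16
)
pipe.remove_unused_weights()
pipe = pipe.to("cuda")

reference = Image.open("inspiration.jpg").convert("RGB")  # hypothetical concept image
prompt = "sporty, cutting-edge concept car"               # design keywords as text

# text_to_image_strength balances how much the text keywords versus the
# reference image drive the result (closer to 1.0 favors the text).
result = pipe(
    prompt=prompt,
    image=reference,
    text_to_image_strength=0.75,
    num_inference_steps=50,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
result.save("concept_variation.png")
```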
Kijun Lee is a freelance journalist and translator. He has worked as a journalist for JoongAng Ilbo and Forbes Korea. He is interested in international affairs, cutting-edge technology, and community relations. He is currently the editor of Design & Make with Autodesk in Korean.