
Revolutionizing Engineering Challenges with AI: A New Approach to Optimization

Complex design challenges in engineering often come down to navigating a high number of variables and limited opportunities for testing. The stakes are high, especially when we’re talking about fine-tuning a power grid or developing safer vehicles. Each evaluation can be costly and the range of potential variables mind-boggling. Just think about car safety design – thousands of elements are in play and the tiniest decision can dramatically alter how a vehicle behaves during a collision. Unfortunately, traditional optimization tools often falter under the weight of this complexity.

The Gamechanger: MIT’s Fresh Take

That’s where a team of researchers from MIT steps in. They’ve come up with a fresh approach that reshapes how we use Bayesian optimization to address problems with hundreds of variables. In tests on engineering benchmarks, including power-system optimization, their method found top solutions 10 to 100 times speedier than traditional techniques.

So, what’s the secret? It’s all about a foundation model trained on tabular data. This model autonomously identifies the most impactful variables to improve performance and iteratively refines the solution. Being trained on a broad range of data, foundation models have the impressive ability to adapt to various applications.

Efficiency Boost

The crux of their tabular foundation model is that it doesn’t need constant retraining, which significantly ramps up the process’s efficiency. For more intricate problems, the method delivers even greater speedups, making it invaluable in sectors like materials development and drug discovery. As the lead author of the project, Rosen Yu, a graduate student in computational science and engineering, puts it: “Modern AI and machine learning models can change the way engineers and scientists create complex systems. We conceived one algorithm that not only solves high-dimensional problems but is also reusable, sidestepping the need to start everything from scratch.”

When dealing with multifaceted problems and costly evaluation methods, scientists usually turn to Bayesian optimization. This method iteratively homes in on the best setup by building a surrogate model to guide the search. However, retraining this surrogate after each iteration becomes difficult, especially when a large solution space is involved. The team tackled this challenge by having a generative AI system, a tabular foundation model, act as the surrogate model within the Bayesian optimization algorithm.
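To make the loop above concrete, here is a minimal sketch of classical Bayesian optimization on a toy one-variable problem, using a hand-rolled Gaussian process as the surrogate. This is the traditional baseline the article describes, not the MIT method itself: their contribution is swapping this retrained-every-step surrogate for a pre-trained tabular foundation model. The objective function, kernel length scale, and acquisition rule here are illustrative choices.

```python
import numpy as np

# Stand-in for an expensive engineering simulation (one design variable here;
# the MIT method targets problems with hundreds of them).
def objective(x):
    return np.sin(3 * x) + 0.5 * x**2

def rbf_kernel(a, b, length=0.5):
    # Squared-exponential kernel between 1-D arrays of points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    # Classical Gaussian-process surrogate: posterior mean and variance at Xq.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xq)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def bayes_opt(n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-2, 2, size=3)      # a few initial random evaluations
    y = objective(X)
    grid = np.linspace(-2, 2, 200)      # candidate designs
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, grid)
        # Lower-confidence-bound acquisition: favour low predicted value
        # (exploitation) and high uncertainty (exploration).
        lcb = mu - 2.0 * np.sqrt(var)
        x_next = grid[np.argmin(lcb)]
        X = np.append(X, x_next)
        y = np.append(y, objective(x_next))
    return X[np.argmin(y)], y.min()

best_x, best_y = bayes_opt()
print(best_x, best_y)
```

The pain point the researchers address is visible in `gp_posterior`: the surrogate is refit from scratch on every iteration, which becomes the bottleneck as the design space grows. A pre-trained surrogate simply conditions on the evaluated table of designs and scores instead.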

A New Age in Optimization

Yu likens the tabular foundation model to a ChatGPT for spreadsheets: its input and output are tabular data, which is far more common than language in the engineering sector. Like large language models such as ChatGPT, Claude, and Gemini, it has been pre-trained on substantial amounts of tabular data, which makes it adept at handling a wide variety of prediction problems. One of its key assets is that it can be used without retraining.

The researchers refined the tabular foundation model to focus on the design-space features that most influence the solution. This results in greater precision and efficiency, allowing the model to concentrate on the most critical features. For example, a car could have 300 design criteria, but not all of them drive the best design. The algorithm quickly zeroes in on the most influential features, avoiding time wasted on the less impactful ones.
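The article doesn’t spell out how the model ranks features, so as a loose illustration only, here is one simple way such narrowing can work: score each of 300 candidate design variables by how strongly it correlates with the performance metric, and keep only the top few. The synthetic data, the three “influential” indices, and the correlation-based scoring are all hypothetical stand-ins, not the paper’s actual mechanism.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_features = 500, 300

# Synthetic "design table": 300 candidate variables, but only three of them
# (indices chosen arbitrarily) actually drive the simulated performance metric.
X = rng.normal(size=(n_samples, n_features))
y = (2.0 * X[:, 7] - 3.0 * X[:, 42] + 1.5 * X[:, 199]
     + 0.1 * rng.normal(size=n_samples))

# Score every variable by the absolute correlation of its column with the
# objective, then keep only the strongest three for the optimization loop.
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
top_k = sorted(np.argsort(scores)[-3:].tolist())
print(top_k)
```

With the signal dominating the noise, this recovers exactly the three variables that matter, shrinking a 300-dimensional search to a 3-dimensional one before any expensive evaluations are spent.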

The team had to clear a few hurdles, such as finding the best tabular foundation model for the task. Also, they had to work out how to connect it with a Bayesian optimization algorithm to identify key design features. Once they established the framework, their method consistently outperformed five current state-of-the-art optimization algorithms, finding the best solutions 10 to 100 times quicker. However, it didn’t outdo all benchmarks, possibly due to gaps in the model’s training data.

Don’t worry, the MIT team isn’t resting on their laurels. They’re looking to enhance their tabular foundation models even further and apply their method to even more complex issues, such as naval ship design. As Ahmed, another member of the team, puts it: “At a higher level, this work points to a broader shift: using foundation models not just for perception or language, but as algorithmic engines inside scientific and engineering tools, allowing classical methods like Bayesian optimization to scale to regimes that were previously impractical.”

One academic not involved in the research, Professor Wei Chen, praises the MIT team’s approach as a “creative and promising way to reduce the heavy data requirements of simulation-based design. Overall, this work is a powerful step toward making advanced design optimization more accessible and easier to apply in real-world settings.”

If this still hasn’t quenched your thirst for all things AI and engineering, you can read more on the topic straight from the source at the MIT News website. Exciting times are ahead in the engineering world, and we’re here to keep you informed!

Max Krawiec
