
January 2026

Team Spotlight: Dr. Assel Sakanova, Senior Thermal Engineer

Read insights from her latest research and learn how fundamental thermodynamic principles are being reimagined.

With years of international research and teaching experience, Dr. Assel Sakanova brings her expertise in thermal management, fluid dynamics, and heat transfer to the challenges of next-generation infrastructure. From aerospace systems to AI Factories, her work focuses on solving complex thermal problems under extreme constraints.

1. With over a decade of research into thermal management across data centres, aerospace, and advanced cooling systems, what has driven your interest in this work?

Being able to solve thermal management challenges under increasingly constrained operating conditions. Although the application areas differ, they are all governed by the same fundamental principles of heat transfer, thermodynamics, and fluid dynamics, which have allowed my work to transition naturally across domains.

Throughout my research and professional experience, I have focused on improving and optimising system and component-level thermal efficiency, particularly in environments where heat flux and power density continue to rise.

As modern technologies push more power into smaller footprints, thermal management has evolved from a supporting function into a primary system-level constraint.

2. What are some of the most significant insights from your latest research that would be relevant to today’s large-scale AI workloads?

A key insight from my recent work is that conventional air cooling alone is no longer sufficient to meet the thermal demands of next-generation GPUs. As power density continues to increase, advanced cooling approaches such as liquid, two-phase, or hybrid cooling architectures are required to maintain performance, reliability, and energy efficiency.
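To put rough numbers on that claim, here is a minimal sketch, assuming an illustrative 120 kW rack and a 10 K coolant temperature rise, of how much air versus water you would have to move to carry the same heat using the sensible-heat balance Q = ṁ·cp·ΔT (none of these figures come from the interview):

```python
# Rough comparison of air vs. liquid cooling for a hypothetical rack load,
# using the sensible-heat balance Q = m_dot * cp * dT. All numbers are
# illustrative assumptions, not figures from the interview.

Q = 120_000.0       # heat load to remove, W (assumed ~120 kW rack)
DT = 10.0           # allowed coolant temperature rise, K (assumed)

# Approximate fluid properties near room temperature.
CP_AIR, RHO_AIR = 1005.0, 1.2        # J/(kg*K), kg/m^3
CP_WATER, RHO_WATER = 4180.0, 998.0  # J/(kg*K), kg/m^3

def required_flow(q_w: float, cp: float, rho: float, dt_k: float) -> tuple[float, float]:
    """Return (mass flow kg/s, volumetric flow m^3/s) to absorb q_w watts."""
    m_dot = q_w / (cp * dt_k)
    return m_dot, m_dot / rho

m_air, v_air = required_flow(Q, CP_AIR, RHO_AIR, DT)
m_h2o, v_h2o = required_flow(Q, CP_WATER, RHO_WATER, DT)

print(f"air:   {m_air:6.1f} kg/s  ({v_air:6.2f} m^3/s)")
print(f"water: {m_h2o:6.2f} kg/s  ({v_h2o * 1000:6.2f} L/s)")
print(f"volumetric flow ratio (air/water): {v_air / v_h2o:,.0f}x")
```

The roughly three-orders-of-magnitude gap in volumetric flow is, in essence, why liquid loops keep scaling where air cannot.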

3. Your work includes verifying multi-pass cold plates and analysing a range of coolants. In the context of AI Factory design, where thermal loads are orders of magnitude higher than traditional data centres, what do you see as the most promising direction for next-generation cold-plate or immersion cooling technologies?

For immersion, the most scalable direction is two-phase immersion or hybrid immersion-to-CDU architectures, provided the system can manage fluid stability, materials compatibility, and operating control.

Two-phase immersion offers strong potential because it uses latent heat to handle sharp thermal transients while maintaining relatively uniform component temperatures. In practice, the success of immersion will depend less on the boiling physics alone and more on long-term reliability—dielectric fluid compatibility, filtration, contamination control, serviceability, and safe operating envelopes.
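To make the latent-heat point concrete, here is a small sketch comparing the mass flow a two-phase process needs (Q = ṁ·h_fg) with a single-phase sensible loop (Q = ṁ·cp·ΔT). The fluid properties are representative of a fluorocarbon-class dielectric coolant and the load is an assumed figure, not data from the research discussed here:

```python
# Why latent heat matters: compare the mass flow needed to remove a chip-level
# heat load via boiling (two-phase) vs. sensible heating (single-phase).
# Fluid properties are representative of a fluorocarbon dielectric coolant
# (assumed values, not measurements from this work).

Q = 1_500.0      # heat load from one GPU package, W (illustrative)
H_FG = 88_000.0  # latent heat of vaporization, J/kg (FC-72-class fluid)
CP = 1_100.0     # liquid specific heat, J/(kg*K)
DT = 15.0        # allowed sensible temperature rise, K (assumed)

m_dot_boil = Q / H_FG           # two-phase: heat absorbed by phase change
m_dot_sensible = Q / (CP * DT)  # single-phase: heat absorbed by temperature rise

print(f"two-phase boil-off flow: {m_dot_boil * 1000:5.1f} g/s")
print(f"single-phase flow:       {m_dot_sensible * 1000:5.1f} g/s")
print(f"ratio: {m_dot_sensible / m_dot_boil:.1f}x more flow without boiling")
```

The flow reduction is only half the story: because boiling pins the wetted surface near the fluid's saturation temperature, two-phase operation also delivers the relatively uniform component temperatures mentioned above.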

4. You demonstrated the potential of Multi-Objective Genetic Algorithms (MOGA) for enhancing thermal reliability. How do you see optimisation frameworks like MOGA being applied at the scale of an entire AI Factory’s thermal architecture?

Optimisation frameworks such as MOGA become significantly more powerful when applied at the AI Factory scale, because thermal performance is no longer a single-component problem but a coupled, multi-physics and multi-constraint system.

At the facility level, MOGA can be used to simultaneously optimise competing objectives such as:

  • chip junction temperature and thermal reliability,
  • pumping power and fan energy,
  • coolant flow distribution and pressure drop,
  • waste-heat recovery potential,
  • and overall power usage effectiveness (PUE) and water usage effectiveness (WUE).

Rather than optimising individual components (e.g., a single cold plate or rack), MOGA enables co-optimisation across hierarchy levels.
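As an illustration of what that co-optimisation looks like mechanically, the sketch below builds a Pareto front for a toy two-objective cold-plate problem: junction temperature falls with coolant flow and fin count while pumping power rises with them. Every correlation and constant here is invented for illustration; a production study would use NSGA-II-style selection, crossover, and mutation over validated models rather than this random sweep:

```python
# Toy multi-objective trade-off for a cold-plate design: junction temperature
# vs. pumping power as functions of coolant flow rate and fin count. All
# correlations and constants are invented for illustration only.
import random

T_IN, Q_CHIP = 30.0, 1_000.0  # coolant inlet temp (C) and chip load (W), assumed

def objectives(flow: float, fins: float) -> tuple[float, float]:
    """Return (junction temp C, pumping power W) for a candidate design."""
    r_conv = 0.08 / (flow**0.8 * fins**0.5)       # toy convective resistance, K/W
    t_junction = T_IN + Q_CHIP * (0.01 + r_conv)  # 0.01 K/W fixed conduction path
    pump_power = 0.02 * fins * flow**3            # toy pump-power correlation
    return t_junction, pump_power

# Random population of candidate designs (flow in L/min, fin count).
pop = [((f, n), *objectives(f, n))
       for f, n in ((random.uniform(1, 20), random.uniform(5, 50))
                    for _ in range(300))]

# Non-dominated filter: keep designs no other design beats on both objectives.
pareto = [p for p in pop
          if not any(q[1] <= p[1] and q[2] <= p[2] and q is not p for q in pop)]

for (f, n), t_j, p_w in sorted(pareto, key=lambda p: p[1]):
    print(f"flow {f:5.2f} L/min, fins {n:4.1f} -> Tj {t_j:5.1f} C, pump {p_w:8.1f} W")
```

At facility scale the same non-dominated logic applies; the decision vector simply grows to include CDU setpoints, flow network topology, and heat-recovery options, which is where evolutionary search earns its keep over exhaustive sampling.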

5. Many of your published findings were at the component or rack level. What are the biggest scientific challenges when scaling thermal modelling from a single server to a multi-hectare AI Factory?

When scaling thermal modelling from a single server or rack to a multi-hectare AI Factory, the primary scientific challenge is bridging physics across vastly different spatial and temporal scales while preserving thermal fidelity.

At the component and rack level, heat transfer is dominated by highly localised phenomena—jet impingement, boiling or two-phase flow, micro-channel pressure losses, and strong conjugate heat transfer between cold plates and coolant. These processes require high-resolution CFD.

At the facility and campus scale, the key scientific difficulty lies in connecting these scales without excessive computational cost. Fully resolving chip-level physics across an entire AI Factory is infeasible, yet overly simplified models risk missing critical failure modes such as hot-spot formation or flow maldistribution.

Another major challenge is model uncertainty and variability. At large scales, boundary conditions such as ambient weather and workload distribution vary enough that probabilistic or scenario-based approaches are needed rather than single deterministic simulations.
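Here is a minimal sketch of what scenario-based evaluation can look like, using a deliberately crude lumped facility model sampled over uncertain ambient temperature and IT load; the model form and every range below are assumptions for illustration only:

```python
# Scenario-based thermal modelling sketch: evaluate a deliberately simple
# lumped facility model across sampled boundary conditions instead of one
# deterministic case. Model form and all ranges are illustrative assumptions.
import random, statistics

def supply_temp(ambient_c: float, it_load_w: float) -> float:
    """Toy lumped model: coolant supply temp after a dry-cooler approach
    plus a heat-exchanger rise proportional to IT load."""
    approach = 4.0    # dry-cooler approach temperature, K (assumed)
    ua = 200_000.0    # effective heat-exchanger conductance, W/K (assumed)
    return ambient_c + approach + it_load_w / ua

samples = []
for _ in range(10_000):
    ambient = random.gauss(27.0, 6.0)            # site climate scenario, C
    load = random.uniform(0.6, 1.0) * 2_000_000  # 0.6-1.0 of a 2 MW hall, W
    samples.append(supply_temp(ambient, load))

samples.sort()
print(f"mean supply temp : {statistics.mean(samples):.1f} C")
print(f"p99 supply temp  : {samples[int(0.99 * len(samples))]:.1f} C")
# Design against the tail (p99), not the mean: single deterministic runs
# hide exactly the hot-spot and maldistribution risks noted above.
```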

6. Having taught thermodynamics, heat transfer, and combustion theory, what academic concepts do you find yourself referencing most often in practical engineering conversations inside Firmus?

I find myself returning to a small set of core academic concepts. From thermodynamics, I rely on the ideas of energy balances, efficiency, and irreversibility, particularly when assessing system-level performance, waste heat, or trade-offs between different cooling or power strategies.

From heat transfer, I most often reference conduction, convection, and thermal resistance networks, especially when translating component-level behaviour into system-level temperature rise, airflow requirements, or cooling margins. These concepts are essential when discussing why certain design changes matter and how they affect reliability and operating limits.
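Thermal resistance networks in particular lend themselves to back-of-envelope checks. The sketch below stacks assumed series resistances from coolant to die and accumulates the temperature rise stage by stage; none of the values are measurements:

```python
# Series thermal-resistance network from the coolant back up to the die:
#   T_junction = T_coolant + Q * (R_conv + R_coldplate + R_TIM + R_die)
# All values below are illustrative assumptions, not measured data.

Q = 1_000.0        # component heat load, W (assumed)
T_COOLANT = 40.0   # coolant supply temperature, C (assumed)

# Stages ordered from the coolant toward the junction.
resistances_k_per_w = {
    "convection":   0.015,
    "cold plate":   0.012,
    "TIM":          0.008,
    "die/spreader": 0.010,
}

t = T_COOLANT
for stage, r in resistances_k_per_w.items():
    t += Q * r   # each series resistance adds Q*R of temperature rise
    print(f"after {stage:12s}: {t:5.1f} C  (dT = {Q * r:4.1f} K)")

print(f"estimated junction temperature: {t:.1f} C")
```

The same network idea scales: swap these stages for rack, CDU, and facility-loop resistances and the identical arithmetic gives system-level temperature rise and cooling margin.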

From combustion theory, even when combustion itself is not the focus, I frequently draw on principles such as mass and species conservation, mixing, and transport phenomena, which are directly applicable to exhaust dispersion, re-ingestion risk, and ventilation effectiveness.
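As a minimal illustration of how those conservation principles apply to re-ingestion, the snippet below treats recirculated exhaust as a simple mixing energy balance at the intake; the fractions and temperatures are assumed values:

```python
# Exhaust re-ingestion as a simple mixing energy balance:
#   T_intake = (1 - f) * T_ambient + f * T_exhaust
# where f is the recirculated exhaust fraction. Values are illustrative.

T_AMBIENT = 30.0   # C (assumed site condition)
T_EXHAUST = 50.0   # C (assumed exhaust temperature)

for f in (0.0, 0.05, 0.10, 0.20):
    t_intake = (1 - f) * T_AMBIENT + f * T_EXHAUST
    print(f"recirculation {f:4.0%} -> intake {t_intake:4.1f} C "
          f"(+{t_intake - T_AMBIENT:.1f} K of cooling margin lost)")
```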

7. What excites you most about contributing to the scientific and engineering foundations of AI Factories?

The opportunity to help shape a new class of infrastructure where physics, computation, and scale intersect in a fundamentally new way.

AI Factories operate at power densities and thermal loads that push well beyond traditional data-centre design assumptions. This creates a genuine need to revisit first principles: how heat is generated, transported, rejected, and ultimately constrained by reliability and sustainability. Being able to contribute to that foundational understanding, rather than applying incremental fixes, is particularly motivating for me.

AI Factories represent a rare opportunity where scientific insight directly translates into real-world impact.

Better thermal design improves reliability, reduces energy and water use, and enables sustainable scaling of AI compute. Good physics leads to better infrastructure.

8. Firmus attracts system thinkers, experimentalists, and engineers who enjoy working without a blueprint. How would you describe this environment to other researchers who might be considering similar moves?

Firmus provides an environment where curiosity, first-principles thinking, and ownership matter more than predefined processes. Problems are not handed down in a fully formed way. Engineers are expected to explore, question assumptions, and help define both the problem and the solution.

For researchers, this creates a space that is both challenging and rewarding. You have the freedom to test ideas, combine theory with experiments or simulation, and iterate quickly as new constraints emerge. At the same time, it requires comfort with ambiguity and the ability to make sound engineering decisions even when the path forward is not obvious. I would describe it as a place where you are not just applying knowledge. You are actively helping to build the framework that future designs will follow.