Navin Bishnoi, Country Head, Marvell India and AVP Engineering (Compute and Custom Solutions), in an interaction with Sudhakar Singh, Editor, Industry Outlook, shares his views on the evolution of the Data Processing Unit (DPU) market, the challenges in migrating legacy applications to leverage the capabilities of DPUs, why DPU designers and manufacturers must know how to get things done right the first time so that the overall system is ready for the market, and more.
The market for Data Processing Units (DPUs) has experienced consistent growth due to increased demand for technologies such as artificial intelligence (AI), machine learning, deep learning, the Internet of Things (IoT), and 5G. How do you see the current evolution of this market?
DPUs have demonstrated their value in the marketplace. They can perform networking, storage, and security functions better than traditional general-purpose processors, on both a performance-per-watt and a performance-per-dollar basis. Although DPUs started with the top-tier clouds, they are now part of the 5G business as well. You will also see them migrating to the server networking OEMs, tier-two clouds, and others.
Primarily, at the top level, the DPU market can be classified into three segments. The first, and currently the most interesting, is telecom. It is growing owing to the need to build 5G infrastructure with better power performance. The second category is cloud DPUs. This is an attractive market, where most players have their own customized designs.
Another use case within the cloud is dedicated security devices, which are gaining momentum. We have a LiquidSecurity solution for the cloud, which essentially performs encryption and key management, with an optimized DPU inside it. The third market is the enterprise, where we look at having OCTEON as part of firewalls and OEM equipment as well. So overall, DPUs are growing significantly in the marketplace, beyond the traditional cloud or 5G and into other domains as well.
What are the challenges in migrating legacy applications to leverage the capabilities of DPUs, and what strategies can be employed to overcome these obstacles while maximizing performance gains?
DPUs are a combination of general-purpose processors and specialized accelerators added to them. Most legacy applications were written for general-purpose processors on the traditional x86 or similar architectures. The challenge lay in adapting the code base from that architecture or platform to leverage the specialized hardware acceleration capabilities of the DPUs.
It required engineers to have advanced knowledge in specific areas, because there would be a crypto engine, a compression algorithm, and so on, and that knowledge was needed to build software and other applications on top of them. The common strategy that emerged in the industry has been the involvement of the open-source community, supported by software libraries; frameworks such as DPDK and SPDK make it easy for engineers to write applications for DPUs and make them available through these libraries. DPU vendors then ensured that their specialized accelerator functions plug into these open-source libraries, so engineers can pick up from there and build further. That has been a good recent evolution in solving the legacy challenges we saw in the past.
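The offload pattern described here can be sketched, in highly simplified form, with a hypothetical example (the class and function names below are illustrative inventions, not actual DPDK or SPDK APIs): the application is written against a generic interface, and the backend may be a software implementation on the general-purpose CPU or a hardware engine registered by a DPU vendor's driver.

```python
import zlib


class SoftwareCompressor:
    """Fallback backend: compresses on the general-purpose CPU."""

    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)


class Offloader:
    """Dispatches to a hardware backend if one is registered, else software."""

    def __init__(self):
        self._backend = SoftwareCompressor()  # default: CPU path

    def register_accelerator(self, backend):
        # A DPU vendor's driver would call this to expose its
        # hardware compression engine behind the same interface.
        self._backend = backend

    def compress(self, data: bytes) -> bytes:
        return self._backend.compress(data)


# Application code stays identical regardless of which backend runs.
offloader = Offloader()
payload = b"example payload " * 100
compressed = offloader.compress(payload)
assert zlib.decompress(compressed) == payload
```

The point of the abstraction is that the application never touches accelerator-specific registers; swapping the CPU fallback for a hardware engine is a driver-level change, not an application rewrite.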
The technology landscape is highly competitive, and time-to-market is crucial for DPU manufacturers. Accelerating the development, testing, and production cycles while maintaining quality standards is essential to stay ahead of competitors and meet customer demands. What strategy do you propose for this to be followed by the DPU manufacturers?
Yes, turnaround time is crucial. However, it is not just that; the overall cost of developing a DPU is significantly high. That is why it is very important for all DPU designers and manufacturers to know how to get things done right the first time, so that the overall system is ready for the market with the differentiation we are talking about.
Delivering on time with quality is addressed by many factors. In a typical chip design flow, a lot of design activities are undergoing a shift: we push things early in the game and start with a co-design approach. Then we move on to the architecture and early design phases, exploring the design space. We try to settle the main architecture decisions early, and then have a smooth implementation and sign-off through the manufacturing test phase. It is part of what we call continuous development: do things right the first time, and then make DPUs available on time with quality.
How do you foresee DPUs evolving in the future to address emerging challenges in modern computing?
The first advantage is that they reduce energy consumption. Waste heat from energy consumption takes up space, so generating less heat allows more hardware to fit into a single rack, which reduces processing time and, ultimately, the cost of the specialized accelerators we are building. Today, the world is discovering the limitations of general-purpose processors. Customers will ultimately move to architectures built around a portfolio of specialized processors, and the DPU is a great example of that.
Another example is AI. AI models will continue to grow in complexity, roughly ten times a year, and that is unsustainable without specialized processing, which again creates the need for a DPU. Finally, we want to consume less energy, have small boxes that fit into tight spaces, and require very limited human intervention. That will be the norm, and the DPU will be part of the equation to realize that goal.