A new generation of computers is beginning to hit the market, and profiling them suggests a profound reshaping of the enterprise hardware landscape in the months and years to come. The technological advancements of the past two years have been driven by the rise of generative AI and large-scale inference, and AI's effect on hardware is becoming more evident as we reach the midpoint of 2025. At Compliance Standards we have begun to track the sustainability features of new devices, rating them against a number of criteria that give us a glimpse of how they are designed and how much they differ from previous generations of systems, and our findings are startling.
Looking at the commercial PCs and systems announced in March 2025, we note how deeply embedded AI capabilities are and how they are becoming standard in mainstream enterprise computing hardware. Where the purchase of a new PC was previously considered an upgrade or a refresh, this new generation of products is a deliberate move toward hardware capable of on-device AI inference, local model support, and hybrid AI workflows once exclusive to datacenters. On-device AI inference means that AI models run directly on the machine, without the requirement of sending data to a remote server or cloud for processing. The next PC you purchase will be able to make predictions using a trained AI model to do such things as recognizing a face, translating text, or filtering spam. The shift has serious implications, and corporate security teams in particular must stay on top of it. Companies and professionals at the downstream end of these systems’ lifecycle, such as ITAD firms, recyclers, and refurbishers, will also have to watch closely how OEMs and manufacturers design these devices to prepare for what’s coming four to five years from now.
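To make the idea concrete, here is a minimal sketch of what on-device inference can look like in practice, assuming ONNX Runtime and a locally stored image-classification model (the file name "model.onnx" is a placeholder, not a reference to any product discussed here). Everything runs on the machine itself; no data leaves the device.

```python
# Minimal sketch of on-device AI inference (assumes onnxruntime is installed
# and "model.onnx" is a locally stored, already trained classification model).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")               # load the model from local disk
input_name = session.get_inputs()[0].name

image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in for a preprocessed photo
scores = session.run(None, {input_name: image})[0]         # inference happens entirely on this device

print("Predicted class:", int(scores.argmax()))
```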
During March 2025, we looked at nine newly released systems targeted at the enterprise market. At the heart of the transformation behind these devices is a shift in silicon architecture. Devices like the HP ZBook Fury G1i, Dell Pro Rugged 14, Lenovo ThinkPad X13 Gen 6, and Lenovo ThinkBook Flip AI all integrate the latest Intel Core Ultra or AMD Ryzen AI processors, which include neural processing units (NPUs) designed to handle AI tasks with greater efficiency than traditional CPUs or GPUs. Engineers call these components “AI accelerators”: a new breed of silicon designed and optimized for tasks such as natural language processing, real-time image enhancement, and predictive system behaviors. What is different from previous generations is that these enhancements happen directly on the device. The systems we are used to, like the laptop I am using to write this analysis, relied entirely (and still do) on cloud-based AI compute or offloaded GPU acceleration. Like most people, I increasingly rely on ChatGPT, which means using cloud processing. These new machines, by contrast, bring AI capabilities to the edge.
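As an illustration of how software actually reaches those accelerators, the sketch below asks ONNX Runtime to prefer an NPU-backed execution provider when one is present and to fall back to the CPU otherwise. The provider names are examples only; which ones appear on a given machine depends on the vendor silicon and the installed driver stack.

```python
# Hedged sketch: prefer an NPU-backed ONNX Runtime execution provider when
# available, otherwise fall back to the CPU. Provider names vary by vendor;
# the ones listed here are examples, not guarantees for any specific device.
import onnxruntime as ort

available = ort.get_available_providers()
preferred = [p for p in ("QNNExecutionProvider", "OpenVINOExecutionProvider") if p in available]

session = ort.InferenceSession(
    "assistant.onnx",                           # placeholder local model file
    providers=preferred + ["CPUExecutionProvider"],
)
print("Inference will run on:", session.get_providers()[0])
```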
Looking at the nine devices we tracked in March 2025 shows that what’s coming will be different: not only is the NPU embedded in the processor, but AI responsiveness is increasingly part of the OS and user experience. Features like background blur during video calls, dynamic power and thermal management, and context-aware notifications will no longer be handled by “brute force” in software; they will run as AI routines locally on the chip. One of the products we analyzed, the ThinkBook Flip, is a good example of this new class of devices with its AI-enhanced user interface, an area we believe will become more important in the years to come as Microsoft, Google, and enterprise software vendors build deeper OS-level integration for AI assistants and copilots.
There are also new-frontier products built for enterprises that require extremely high computing capability. Two of the systems we analyzed in March 2025, the NVIDIA DGX Spark and the Dell Pro Max with GB300, should be considered desktop AI supercomputers. These are not standard business machines with enhanced efficiency; they are compact workstations capable of handling large language models with billions of parameters. NVIDIA is not known as a PC maker, but its DGX Spark, built on NVIDIA’s Grace Blackwell platform, delivers petaFLOP-scale performance in a desktop form factor: such a system can perform at least one quadrillion floating-point operations per second (FLOPS). Traditionally this kind of performance belonged to supercomputers, high-performance data centers, or cloud AI, but NVIDIA has now brought it to the desktop for professionals who need it locally, such as developers, researchers, and enterprise AI teams who want inferencing power without always relying on cloud GPU clusters.
Servers are also seeing radical, AI-driven development. In the HPE ProLiant DL110 Gen12 we see a new class of data-center-adjacent infrastructure designed to balance general-purpose compute with AI features such as telemetry, smart workload orchestration, and edge deployments. While this HPE ProLiant does not offer the raw AI acceleration of the NVIDIA DGX Spark, it is built with AI-aware deployment models in mind: for example, remote data acquisition, model pre-processing, and modular expansion for future AI inference tasks. This makes it a foundational piece for organizations rolling out AI at scale, not just in central offices but in industrial settings, factories, campuses, and hybrid environments.
In analyzing these nine new systems, one major concern across all of them is energy consumption. While these systems feature more efficient AI processing, they do not necessarily lower overall energy use. In theory, NPUs are built to be more efficient than general-purpose CPUs or GPUs. But taken as a whole, the workloads these systems run may result in higher sustained power draw across an enterprise fleet. Devices like the ZBook Fury G1i, with discrete GPUs and high-frequency CPUs, consume more energy under load, although AI workload balancing can reduce idle power consumption. Moreover, systems like the DGX Spark are energy-intensive, optimized for short bursts of local inference, and unsuitable for environments with strict energy constraints.
Interestingly, OEMs are not entirely ditching traditional platforms. Our reviews found new and upcoming systems that rely on the conventional x86 architecture and have no onboard AI acceleration. The HP EliteDesk 800 G9 is one of them: a well-built business desktop with strong security features and serviceability that performs well for traditional enterprise tasks, but it is not positioned as an AI system because it lacks an NPU or AI-tuned architecture. For HP, this product may address a specific market, but it is likely a short-term play, positioned to fall behind as AI-enabled workflows become standard.
All in all, this wave of releases suggests that AI functionality is becoming standard rather than a speculative feature. In the months ahead, expect OEMs to market these systems with key phrases like “enablers of hybrid work”, “enhanced collaboration”, “creative productivity”, and “secure AI on the edge”. Enterprise procurement managers will therefore have to reassess how they acquire such systems. In the foreseeable future, procurement specialists will likely face an environment in which customer-facing, analytical, and creative staff require AI-capable systems while static and administrative roles continue to rely on traditional ones. That is what this new wave of systems is telling us.
If you are an enterprise IT buyer, you should begin to segment refresh cycles accordingly, evaluating devices not only by processing speed or security compliance but also by their AI readiness. Can this device run AI models locally? Does it support the enterprise’s chosen AI platforms? Will its AI features improve worker efficiency, or simply drain energy for unused functions? Answering these questions is a new skill set that will require fresh training and a better understanding of the technology involved.
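As a purely illustrative example of how a procurement team might operationalize that segmentation, the sketch below tracks a hypothetical “AI readiness” flag alongside the usual fleet criteria; the field names and the readiness rule are assumptions, not an established standard.

```python
# Illustrative only: a hypothetical "AI readiness" checklist for refresh
# planning. Field names and the readiness rule are assumptions.
from dataclasses import dataclass, field

@dataclass
class DeviceProfile:
    model: str
    has_npu: bool                                              # can it run AI models locally?
    supported_runtimes: list[str] = field(default_factory=list)  # e.g. ONNX Runtime, vendor SDKs
    typical_power_watts: int = 0                               # rough envelope for energy planning

def ai_ready(device: DeviceProfile) -> bool:
    # Count a device as AI-ready if it has local acceleration and supports
    # at least one runtime the enterprise has standardized on.
    return device.has_npu and bool(device.supported_runtimes)

fleet = [
    DeviceProfile("ThinkPad X13 Gen 6", True, ["ONNX Runtime"], 28),
    DeviceProfile("EliteDesk 800 G9", False, [], 65),
]
for device in fleet:
    tier = "AI-capable refresh tier" if ai_ready(device) else "traditional tier"
    print(f"{device.model}: {tier}")
```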
From an ITAD and lifecycle perspective, these systems will certainly bring new challenges. Components like NPUs and integrated AI firmware will require a different form of secure handling during disposition and resale. Some AI-enabled devices may have longer functional lifespans due to their responsiveness and feature support, while others may become obsolete faster if software support for their AI hardware is not maintained.