Key Points:
• Lenovo has introduced the ThinkEdge SE100, an entry-level AI inferencing server designed for edge computing.
• The SE100 is compact, 85% smaller than a standard 1U server, and draws under 140W.
• The server uses Lenovo’s Neptune liquid cooling technology, which cuts noise and power consumption and makes it suitable for public spaces.
Lenovo has announced the ThinkEdge SE100, an edge AI inferencing server designed to make artificial intelligence (AI) more accessible to small and medium-sized businesses and enterprises alike. AI systems are traditionally associated with large, powerful servers, but the SE100 takes a different approach: by focusing on inferencing, the less compute-intensive side of AI processing, Lenovo has created a compact server that can run AI at the edge, reducing latency and the amount of data sent to the cloud.
The SE100 is part of Lenovo’s new ThinkSystem V4 family of servers, designed for hybrid cloud deployments. It features Intel Xeon 6 processors and Lenovo’s Neptune liquid cooling technology, which reduces noise and power consumption and makes the system suitable for public spaces. The server is also compact: 85% smaller than a standard 1U server, with a power draw of under 140W even in a GPU-equipped configuration.
This design enables AI deployment in diverse environments such as retail stores, restaurants, and transportation systems, where access to large data centers or cloud infrastructure may be limited. The SE100’s small footprint and low power consumption make it an attractive option for businesses looking to bring AI to the edge without a significant infrastructure overhaul.
"Lenovo is committed to bringing AI-powered innovation to everyone, with continued innovation that simplifies deployment and speeds the time to results," said Scott Tease, VP of Lenovo infrastructure solutions group, products. "The Lenovo ThinkEdge SE100 is a high-performance, low-latency platform for inferencing. Its compact and cost-effective design is easily tailored to diverse business needs across a broad range of industries. This unique, purpose-driven system adapts to any environment, seamlessly scaling from a base device to a GPU-optimized system that enables easy-to-deploy, low-cost inferencing at the Edge."
Read the rest: Source Link