GPU Servers
A new generation of servers, equipped to meet the new challenges of AI. Achieve more, spend less.
Intel® Core™ i5-13500
Once-off setup fee: €79.00
13th generation, suited for AI inference
Gen 3
Unlimited traffic
Up to 254 IP addresses
Location: Germany
Delivery in about 5 to 6 workdays
Intel® Xeon® Gold 5412U
Once-off setup fee: €79.00
4th generation, suited for AI training
Gen 4
Unlimited traffic
Up to 254 IP addresses
Location: Germany
Dual Intel® Xeon® Gold 6226R
Once-off setup fee: €2,300
2nd generation, suited for AI training
1.5 TB DDR4 ECC 2933 MHz
System storage, or 8 × 1.92 TB SATA SSD (software RAID)
Up to 50 Gbit/s, unmetered and guaranteed
Up to 254 IP addresses
Locations: France, United Kingdom, Germany, Poland, Canada
Delivery in about 5 to 6 workdays
Dual AMD EPYC™ 9354
Once-off setup fee: €2,970
4th generation, suited for AI training
2.3 TB DDR5 ECC 4800 MHz
System storage, up to 4 × 15.36 TB NVMe SSD (software RAID)
20 Gbit/s, unmetered and guaranteed
Up to 254 IP addresses
Location: France
Delivery in about 1 workday
Given their potential, recent artificial intelligence (AI) innovations are reshuffling the deck in the cloud market.
To help you keep up with AI trends, our dedicated servers are equipped with high-performance NVIDIA GPU cards that come with advanced features.
These give you the resources to deploy your AI inference, deep learning, and machine learning projects.
AI inference means using a trained AI model to analyze new data and make predictions or decisions based on it.
The NVIDIA RTX™ 4000 SFF Ada Generation GPU includes 192 Tensor Cores, which significantly increase the efficiency of AI inference operations.
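As a minimal illustration of what "inference" means in practice, the sketch below applies a model with fixed, already-trained parameters to a previously unseen input. The weights, bias, and input here are hypothetical placeholders, not the output of any real training run:

```python
import numpy as np

# Hypothetical parameters of an already-trained linear model.
# In inference, these stay fixed; only the input data changes.
weights = np.array([0.4, 0.6])
bias = -0.1

def predict(x: np.ndarray) -> float:
    """Run inference: apply the trained model to new data."""
    return float(x @ weights + bias)

new_data = np.array([1.0, 2.0])   # previously unseen input
prediction = predict(new_data)    # 0.4*1.0 + 0.6*2.0 - 0.1 = 1.5
```

Training would instead adjust `weights` and `bias` over many passes; inference is the cheaper, forward-only step, which is why lower-precision Tensor Core throughput matters so much for it.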
These servers offer large storage capacities of up to several terabytes, with a private network of 25 to 50 Gbps included.
Servers: equipped with NVIDIA L40S cards; 91.6 TFLOPS FP32, 366 TFLOPS TF32, 733 TFLOPS FP16, 1,466 TFLOPS FP8.
Scale-GPU servers: equipped with NVIDIA L4 cards; 30.3 TFLOPS FP32, 120 TFLOPS TF32, 242 TFLOPS FP16, 485 TFLOPS FP8.
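The throughput figures quoted above follow a clear pattern: each drop in precision roughly doubles peak throughput, so FP8 delivers about 16× the FP32 rate on both cards. A quick sketch using only the numbers from the lines above:

```python
# Peak throughput in TFLOPS, taken from the spec lines above.
l40s = {"FP32": 91.6, "TF32": 366.0, "FP16": 733.0, "FP8": 1466.0}
l4 = {"FP32": 30.3, "TF32": 120.0, "FP16": 242.0, "FP8": 485.0}

def fp8_speedup(tflops: dict) -> float:
    """Theoretical peak-throughput ratio of FP8 over FP32."""
    return tflops["FP8"] / tflops["FP32"]

print(round(fp8_speedup(l40s), 1))  # 16.0
print(round(fp8_speedup(l4), 1))    # 16.0
```

These are theoretical peaks; real workloads see smaller gains, but the ratio explains why quantizing inference to FP16 or FP8 is so attractive on this hardware.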