VPS providers with network speeds higher than 10 Gbps?
Hey everyone,
I'm currently using a Leaseweb VPS with 10 Gbps network in the Netherlands and it's been a great machine so far. The performance is solid and the unmetered incoming traffic is perfect for my needs.
However, I was wondering - are there any VPS providers out there offering even higher download speeds? I'm primarily interested in maximum download performance (incoming traffic), upload isn't as important for my use case.
What are your experiences with high-bandwidth VPS providers? Who has the highest actual speeds you've tested? Any recommendations for providers that can beat the 10 Gbps performance I'm getting now?
Thanks in advance for any insights!
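For reference, here is roughly how I sanity-check incoming throughput. It's a rough single-stream Python sketch: the URL is a placeholder for whatever large test file sits close to the VPS, and a single TCP stream rarely fills 10 Gbps, so run several copies in parallel (or use iperf3 against a nearby server) for real numbers.

```python
#!/usr/bin/env python3
"""Rough single-stream download throughput check.

Sketch only: the URL below is a placeholder; swap in any large test
file hosted near your VPS. One TCP stream usually won't saturate a
10 Gbps link, so treat this as a lower bound.
"""
import time
import urllib.request

URL = "https://example.com/1GB.bin"  # placeholder test file
CHUNK = 1 << 20  # read 1 MiB at a time

start = time.monotonic()
total = 0
with urllib.request.urlopen(URL) as resp:
    while True:
        chunk = resp.read(CHUNK)
        if not chunk:
            break
        total += len(chunk)
elapsed = time.monotonic() - start

print(f"{total / 1e6:.1f} MB in {elapsed:.1f}s "
      f"= {total * 8 / elapsed / 1e9:.2f} Gbps")
```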
Comments
I would look at a cloud provider like Linode; they offer 40 Gbit/s down, but you are definitely not getting unmetered for cheap.
Any reason other than pricing not to get an actual physical machine at that point?
Please see a similar thread opened a few months ago:
https://lowendtalk.com/discussion/209976/100-gbit-vps-providers/p1
Personally, I don't think I've ever seen unmetered 10 Gbps on LET at a price competitive with Leaseweb.
However, good luck with your search.
Very few small providers will be able to provide those speeds, not because the node cannot, but because of hypervisor overhead. Most hosts here will use virtio, a paravirtualization technique that minimizes the overhead of passing network buffers between the guest and the host. But even with a fast processor, a guest will struggle to keep up with even 10 Gbps.
When a packet is received on the NIC, the host has to do quite a few things:
1. Handle the NIC's interrupt and pull the packet off the receive ring (the NIC has already DMA'd it into host memory).
2. Run it through the host's network stack to work out which guest it belongs to (typically via a bridge and a tap device).
3. Deliver it to the guest's virtual NIC, emulating whatever device interface the guest driver expects, and inject an interrupt into the guest.
4. Wake the guest's vCPU so its kernel can process the packet through its own network stack.
This process requires quite a few context switches which, when they cross the virtualization boundary, are very heavy (#vmexit and #vmenter are very slow, especially on modern processors where all those microarchitectural side-channel mitigations flush caches, TLBs, and branch predictor buffers). Using virtio (PV) rather than an emulated driver (HVM) greatly simplifies step 3, but it doesn't reduce the overhead of the other steps. I imagine most hosts here are going to be using virtio and vhost-net for networking, but that can only reduce overhead so much.
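If you want to see the cost for yourself, KVM exposes tracepoints for exits. A rough sketch (assumes a Linux KVM host with perf installed and root access) that counts exits system-wide while a guest pushes traffic:

```python
#!/usr/bin/env python3
"""Count KVM VM exits on a host for a few seconds.

Rough sketch: assumes a Linux host running KVM guests, with perf
installed and root (or a relaxed perf_event_paranoid). Push traffic
through a virtio guest while this runs and watch the count climb.
"""
import subprocess

# Count the kvm:kvm_exit tracepoint system-wide (-a) for 5 seconds.
result = subprocess.run(
    ["perf", "stat", "-a", "-e", "kvm:kvm_exit", "sleep", "5"],
    capture_output=True, text=True,
)

# perf prints its counter summary to stderr, not stdout.
print(result.stderr)
```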
The only way to achieve significantly higher bandwidth is if the host passes through the NIC using SR-IOV (Single Root I/O Virtualization), a feature some NICs support that allows them to present themselves as multiple, independent PCIe functions, each of which can be individually passed through to a guest with VFIO. That requires more setup and more expensive hardware, but allows each individual VPS to effectively eliminate hypervisor overhead for network operations.
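For anyone curious, carving out VFs is just a sysfs write on Linux. A minimal sketch (assumes root and an SR-IOV-capable NIC; eth0 is a placeholder):

```python
#!/usr/bin/env python3
"""Carve a NIC into SR-IOV virtual functions via the standard sysfs knob.

Minimal sketch: assumes a Linux host, root, and a NIC whose driver
supports SR-IOV. The interface name is a placeholder. Each VF appears
as its own PCIe function that can then be bound to vfio-pci and handed
to a guest.
"""
from pathlib import Path

IFACE = "eth0"  # placeholder; use your actual SR-IOV-capable NIC
dev = Path(f"/sys/class/net/{IFACE}/device")

# If VFs are already allocated, the driver requires writing 0 first.
if int((dev / "sriov_numvfs").read_text()) != 0:
    (dev / "sriov_numvfs").write_text("0")

# Carve out 4 VFs (must not exceed sriov_totalvfs).
(dev / "sriov_numvfs").write_text("4")

# Each VF now shows up as a separate PCIe function (virtfn0, virtfn1, ...).
for vf in sorted(dev.glob("virtfn*")):
    print(vf.name, "->", vf.resolve().name)
```

From there, each virtfn PCI address gets bound to vfio-pci and attached to the VM like any other passthrough device.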
When you have SR-IOV, the process is instead:
1. The NIC's virtual function (VF) DMAs the packet straight into the guest's memory (the IOMMU keeps it confined to that guest).
2. An interrupt is delivered to the guest, with little or no host involvement.
3. The guest's driver processes the packet as if it were talking to real hardware, because it is.
That is the only way to achieve extremely high speeds on a VPS. Huge cloud providers are going to be far more likely to support that than small providers.
Live migration, easier snapshots including memory state, hardware-agnostic kernel configuration, more available hardware choices (no need to limit yourself to hardware that supports IPMI, for example), checks off "cloud-powered infrastructure" on investor buzzword bingo. Besides that, not really.
It's worth mentioning that most NICs only support a limited number of these passthroughs per port, which is why this can't just be the standard. Most of the lower-end Intel ones only allow 8-16; there are some Mellanox ones that allow a few hundred, but I wouldn't suggest using one for more than about 64 VMs because there are other issues. I believe each VF becomes its own PCIe device, which can certainly cause trouble with the overhead of IOMMU mapping.
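Easy enough to check what a given box supports. A quick sketch (Linux; the sriov_totalvfs file only exists for SR-IOV-capable devices, so anything without support is skipped):

```python
#!/usr/bin/env python3
"""Print the per-port VF limit for every NIC in the box.

Quick sketch for a Linux host: sriov_totalvfs is only present for
SR-IOV-capable devices, so unsupported NICs (and loopback) are skipped.
"""
from pathlib import Path

for iface in sorted(Path("/sys/class/net").iterdir()):
    total = iface / "device" / "sriov_totalvfs"
    if total.exists():
        print(f"{iface.name}: up to {int(total.read_text())} VFs")
```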
You also lose a handful of the hypervisor's networking features.
Yep, and that's one of the reasons it's not as affordable for lowend hosts. It's not worth buying multiple high-end NICs capable of 40+ Gbps and reserving a valuable VF for each client when half the clients will be idlers.
I don't think IOMMU overhead is really much of an issue, though. Not on a modern system.