VPS providers with network speeds higher than 10 Gbps?

Hey everyone,
I'm currently using a Leaseweb VPS with 10 Gbps network in the Netherlands and it's been a great machine so far. The performance is solid and the unmetered incoming traffic is perfect for my needs.

However, I was wondering - are there any VPS providers out there offering even higher download speeds? I'm primarily interested in maximum download performance (incoming traffic), upload isn't as important for my use case.

What are your experiences with high-bandwidth VPS providers? Who has the highest actual speeds you've tested? Any recommendations for providers that can beat the 10 Gbps performance I'm getting now?

Thanks in advance for any insights!

Comments

  • concept Member
    edited January 17

    I would look at a cloud provider like Linode; they offer 40 Gbit down, but you are definitely not getting unmetered for cheap.

  • tentor Member, Host Rep

    Any reason other than pricing not to get an actual physical machine at that point?

    Thanked by 1: oloke
  • oloke Member

    Please see a similar thread opened a few months ago:
    https://lowendtalk.com/discussion/209976/100-gbit-vps-providers/p1

    Personally, I don't think I've ever seen unmetered 10 Gbps on LET at a price competitive with Leaseweb.
    However, good luck with your search :)

    Thanked by 2: mans_xd, tentor
  • forest Member
    edited 2:15AM

    Very few small providers will be able to deliver those speeds, not because the node can't, but because of hypervisor overhead. Most hosts here use virtio, a paravirtualization technique that minimizes the overhead of passing network buffers between the guest and the host. But even with a fast processor, they will struggle to keep up with 10 Gbps, let alone more.

    When a packet is received on the NIC, the host has to do quite a few things:

    1. The NIC DMAs the packet from its onboard cache to the host kernel's ring buffer and fires off a hardware interrupt
    2. The host kernel's network stack processes the packet (bridging, routing, determining which guest it is intended for)
    3. Acting much like a DMA engine itself, the host copies the data from its buffers into the guest's memory, where the guest's network driver expects it (this costs extra, unnecessary context switches between QEMU and KVM if vhost-net is not in use)
    4. KVM injects a virtual interrupt into the guest, raising #vmenter and waking the guest
    5. The guest's virtio-net driver reads the packet from memory, and passes it to the guest's network stack

    This process requires quite a few context switches which, because they cross the virtualization boundary, are very heavy (#vmexit and #vmenter are very slow, especially on modern processors where all the microarchitectural side-channel attack mitigations flush caches, TLBs, and branch predictor buffers). Using virtio (PV) rather than an emulated driver (HVM) greatly simplifies step 3, but it doesn't reduce the overhead of the other steps. I imagine most hosts here are using virtio and vhost-net for networking, but that can only reduce the overhead so much.
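
    As a rough illustration of the vhost-net point, here's a minimal sketch of how you could check whether the fast path is even available on a host. The paths are standard Linux sysfs/procfs locations, but many hosts only load vhost_net on demand when QEMU starts, so a negative result isn't conclusive:

        #!/usr/bin/env python3
        """Rough check for the vhost-net fast path on a KVM host (illustrative)."""
        import os

        def vhost_net_available() -> bool:
            # /dev/vhost-net appears once the vhost_net module is loaded.
            if os.path.exists("/dev/vhost-net"):
                return True
            # Fall back to scanning the loaded-module list.
            with open("/proc/modules") as modules:
                return any(line.split()[0] == "vhost_net" for line in modules)

        if __name__ == "__main__":
            print("vhost-net fast path available:", vhost_net_available())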

    The only way to achieve significantly higher bandwidth is for the host to pass through the NIC using SR-IOV (Single Root I/O Virtualization), a feature some NICs support that lets them present themselves as multiple independent PCIe functions, each of which can be individually passed through to a guest with VFIO. That requires more setup and more expensive hardware, but it lets each individual VPS effectively eliminate hypervisor overhead for network operations.

    When you have SR-IOV, the process is instead:

    1. The NIC receives the packet and determines, in hardware, which virtual function (virtual PCIe device) it is intended for
    2. Using IOMMU passthrough (configured via VFIO), the NIC uses DMA to put the packet directly into the guest's memory
    3. The NIC raises a hardware interrupt directly to the guest via interrupt remapping, raising #vmenter and waking the guest
    4. The guest's driver for the virtual function (the NIC vendor's VF driver rather than virtio-net) reads the packet from memory and passes it to the guest's network stack

    That is the only way to achieve extremely high speeds on a VPS. Huge cloud providers are going to be far more likely to support that than small providers.
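
    For anyone curious what that looks like on the host side, here's a hedged sketch (the interface name and VF count are placeholders; the sysfs attributes are the standard Linux SR-IOV interface). It enables a handful of VFs on a physical function and prints the PCI address and IOMMU group of each, which is what you would then bind to vfio-pci and hand to a guest:

        #!/usr/bin/env python3
        """Sketch: enable SR-IOV VFs on a PF and list them (run as root)."""
        import glob
        import os

        PF = "eth0"        # hypothetical physical function
        WANTED_VFS = 8     # must not exceed sriov_totalvfs

        dev = f"/sys/class/net/{PF}/device"

        with open(os.path.join(dev, "sriov_totalvfs")) as f:
            total = int(f.read())
        print(f"{PF} supports up to {total} VFs")

        # Write the desired VF count (write 0 first if VFs are already enabled).
        with open(os.path.join(dev, "sriov_numvfs"), "w") as f:
            f.write(str(min(WANTED_VFS, total)))

        # Each VF appears as a virtfnN symlink pointing at its own PCI device.
        for link in sorted(glob.glob(os.path.join(dev, "virtfn*"))):
            pci_addr = os.path.basename(os.readlink(link))
            group = os.path.basename(os.readlink(os.path.join(link, "iommu_group")))
            print(f"{os.path.basename(link)}: PCI {pci_addr}, IOMMU group {group}")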

    Thanked by 3: nikio, Kodomu, RedDog
  • forest Member
    edited 2:21AM

    @tentor said:
    Any reason other than pricing not to get an actual physical machine at that point?

    Live migration, easier snapshots including memory state, hardware-agnostic kernel configuration, more available hardware choices (no need to limit yourself to hardware that supports IPMI, for example), checks off "cloud-powered infrastructure" on investor buzzword bingo. Besides that, not really.

  • Kodomu Member

    @forest said:
    SR-IOV

    It's worth mentioning that most NICs only support a limited number of these passthroughs per port, which is why this can't just be the standard. Most of the lower-end Intel ones only allow 8-16; there are some Mellanox ones that allow a few hundred, but I wouldn't suggest using them for more than about 64 VMs because there are other issues. I believe each VF becomes a PCIe device, which can certainly cause trouble with the overhead of IOMMU mapping.

    You also lose a handful of the hypervisor's networking features.
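
    To make the per-port cap concrete, it's something you can read straight out of sysfs. A quick sketch (purely illustrative; it just walks /sys/class/net and prints whatever limit each SR-IOV-capable NIC advertises):

        #!/usr/bin/env python3
        """List the SR-IOV VF limit each SR-IOV-capable NIC advertises via sysfs."""
        import glob

        for path in glob.glob("/sys/class/net/*/device/sriov_totalvfs"):
            iface = path.split("/")[4]
            with open(path) as f:
                total = int(f.read())
            print(f"{iface}: up to {total} VFs per port")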

  • forest Member

    @Kodomu said:

    It's worth mentioning that most NICs only support a limited number of these passthroughs per port, which is why this can't just be the standard. Most of the lower-end Intel ones only allow 8-16; there are some Mellanox ones that allow a few hundred, but I wouldn't suggest using them for more than about 64 VMs because there are other issues. I believe each VF becomes a PCIe device, which can certainly cause trouble with the overhead of IOMMU mapping.

    You also lose a handful of the hypervisor's networking features.

    Yep, and that's one of the reasons it's not as affordable for lowend hosts. It's not worth buying multiple high-end NICs capable of 40+ Gbps and reserving a valuable VF for each client when half the clients will be idlers.

    I don't think IOMMU overhead is really much of an issue, though. Not on a modern system.
