Getting Slow Disk Speed on RAID-1 (Only ~200 MB/s) – Anyone Else Facing This?

hostomega Member, Host Rep

I’m facing a strange issue on a new server from GorillaServers.

Issue:

When using RAID-1 (mdadm) on AlmaLinux 9, I get only ~200 MB/s write speed.

If I test each NVMe individually (no RAID), I easily get ~1 GB/s+.

I cross-checked with a friend who has a very similar setup — same result:
RAID-1 = ~200 MB/s
Single disk = 1 GB/s

The datacenter replaced the server, but the issue is still the same.
So I’m wondering if anyone else has experienced this?

Server Specs
• Asrock Rack B650D4U
• AMD Ryzen 9 9950X
• 2 × 1.92TB NVMe (Kioxia KCD6XLUL1T92)
• AlmaLinux 9
• VirtFusion Platform
• mdadm RAID-1 (default installer)

What I Tested
• dd test: ~200 MB/s on RAID-1 (example commands below)
• Raw NVMe (no RAID): ~1 GB/s
• fio confirms that the individual NVMe drives are fast
• Tried reinstalling several times
• Tried another identical server: same issue
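
The tests were roughly along these lines (a sketch only; the exact flags may have differed, and the file and device names here are just examples):

    # Sequential write onto the RAID-1 filesystem, flushed at the end
    dd if=/dev/zero of=/root/dd_test bs=64k count=16k conv=fdatasync

    # Sequential write with fio, bypassing the page cache
    fio --name=seq-write --filename=/root/fio_test --size=4G --bs=1M --rw=write \
        --ioengine=libaio --iodepth=32 --direct=1

    # Read test directly from one NVMe member (non-destructive; raw-device write
    # tests would destroy data on an array that is in use)
    fio --name=raw-read --filename=/dev/nvme0n1 --size=4G --bs=1M --rw=read \
        --ioengine=libaio --iodepth=32 --direct=1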

Question

Is anyone else seeing this slow mdadm RAID-1 performance on AlmaLinux 9 with NVMe?

Is this:
• an AlmaLinux 9 kernel issue?
• mdadm RAID-1 sync/write bottleneck?
• something specific with these Kioxia enterprise NVMe drives?
• or something related to the B650D4U board?

Any insights would be appreciated.
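
For anyone who wants to dig in, these are the kinds of checks that might help narrow it down (a sketch only; the device and array names match the mdstat output shared further down in the thread):

    # Array layout, bitmap and sync status
    cat /proc/mdstat
    mdadm --detail /dev/md126

    # Drive health and firmware
    nvme smart-log /dev/nvme0n1
    smartctl -a /dev/nvme0n1

    # I/O scheduler and write-cache mode on the members
    cat /sys/block/nvme0n1/queue/scheduler
    cat /sys/block/nvme0n1/queue/write_cache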

-------------------- A Bench.sh Script By Teddysun -------------------
 Version            : v2025-05-08
 Usage              : wget -qO- bench.sh | bash
----------------------------------------------------------------------
 CPU Model          : AMD Ryzen 9 9950X 16-Core Processor
 CPU Cores          : 32 @ 5603.756 MHz
 CPU Cache          : 1024 KB
 AES-NI             : ✓ Enabled
 VM-x/AMD-V         : ✓ Enabled
 Total Disk         : 1.7 TB (359.0 GB Used)
 Total Mem          : 185.8 GB (55.0 GB Used)
 Total Swap         : 16.0 GB (4.4 GB Used)
 System uptime      : 0 days, 9 hour 54 min
 Load average       : 1.33, 1.46, 1.35
 OS                 : AlmaLinux release 9.6 (Sage Margay)
 Arch               : x86_64 (64 Bit)
 Kernel             : 5.14.0-570.62.1.el9_6.x86_64
 TCP CC             : cubic
 Virtualization     : Dedicated
 IPv4/IPv6          : ✓ Online / ✓ Online
 Organization       : AS53850 GorillaServers, Inc.
 Location           : Los Angeles / US
 Region             : California
----------------------------------------------------------------------
 I/O Speed(1st run) : 200 MB/s
 I/O Speed(2nd run) : 199 MB/s
 I/O Speed(3rd run) : 199 MB/s
 I/O Speed(average) : 199.3 MB/s
----------------------------------------------------------------------

Comments

  • I want to know about this. :|

  • MikeA Member, Patron Provider
    edited 7:56PM

    I don't know, but my whole setup runs AlmaLinux 9/10 with mdadm RAID. I've never used Kioxia, though, and generally only use onboard M.2. I've never had an issue like this with the WD/Samsung consumer drives I mostly use: multi-gig writes in RAID-1, and resyncs sustaining 1-2 GB/s. It might just have something to do with the fact that these Kioxia models are aimed at read-intensive workloads, plus some quirk between mdadm and a drive/partition setting. Compare their ~30K max write IOPS with the typical non-DC NVMe I use, which are rated around 1M-1.2M write IOPS. Maybe you're hitting a write-IOPS bottleneck combined with something in the mdadm RAID. Tldr: you should really rule out the operating system and motherboard.
    Edit: AI says some mdadm bitmap policy could be the issue, but I don't know :D Wish I had one to test on, though; it would be interesting to know the cause.

    Thanked by: hostomega
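
    For what it's worth, the write-intent bitmap mentioned above can be checked and temporarily dropped to see whether it is the bottleneck (a sketch only; /dev/md126 is the big data array from the mdstat output shared further down, and running without a bitmap means a full resync after an unclean shutdown):

        # Show whether the array has an internal write-intent bitmap
        mdadm --detail /dev/md126 | grep -i bitmap

        # Temporarily remove the bitmap, re-run the write benchmark, then add it back
        mdadm --grow --bitmap=none /dev/md126
        # ... run the dd/fio write test here ...
        mdadm --grow --bitmap=internal /dev/md126
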
  • hostomega Member, Host Rep

    @MikeA said:
    It might just have something to do with the fact that these Kioxia models are aimed at read-intensive workloads, plus some quirk between mdadm and a drive/partition setting. […] Maybe you're hitting a write-IOPS bottleneck combined with something in the mdadm RAID.

    I am also using the same motherboard with onboard NVMe in another DC, and there I am getting 3 GB/s.

  • zakkuuno Member

    check cat /proc/mdstat

    mdadm’s probably initializing the raid1 in the background

    Thanked by: ralf, hostomega, PuDLeZ
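
    If the array were still resyncing, something like this would show the progress and whether the rebuild is being throttled (md126 is just an example name):

        # Watch the resync progress
        watch -n 5 cat /proc/mdstat

        # Current sync activity for one array (idle, resync, check, ...)
        cat /sys/block/md126/md/sync_action

        # Kernel throttle limits for md resync speed, in KB/s
        cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
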
  • PuDLeZ Member

    Dumb question, but are you sure the disks completed syncing after you set up the RAID-1 and before you tested? Not sure with AlmaLinux, but the standard check is something like 'cat /proc/mdstat' without the quotes.

    If it's fully synced, can you try single-disk mode again and use tmux/screen/multiple shells to test both drives at the same time, to see whether that reproduces the slowdown? (There is a sketch of such a test below.)

    Lastly, can you temporarily use a different distro (Debian, etc.) to see if it shows the same issue? Basically just to confirm whether or not it is AlmaLinux 9 specific.

    Thanked by: hostomega
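
    A minimal way to run that simultaneous test with fio (a sketch only; it assumes the box is back in single-disk mode with nothing on the drives, because writing to the raw devices is destructive, and the device names are examples):

        # DESTRUCTIVE: writes directly to both NVMe devices; only run on drives
        # that hold no data (e.g. right before a reinstall).
        # fio runs both jobs in parallel, so this shows whether writing to both
        # drives at once drops to ~200 MB/s.
        fio --ioengine=libaio --direct=1 --bs=1M --rw=write --size=8G --iodepth=32 \
            --name=nvme0 --filename=/dev/nvme0n1 \
            --name=nvme1 --filename=/dev/nvme1n1
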
  • PuDLeZ Member

    @zakkuuno said:
    check cat /proc/mdstat

    mdadm’s probably initializing the raid1 in the background

    hehe, got distracted with work stuff and didn't refresh the page before typing/posting my comment. Honestly, I'd bet this is the reason. I know when I first started using mdadm and raid1/5/10 on drives, I was annoyed that it took "forever" to sync the disks/array.

  • hostomega Member, Host Rep

    @zakkuuno said:
    check cat /proc/mdstat

    mdadm’s probably initializing the raid1 in the background

    I have been facing this issue for a week, so there is no chance it is still the initial RAID sync :smile:

  • hostomega Member, Host Rep

    @PuDLeZ said:
    Dumb question, but are you sure the disks completed syncing after you set up the RAID-1 and before you tested? Not sure with AlmaLinux, but the standard check is something like 'cat /proc/mdstat' without the quotes. […]

    It's not a RAID sync issue. For your peace of mind, I am sharing it again :D

    [root@104 ~]# cat /proc/mdstat
    Personalities : [raid1] 
    md124 : active raid1 nvme0n1p2[0] nvme1n1p2[1]
          1049536 blocks super 1.0 [2/2] [UU]
          bitmap: 0/1 pages [0KB], 65536KB chunk
    
    md125 : active raid1 nvme0n1p3[0] nvme1n1p3[1]
          1047552 blocks super 1.2 [2/2] [UU]
          bitmap: 0/1 pages [0KB], 65536KB chunk
    
    md126 : active raid1 nvme0n1p4[0] nvme1n1p4[1]
          1856348160 blocks super 1.2 [2/2] [UU]
          bitmap: 8/14 pages [32KB], 65536KB chunk
    
    md127 : active raid1 nvme0n1p1[0] nvme1n1p1[1]
          16776192 blocks super 1.2 [2/2] [UU]
    
    unused devices: <none>
    [root@104 ~]# 
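
    Since the array is clean and fully synced, one more comparison that might help is running the same sequential write with a bench.sh-style dd (flushed at the end) and with fio using direct I/O on the RAID-1 filesystem, to see whether the drop is tied to flushing or to raw sequential bandwidth (a sketch only; the file paths are examples, and the script's exact dd block size may differ):

        # Flushed sequential write on the RAID-1 filesystem, similar in spirit to the bench.sh dd test
        dd if=/dev/zero of=/root/md_test bs=64k count=16k conv=fdatasync
        rm -f /root/md_test

        # The same 64k sequential write with fio using direct I/O
        fio --name=md-seqwrite --filename=/root/fio_md_test --size=1G --bs=64k --rw=write \
            --ioengine=libaio --iodepth=32 --direct=1 --unlink=1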
    