r/synology (183K members): News, discussion, and community support for Synology devices
Warning to users with QuickConnect enabled
Networking & security
https://preview.redd.it/warning-to-users-with-quickconnect-enabled-v0-uylx136keorf1.png

For those of you with QuickConnect enabled, I would HIGHLY recommend you disable it unless you absolutely need it. And if you are using it, make sure you have strong passwords and 2FA enabled, disable the default admin and guest accounts, and change your QuickConnect ID to something that cannot be easily guessed.

It seems my QuickConnect name was guessed, and as you can see from my screenshot I am getting hit every 5 seconds by a botnet consisting of mostly unique IPs, so even if you have Auto Block enabled it will not do you much good. This is two days after disabling QuickConnect entirely and removing it from my Synology Account. Not sure if I need to contact Synology to have them update the IP of my old ID to something else like 1.1.1.1 for it to stop.

To clarify, they still need a password to do any damage, but this is exactly what they were attempting to brute force. Luckily it seems like they didn't get anywhere before I disabled QuickConnect.
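If you're staring at a log like this and want to confirm the attack really is distributed (and hence why Auto Block can't keep up), a quick tally of unique source IPs from an exported log works. This is a sketch assuming a plain-text export with the IP as the first whitespace-separated field; DSM's actual export format may differ:

```python
from collections import Counter

def count_sources(lines):
    """Count login attempts per source IP, assuming the IP is the first field."""
    tally = Counter()
    for line in lines:
        parts = line.split()
        if parts:
            tally[parts[0]] += 1
    return tally

# Example lines (documentation IPs, not real attackers):
log = [
    "203.0.113.7 failed login for [admin]",
    "198.51.100.23 failed login for [admin]",
    "203.0.113.7 failed login for [guest]",
]
tally = count_sources(log)
```

If nearly every entry in the tally has a count of 1, Auto Block's per-IP thresholds will never trigger, which matches what the screenshot shows.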


Made a tool to auto-wake my Synology when I open my MacBook
Tutorial

My Synology goes to sleep to save power, but I was tired of waiting for it to wake up every time I opened my MacBook.

The problem: I want my NAS to sleep (saves power, quieter, extends drive life), but I don't want to wait 30 seconds every time I need to access my files. Manual WoL apps require me to remember to wake it. I wanted it automatic.

The solution: Built a simple macOS tool that sends a WoL packet automatically when your Mac wakes, so your NAS is ready by the time you need it.

Setup:

brew tap dgeske/tap
brew install wake-my-nas
wake-my-nas --discover
wake-my-nas --edit
wake-my-nas-install-service

Done. Your Synology wakes automatically now.
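For the curious, Wake-on-LAN itself is simple: the NAS wakes when its NIC sees a "magic packet", which is 6 bytes of 0xFF followed by the target MAC address repeated 16 times, sent as a UDP broadcast. A minimal sketch of what any WoL sender does (the MAC below is a placeholder, and this is not the tool's actual code):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a WoL magic packet: 6x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network (UDP port 9 is conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

The resulting packet is always 102 bytes; the NAS must have WoL enabled in Control Panel for its NIC to act on it.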

Features:

  • Only runs on your home network (optional)

  • Checks if NAS is already awake (avoids redundant packets)

  • Built-in device discovery

  • Works with any Synology model that supports WoL

  • macOS notifications if something goes wrong

GitHub: https://github.com/dgeske/wake-my-nas

Free and open source. Saves me 30 seconds every single time I open my laptop.

If you find this useful, a GitHub star would help! Trying to get it into Homebrew Core so it's easier for others to discover. Need to show enough people find it valuable.

Happy to answer questions or help with setup!


Updating Home Assistant on a NAS
NAS Apps

I needed to update Home Assistant, which I run in a Docker container on my Synology DS412+. I deleted the old container and downloaded a new one with the same config. Now when I try to start it, it says:

RNDGETENTCNT on /dev/urandom indicates that the entropy pool does not have enough entropy. Rather than continue with poor entropy, this process will block until entropy is available.

How the hell do I fix this? ChatGPT is completely useless.

Help is really appreciated.
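That message comes from the random-number library refusing to start while the kernel reports a depleted entropy pool, which can happen on older kernels like the DS412+'s. A first diagnostic step is to read the kernel's entropy estimate from /proc. This is a hedged sketch: the path is standard on Linux, but the 256 threshold is just an illustrative low-water mark, not a documented cutoff:

```python
from pathlib import Path

def read_entropy(path: str = "/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's available-entropy estimate, or None if unreadable."""
    p = Path(path)
    if not p.exists():
        return None
    return int(p.read_text().strip())

level = read_entropy()
if level is not None and level < 256:
    print(f"Entropy looks low ({level}); an entropy daemon such as haveged may help")
```

If the number is persistently tiny, common suggestions are running an entropy daemon on the host or using a Home Assistant image built for older kernels; which applies here depends on the DSM version.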



Replacement for DS218j
NAS hardware

I've been using the DS218j for ages; I think I bought it in 2018 and have never switched it off since.

But I noticed it will stop receiving updates after the DSM 7.3 release, and I don't want to run an unpatched system, so it's finally time for an upgrade.

I really like how quiet the DS218j is -- I can hear it only when I access data on the HDDs; the rest of the time it's completely silent -- I'm using the "low power mode" for the fan.

What should I buy to replace it that will be as quiet as the old NAS? I'm thinking about the DS225+, as it has better features (which I'm not sure I need), but it looks like it doesn't have the low power mode.



[Advice Request] DS1512+ to ??? - Warning: Long winded question
NAS hardware

I have a DS1512+ that’s been running without any major issues for over 12 years. I currently do not see a need to upgrade, though I sometimes worry about relying on hardware that’s older than my children. :)

Every year in October I start to think about Black Friday deals and if this is the year to upgrade / replace the NAS?

I’ve enjoyed my time with Synology and how I’ve been able to “set it and forget it” all these years. That said, I work in IT and am comfortable using less mature options like Ugreen, TrueNAS, or maybe even QNAP? Whatever I upgrade to, I’m focused on maintaining my current functionality (listed below) and upgrading my available storage, either by replacing my 5 bay NAS with a 6 bay NAS or upgrading a few of the installed HDDs in a 5 bay NAS from 8TB to 20TB or larger.

Current NAS Overview:

  • DS1512+

  • Running 24/7 since 2013

  • Upgraded RAM from 1GB to 3GB in 2013

  • Running latest available DSM (DSM 6.2.4-25556 Update 8)

  • Running 1 storage pool with SHR

    • Currently have 5 8TB drives installed, giving me 29 TB total

    • Oldest drive has been running 68000 hours (7.7 years)

    • Newest drive has been running 600 hours (1 month)

    • Currently have 3.5 TB free/available
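As a sizing aid for the 8TB-to-20TB option mentioned above: for SHR with single-drive redundancy, usable raw capacity is approximately the sum of all drives minus the largest one (before filesystem overhead and TB-vs-TiB accounting, which is why five 8 TB drives show about 29 TB rather than 32). A rough calculator, as a sketch:

```python
def shr1_capacity_tb(drives):
    """Approximate SHR-1 usable capacity in raw TB: total minus the largest drive.

    This matches SHR's slicing behavior for mixed sizes: excess space on a
    single largest drive is unusable, while pairs of larger drives mirror it.
    """
    if len(drives) < 2:
        raise ValueError("SHR-1 needs at least two drives")
    return sum(drives) - max(drives)

current = shr1_capacity_tb([8, 8, 8, 8, 8])     # five 8 TB drives -> 32 TB raw
upgraded = shr1_capacity_tb([20, 20, 8, 8, 8])  # two drives swapped to 20 TB
```

Note that upgrading only one drive to 20 TB would gain nothing (the extra 12 TB has no redundancy partner), which is worth factoring into the Black Friday math.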

NAS Primary Use:

  • Backup for personal files for 2 family members

  • Time Machine backup for 2 Macs

  • Plex server for local use only with single Plex client (Apple TV 4K)

    • No transcoding required

    • All media streams as Direct Play or Direct Stream

  • Cloud Sync backing up personal Dropbox and Google Drive

  • Rclone server for remote Plex server

  • DHCP server for local network

  • DNS server forwarding to pihole for local network

  • Docker running pihole for local network

NAS Occasional Use:

  • Git Server for random projects

  • Tailscale for 1-2 times a year I need to access NAS from remote location

  • Docker running FreshRSS

    • This is still a work in progress

  • Docker running other images for random projects

In anticipation of this post being shredded due to me running legacy DSM in 2025:

Backup strategy:

  • Offsite of 5TB of personal files to Backblaze using Hyper Backup

  • My family's photos are backed up to Synology, Google Photos and iCloud

  • I maintain a list of Plex Media saved to NAS via remote hosted Sonarr and Radarr

    • I don't view Plex Media files as unobtainable in the event of total NAS failure

    • Since I have a backed-up list of the media that is important to me, I can get it again

Security Setup:

  • Admin account disabled

  • 1 DSM account enabled, with MFA

  • 1 SSH account enabled

    • Password login disabled

    • Requires key pair

    • Account has read-only access to media folder

  • All ports from internet blocked, with one exception

  • I allow access to port 22 of NAS to a single public IP assigned to a VPS I own

  • VPS runs a public facing Plex server for personal and family use

  • Plex server has a rclone mount of media folder on NAS

  • Plex server mounts folder on NAS using SSH account, logging in using key pair


Introducing DSM 7.3 — Now With “Drive Freedom”... Again!
DSM

Good news, everyone! Synology heard your feedback on drive restrictions... and after briefly reinventing DRM for hard drives, they’ve decided maybe that was a bit much.

Introducing DSM 7.3, now featuring:

  • Drive Freedom ™ – You’re once again allowed to use the drives you already own. Revolutionary. We call this “listening to customers,” not “reversing a PR inferno.”

  • Minimalist Media – We streamlined transcoding by removing it entirely. If your Plex buffers, that’s just modern art.

  • Container Chaos – Docker got renamed, re-skinned, and partially broken, but hey, new icons!

  • Photo Regression Pro Edition – All your missing features are now consolidated into one clean, simplified absence.

  • Surveillance Station Deluxe – Comes with two cameras, plus the spiritual exercise of buying more licences.

And remember:
When you strip features faster than competitors add them, that’s not regression — it’s focus.

DSM 7.3 — “It just works.” On approved drives. Until it doesn’t.







WD 20% discount codes
Solved

PM’ed the first two commenters in the thread.

Don't know if it's allowed or not, but I have a couple of 20% discount codes for the WD shop.

They expire in 60 days, and I would like to give them away to the first two people (one each) who need them and post here, as I don't have any use for them.

They are valid in the following countries:

Austria, Belgium, Bulgaria, Canada, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Japan, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden, United Kingdom, and United States.



DSM 7.3 is an LTS and will receive updates until October 2028, ensuring 9-12 years of support for older models, doubling what UGREEN offers.
DSM

Synology updated their software lifecycle document with DSM 7.3, confirming that it will be supported until October 2028:

DSM 7.3

General Availability: October 2025

End of Maintenance Phase: October 2027

End of Extended Life Phase: October 2028

Source: https://global.download.synology.com/download/Document/Software/WhitePaper/Os/DSM/All/enu/Software_Life_Cycle_Policy_enu.pdf

Based on the latest generation that will only support DSM 7.3, it seems the total duration for software support will be between 9 years for 19 Series and 12 years for 16 Series.

Meanwhile it seems that UGREEN only offers 5 years of software & security updates:

https://support.ugnas.com/knowledgecenter/#/detail/eyJ0eXBlIjoiMnJndTFmIiwibGFuZ3VhZ2UiOiJlbi1VUyIsImlkIjo1MTAsImFydGljbGVJbmZvSWQiOjIxNSwiY2xpZW50VHlwZSI6IlBDIiwiYXJ0aWNsZVZlcnNpb24iOiIxLjAifQ==

I never saw this discussed much. Synology offers twice the duration UGREEN does. Honestly, 5 years for a NAS product seems quite low.






Error: “An error occurred in the hypervisor library [The server refused connection]” when backing up VM to Synology
DSM

Hi everyone,

I’m trying to back up my VMware virtual machine to my Synology NAS, but I keep getting this error:

I’ve already checked the network connection and credentials. The VM appears online in Synology, and Synology is able to retrieve all VMs from the ESXi host.

Has anyone faced this issue or knows what settings might cause it?

https://preview.redd.it/error-an-error-occurred-in-the-hypervisor-library-the-v0-s90ceg4vx8uf1.png

Import data from an old DLINK NAS
NAS hardware

Hello all,

I have to transfer about 3 TB of data from an old DLINK NAS (1560). There are 4 physical disks (12 TB total) in it that have been combined into ONE pool (no backup), and of course I have to keep the same physical HDDs.

I don't know which physical disks the data is stored on, nor the file format and so on, so I'm not really confident pulling the 4th disk out of the DLINK, putting it in the Synology, formatting it, and trying to copy the DLINK's data onto this new HDD.

Of course, I don't think that taking the disks out of the DLINK and putting them in the DS925+ that I have will work, so I wanted to know if you guys could think of something.

Either a cloud service with 3 TB for one day, or some way to check which disks the data is on in the DLINK.

Thanks



Synology wants $535 for a 1.6TB PCIe Gen3 SSD… what planet are they on?
NAS hardware

So I was looking at Synology’s new SNV5400 NVMe SSDs, and I honestly can’t believe the pricing.
They’re asking $535 USD for a 1.6TB PCIe Gen3 drive — yes, Gen 3, in 2025.

Even worse, the smaller ones are almost as bad:

  • 400GB for $175

  • 800GB for $285

For reference, you can get a WD Red SN700 2TB for around $210, with double the endurance (5100TBW vs Synology’s 2900TBW). Or a solid PCIe 4.0 drive like a Crucial P3 Plus 2TB for under $100.
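Normalizing the prices quoted above to dollars per terabyte makes the markup explicit (these are the post's figures; street prices vary):

```python
def per_tb(price_usd: float, capacity_tb: float) -> int:
    """Price per terabyte, rounded to the nearest dollar."""
    return round(price_usd / capacity_tb)

snv5400 = per_tb(535, 1.6)  # Synology SNV5400 1.6 TB
sn700 = per_tb(210, 2.0)    # WD Red SN700 2 TB
p3plus = per_tb(100, 2.0)   # Crucial P3 Plus 2 TB (quoted as "under $100")
```

That works out to roughly $334/TB for the SNV5400 versus about $105/TB for the SN700, in line with the 3-5x claim below.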

And yet Synology somehow thinks these are “enterprise NAS” drives that justify being 3–5x the price of literally everything else.

The real kicker is that newer Synology NAS units are now restricting drive compatibility — basically only Synology-branded SSDs are “approved.” If you install anything else, you get “unsupported drive” warnings or those annoying orange triangles of doom.

They call it “purpose-built for Synology NAS” and claim “rigorous testing” and “power loss protection,” but come on — it’s PCIe 3.0 in 2025. Nothing about that is premium.

It just feels like Synology is turning into Apple — forcing overpriced proprietary parts on users who just want reliable storage.

Why can’t we just use our own NVMe drives without getting punished for it?


Synology + Jellyfin lags on Sony Bravia but works if switched to "external" VLC player
NAS Apps

I'm running a Jellyfin server on a Synology DS224+ with 10 GB RAM and playing it on my Sony Bravia Google TV. Lots of things lag: it plays a few seconds, then buffers. Weird, because if I set the client to "external player" and use VLC, it works just fine on the TV, but the Jellyfin client can't handle it. Any ideas?

I feel like things were working okay for about a week, but I'm not sure if I just hadn't played videos of a certain type. I did set up transcoding for my Docker/Jellyfin setup and, again, I thought things were running great on the TV, but now it is terrible and only works in VLC. What can I try?


Filebot permission issues
NAS Apps

I'm having trouble getting Filebot to work on my Synology NAS. I keep hitting "permission denied" every time I try to open the "Storage" folder in Filebot. I've got Filebot in a Docker stack along with several other images like gluetun, deluge, jackett, etc. They all use the exact same PUID and PGID in the YAML. I've also cross-checked the permissions for the mapped Storage folder (in File Station), and it has full read/write access for the user that correlates to the PUID from the YAML file. I'm stumped here, so any suggestions anyone has would be highly appreciated.
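One way to take the guesswork out of a PUID check is to compare the mapped folder's POSIX ownership and mode bits directly against the IDs from the YAML, both on the host and inside the container. Note that DSM layers ACLs on top of the mode bits, so this is only half the picture, but a mismatch between host and container usually points at the PUID mapping rather than the folder ACL. A sketch with no Filebot-specific assumptions:

```python
import os
import stat

def can_write(path: str, uid: int, gid: int) -> bool:
    """Check whether uid/gid can write to path via the owner, group, or other bits."""
    st = os.stat(path)
    mode = st.st_mode
    if st.st_uid == uid:
        return bool(mode & stat.S_IWUSR)
    if st.st_gid == gid:
        return bool(mode & stat.S_IWGRP)
    return bool(mode & stat.S_IWOTH)
```

Run it (or the equivalent `ls -n`) both on the host path and on the in-container mount point; if the container sees different numeric owners than the host, the volume mapping or PUID/PGID values are the problem.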


Best strategy for backup
NAS Apps

Hi,

I just bought a 224+ with 2x20 TB in RAID 1. I currently have 8 TB of data. However, my external USB drives are 12 TB. I rotate the external USB drives with my mother's house, so I have an offsite backup. What should the correct config be? I don't want a new full backup every night.

Should I encrypt the disk?

On my 218J, I had a local copy and a local backup. The local copy was for videos and photos, and the local backup was for the settings. Now it seems the interface has changed.

https://preview.redd.it/best-strategy-for-backup-v0-3cet31mp2huf1.png
https://preview.redd.it/best-strategy-for-backup-v0-da1l3tks2huf1.png
https://preview.redd.it/best-strategy-for-backup-v0-q5exquxv2huf1.png
https://preview.redd.it/best-strategy-for-backup-v0-6uvbaczy2huf1.png

Thanks.


Cloud Sync log column width will not change
DSM

I know how the column width works, in Log Center (for example) all is well: hover over the column heading divider, the double-vertical-bar with left/right arrows appears, click and drag. But in the Cloud Sync package, log display, this does not work. The double-bar-arrows appears but upon click it cancels and acts like the column heading was clicked to sort. There is also a forum entry about this, more than a year old: https://community.synology.com/enu/forum/1/post/188046 but nothing helpful there. Anyone have any ideas?




DS423+ "Share with others" and "Download File" fail consistently if the file is larger than ~100 MB
NAS hardware

I've been dealing with this since I got the DS423+, and I haven't found any successful hits searching on my own. Everything with a "thanks, that helped" answer has the main comment deleted.

System is a DS423+ with 10 GB RAM.

Issue: If I use the "share with others" feature in DSM by right-clicking a file, hitting Share, and generating a link, that link will fail 9 times out of 10 for my shared-file partner if the file is larger than a dozen megabytes. It also fails if I use File Station myself, right-click, and hit Download. The downloads always start but never complete.

Details: These are all remote connections. I don't think it's a port-forwarding issue, as the same thing happens while using Tailscale, through QuickConnect, or without either. Multiple machines and connections all have the same issue. Uploading files doesn't error; I have uploaded many large files across my year of ownership without complaint. However, sharing them once they get there is so unreliable I think it might just be a vault for future generations instead of a useful collaboration tool.

Steps I've taken: disabling QuickConnect, disabling Tailscale, changing link authorization to none, enabling link authorization for admins only, disabling SMB, disabling SMB1/2, disabling HTTP/2, enabling HTTP/2, and toggling HTTP compression on and off.

Any tips?


EULA: Synology is officially dead :(
DSM

Here we go: forced arbitration and a class-action ban in the EULA. Not sure when this slipped in, but the latest firmware update wants me to sign it. :( Too bad. I used to like this NAS. It was feature-rich and did not abuse me... until now. It started with the hard drive vendor-locking and now this... Also, the EULA cannot be searched with Ctrl+F, as that is overridden, and it cannot be highlighted to be searched in another text editor. Of course, one can get around this with developer tools, but that is beside the point that this is all scumbag behaviour.

UPDATE: To the helpful people repeatedly pointing out that this clause has been in there since 2023: I know now, and as I originally said, I didn't care to pin down when it slipped in... it wants me to sign it now, and I happened to notice it now. But thx.

UPDATE 2: Active Backup is not found in the DSM 7.3 package manager, and 3.0 is incompatible for some reason, but the package exists: https://archive.synology.com/download/Package/ActiveBackup/3.1.0-24948
Manual install worked for me.

https://preview.redd.it/eula-synology-is-officially-dead-v0-uvzyd80zwvtf1.png


Upgraded DS1019+ to DSM7.3-81180 - Plex HW encoding is working
DSM

I manually downloaded DSM 7.3 from https://www.synology.com/en-uk/support/download/DS1019+?version=7.3#system and wanted to verify whether HW-encoding support still works, as the release notes state this:

Introduced in DSM 7.2.2 and continued in DSM 7.3, media file processing for HEVC (H.265), AVC (H.264), and VC-1 codecs is now handled by end devices to improve system efficiency. These codecs are widely supported on end devices such as smartphones, tablets, computers, and smart TVs. If the end device does not support the required codecs, the use of media files may be limited.

The upgrade marked PHP 7.3 as no longer supported; TBH, I have no idea why it was installed, and I just removed it.

The upgrade worked fine, and I had to restart afterwards so that the patches from https://github.com/007revad/Synology_HDD_db would work. After the 2nd reboot, Container Manager was in an "invalid" state and needed to be repaired. The repair wiped all installed containers, volumes, and networks and upgraded to Container Manager 24.0.2.1606.

I do not use any of the Synology packages, as I have everything in Docker. All of this works fine without a problem.



Help me understand, dual-port 10GbE network switch
Networking & security

I’ll be short and sweet

Primary network: 192.168.1.0/24
Storage network: 10.10.10.0/24 (SFP+ DAC connected)

I’ve got a dual-port 10GbE SFP+ NIC installed in my Synology. No issues: the Synology can see the NIC, I can assign an IP (10.10.10.3) on the Synology and 10.10.10.4 on my Proxmox host, Proxmox can ping the Synology, etc.

Note this is an SFP+ card and I’m directly connected (DAC), no switch; it works perfectly.

Now the issue: if I place my second Proxmox host (DAC-connected to the storage network) in an HA cluster on the primary network 192.168.1.0/24, the cluster works perfectly fine; however, the storage on the 10.10.10.x network is not visible to the second Proxmox host, and the second Proxmox in the cluster can’t ping the storage like the first host can.

I think I’m missing something fundamental


Migrating to new 25 model on DSM7.3 with migration assistant
DSM

I just jumped the gun on a new DS925+ to replace my old 415+ (I thought long about migrating to TrueNAS Scale, but decided against it when Synology rolled back their HDD restriction policy).

DSM 7.3 still seems to think that foreign drives are a bit unhealthy (I use Seagate IronWolf Pros). I set up the new NAS, and it showed healthy status on my new SHR storage pool and volume1. Starting the migration assistant, it stopped and told me I should go to the storage manager to check the abnormal storage status on my new NAS.

I was close to jumping to migration via Hyper Backup, but then I remembered that Synology probably changed their compatibility decision on very short notice for the developers, resulting in the system showing healthy status while internally it would still show not-supported somewhere.

Not happy with this, I turned to the script to add my drives to the compatibility list (GitHub - 007revad/Synology_HDD_db: Add your HDD, SSD and NVMe drives to your Synology's compatible drive database and a lot more), and yes, after running it once and rebooting, the migration assistant accepted my storage configuration.

It is running now.

So a little warning to all who think of going back to Synology: you may be in for a surprise or two. I logged a ticket with Synology for this as well, as I truly believe this to be an error and not intentional.



Cozy tile-placing perfection awaits! Easy to learn, endlessly satisfying to master. Join our community challenges, play the demo NOW, and wishlist to gain rewards. Releasing 20th November on Steam. Your next obsession starts here ✨


Outgrowing my DS220+ in CPU/RAM demands - Upgrade paths
Solved

Hi, I have a DS220+ with a single 10TB drive in it and a manual backup strategy to the cloud.

I have under 5TB used, so I am nowhere near filling it up, unlike a similarly named thread.

I can add another drive, but I would like to park that discussion.

My main issue: I use the DS220+ as our home private cloud, and I keep adding more and more k8s containers to it. Already with 5 containers, they occasionally fail to start or even restart. I haven't added RAM, but my thinking is the CPU seems too underpowered as it is.

What would be your recommendation for a more powerful unit in terms of CPU/RAM (hosting containers)?

My storage needs definitely do not require more than 2 bays, even one would be fine.

I do like the stability of the DS220+ so far and it works really well with the UPS i dedicated to it and my home networking devices. So, preferably sticking to Synology.

Edit: I am using a lot of the DSM software, so I am reluctant to move to an additional mini PC as suggested. I would rather get a more powerful DSM box that is more compute- than storage-oriented. Not ruling out mini PCs, but making them a last resort really.

Edit 2: Here's some CPU profile, showing what happens when my backup solution runs and simply lists all the files to detect if their last changed date has changed:
https://imgur.com/a/4lDWdGx

I wonder what that 80% CPU usage by I/O wait is about.

Edit 3: Just installed an extra 8GB of RAM, and even the UI seems faster. Thanks to the fellow redditors who suggested extra RAM and provided realistic expectations.

This post can be closed



First time NAS buyer, need help with my decision
NAS hardware

Hi all,

I am looking at purchasing my first NAS. I will be using it mainly for dumb storage, and I would like the ability to spin up VMs using iSCSI targets. I may also want to run a Plex server, but I will be doing the transcoding client-side.

I’m looking at getting a DS923+, since they are available locally for a decent price and have the ability to expand to 10GbE. I’m staying away from the 925+ due to the anti-consumer drive lockdown.

I’ve done a bit of research and I find the Synology OS and app support appealing. But the old hardware is a turn off.

I’m not sure I feel completely secure going with UGREEN because they are so new, and I am a big advocate for data privacy. The NAS will run on its own VLAN down the road, but not initially.

Am I better off getting a DS923+ with better app support, a defined security patch roadmap and sacrificing a bit of performance over the UGREEN NAS? Is the DS923+ sufficient for heavy VM usage in a home lab? I don’t really want to go the DIY route.

I also saw that I could get the UGREEN and flash a custom OS like TrueNAS, but then wouldn’t I lose all the app support, which is the main appeal of buying a prebuilt?

What would you guys do in my situation?



GUIDE: Real-debrid Plex integration using rdtclient, cli_debrid, zurg and rclone on Synology
Tutorial

This guide is for anyone who would like to get real-debrid working with Plex on Synology or Linux, and I would like to share it with the community. Please note that it's for educational purposes only.

What is Real-Debrid and why use it

A debrid is a service that converts a torrent URL into an HTTP/WebDAV downloadable file. Not only can you download at max speed; more importantly, you are not uploading or seeding the torrents, so there are no legal issues and it's private (no one knows you downloaded the file). Hence it's actually the safest way to handle a torrent download.

Among all the debrid services, real-debrid (RD) is the biggest, with almost all popular torrents cached, so downloads are instant. The limits are also very generous (2TB of downloads per 24 hours and unlimited torrents), and it's cheap: €16 for 6 months. If you are looking for alternatives, the order I recommend is below, but most tools integrate with real-debrid.

real-debrid > torbox > Premiumize

I already have an *arr setup to retrieve content from Usenet; however, some rare content is not available on Usenet, such as Asian content, which is why I needed to explore torrent territory.

You may say: I can torrent for free, why pay for a debrid? Well, it's not actually free if you value privacy. You need to pay for a VPN service, and on top of that port forwarding; currently only about four VPN providers offer port forwarding (PIA, ProtonVPN, AirVPN, and Windscribe). Among them, PIA is the cheapest if you pay upfront for 3 years: about $2 + $2 for port forwarding, which comes to $4/month, or $24 for 6 months. And you have to deal with slow downloads, stalled downloads, and hit-and-runs; for private trackers you have to deal with long seeding times/ratios of up to 14 days; and since you use a static IP with port forwarding, there is always a tiny chance your privacy is not guaranteed.

With real-debrid, you submit a URL and instantly download at max speed the next second, with your privacy intact.

OK, enough with the intro to real-debrid. Without further ado, let's get started.

There are two ways to integrate real-debrid with Plex:

  1. Use rdtclient to simulate qbittorrent so *arr can instantly grab files

  2. cloud plex with unlimited cloud storage

Before you start, you need a real-debrid account and your API key.

https://real-debrid.com/
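Before wiring the key into any container, it can help to confirm it works with a direct API call. The sketch below uses only the standard library; the base URL and /user endpoint are from Real-Debrid's public REST API documentation, but verify them there before relying on this:

```python
import json
import urllib.request

API = "https://api.real-debrid.com/rest/1.0"

def build_request(endpoint: str, token: str) -> urllib.request.Request:
    """Prepare an authenticated GET request for the Real-Debrid REST API."""
    req = urllib.request.Request(f"{API}{endpoint}")
    req.add_header("Authorization", f"Bearer {token}")
    return req

def whoami(token: str) -> dict:
    """Fetch the account details for the given API token (requires network access)."""
    with urllib.request.urlopen(build_request("/user", token), timeout=10) as resp:
        return json.load(resp)
```

If `whoami(your_key)` returns your account details, the key is good; a 401 means the key was pasted wrong or revoked.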

Method 1: rdtclient as debrid bridge to *arr

There are two apps to bridge a debrid to *arr, rdtclient and Decypharr; I chose rdtclient for its ease of use.

https://github.com/rogerfar/rdt-client/blob/main/README-DOCKER.md

Copy and save the docker-compose.yml with your own paths for the Docker config and media, and your own PUID and PGID, e.g.:

---
version: '3.3'
services:
  rdtclient:
    image: rogerfar/rdtclient
    container_name: rdtclient
    environment:
      - PUID=1028
      - PGID=101
      - TZ=America/New_York
    volumes:
      - /volume2/path to/config/rdtclient:/data/db
      - /volume1/path to/media:/media
    logging:
      driver: json-file
      options:
        max-size: 10m
    ports:
      - 6500:6500
    restart: unless-stopped

I use Synology; I put the config on my NVMe volume2 and point media to HDD volume1. After that's done, run the container:

docker-compose up -d; docker logs -f rdtclient

If all is good, press Ctrl-C to quit, then open a browser to the internal IP: http://192.168.x.x:6500

Create an account and remember the username and password, which you will enter into the *arr settings. Then enter your Real-Debrid API key.

Go to Settings. On the General tab, under Banned trackers, fill in any private tracker keywords you have.

On the Download Client tab, use the Internal Downloader and set the download path and the mapped path to the same value (in my case both /media/downloads).

On the qBittorrent/*arr tab:

  • Post Torrent Download Action: Download all files to host

  • Post Download Action: Remove torrent from client

  • Only download available files on debrid provider (so it will download even if it's not cached)

  • Delete download when in error: 5

  • Torrent maximum lifetime: 7200

Keep the rest the same for now and save the settings.

For Radarr/Sonarr, I recommend Prowlarr for simple indexer management. Go to each private tracker and manually set qBittorrent as the download client; we don't want rdtclient accidentally picked up by a private tracker indexer and getting your account banned.

In Radarr/Sonarr, add a qBittorrent client and name it rdtclient. Use the internal IP and port 6500; for the username and password, use the rdtclient login you just created. Set Client Priority to 2, then Test and Save.

The reason we set the priority to 2 is that although it's blazing fast, you can easily eat up 2TB in a few hours if you have a good connection. Let Usenet be first since it's unlimited, and set your old qBittorrent, if you have one, to priority 3.

Now pick a random item in *arr and run an interactive search; choose a BitTorrent link and it should instantly be downloaded and imported. You can go back to rdtclient to watch the progress. In *arr, the progress bar may be incorrect and show the download as halfway done when the file is actually finished.

There is an option to mount RD as WebDAV with rclone for rdtclient, but rdtclient already downloads at maximum speed, so rclone is not needed.

Method 2: Cloud Plex with Unlimited Storage

Is it possible? Yes! Cloud Plex and real-debrid are back, with a vengeance. No longer do you need to pay hundreds to Google; just $3/m to RD gets you max speed, enough for a few 4K streams.

This is a whole new beast of a stack that completely bypasses the *arr stack. I suggest you create separate libraries in Plex, which I will cover later.

First of all, I would like to give credit to hernandito from unraid forum for the guide on unraid: https://forums.unraid.net/topic/188373-guide-setup-real-debrid-for-plex-using-cli_debrid-rclone-and-zurg/

Create media share

First you need to decide where to put the RD mount. It has to be somewhere visible to Plex. I mount my /volume1/nas/media to /media in containers, so I created the folder /volume1/nas/media/zurg

zurg

What is zurg and why do we need it?

zurg mounts your RD as a webdav share (via rclone) and creates virtual folders for different media types, such as movies, shows, etc., making it easy for Plex to import. It also unrars files, and if RD deletes any file from its cache, zurg will detect that and re-request it, so your files are always there. Without zurg, all files are jammed into the root folder of RD, which makes it impossible for Plex to import properly. This is why, even though rclone alone can mount the RD webdav share, you still need zurg for Plex and ease of maintenance.

To install zurg, git clone the free version (called zurg-testing).

git clone https://github.com/debridmediamanager/zurg-testing.git

Go to the directory and open config.yml, then add your RD token to token on line 2. Save and exit.

Go to the scripts folder and open plex_update.sh; add plex_url, token, and zurg_mount (in container). Save and exit.

Go one level up and edit docker-compose.yml. Change the mounts, i.e.

version: '3.8'

services:
  zurg:
    image: ghcr.io/debridmediamanager/zurg-testing:latest
    container_name: zurg
    restart: unless-stopped
    ports:
      - 9999:9999
    volumes:
      - ./scripts/plex_update.sh:/app/plex_update.sh
      - ./config.yml:/app/config.yml
      - zurgdata:/app/data
      - /volume1/nas/media:/media

  rclone:
    image: rclone/rclone:latest
    container_name: rclone
    restart: unless-stopped
    environment:
      TZ: America/New_York
#      PUID: 1028
#      PGID: 101
    volumes:
      - /volume1/nas/media/zurg:/data:rshared # change the host path to your preferred mount path
      - ./rclone.conf:/config/rclone/rclone.conf
    cap_add:
      - SYS_ADMIN
    security_opt:
      - apparmor:unconfined
    devices:
      - /dev/fuse:/dev/fuse:rwm
    depends_on:
      - zurg
    command: "mount zurg: /data --allow-other --allow-non-empty --dir-cache-time 10s --vfs-cache-mode full"

volumes:
  zurgdata:

Save. If you are using Synology, you need to enable shared mounts so the rclone container can expose its mount to the host; otherwise it will error out.

mount --make-shared /volume1
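Note that this flag does not survive a reboot. A minimal sketch of a root "Boot-up" task for DSM Task Scheduler that re-applies it (assuming your share lives on /volume1, as in my setup):

```shell
#!/bin/sh
# Re-apply the shared-mount flag at boot; it is not persistent on DSM.
# Schedule this as a root task in Task Scheduler, triggered on Boot-up.
mount --make-shared /volume1

# Sanity check: the /volume1 entry in mountinfo should now carry "shared:"
grep ' /volume1 ' /proc/self/mountinfo | grep -q 'shared:' \
    && echo "/volume1 is shared" \
    || echo "warning: /volume1 is not shared"
```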

Afterwards, fire it up:

docker-compose up -d; docker logs -f zurg

If all is good, Ctrl-C out and go to /your/path/zurg; you should see some folders there.

__all__  movies  music  shows  __unplayable__  version.txt

If you don't see them, zurg didn't start correctly. Double check your RD token and mounts.

You may also go to http://192.168.x.x:9999, where you should see the status page.
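The same checks can be scripted. This sketch assumes my mount path from above; adjust ZURG_URL and MOUNT to yours:

```shell
#!/bin/sh
# Sketch: verify zurg answers over HTTP and the rclone mount shows
# the virtual folders. Host and path below are examples -- adjust.
ZURG_URL=http://192.168.x.x:9999
MOUNT=/volume1/nas/media/zurg

curl -fsS -o /dev/null "$ZURG_URL" && echo "zurg status page: OK"

for d in __all__ movies shows music; do
    if [ -d "$MOUNT/$d" ]; then
        echo "$MOUNT/$d: OK"
    else
        echo "$MOUNT/$d: MISSING"
    fi
done
```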

Create a folder for anime if you like by updating the config.yml. i.e.

zurg: v1
token: <token>
# host: "[::]"
# port: 9999
# username:
# password:
# proxy:
# concurrent_workers: 20
check_for_changes_every_secs: 10
# repair_every_mins: 60
# ignore_renames: false
# retain_rd_torrent_name: false
# retain_folder_name_extension: false
enable_repair: true
auto_delete_rar_torrents: true
# api_timeout_secs: 15
# download_timeout_secs: 10
# enable_download_mount: false
# rate_limit_sleep_secs: 6
# retries_until_failed: 2
# network_buffer_size: 4194304 # 4MB
# serve_from_rclone: false
# verify_download_link: false
# force_ipv6: false
on_library_update: sh plex_update.sh "$@"
#on_library_update: sh cli_update.sh "$@"
#for windows comment the line above and uncomment the line below:
#on_library_update: '& powershell -ExecutionPolicy Bypass -File .\plex_update.ps1 --% "$args"'

directories:
  anime:
    group_order: 10
    group: media
    filters:
      - regex: /\b[a-fA-F0-9]{8}\b/
      - any_file_inside_regex: /\b[a-fA-F0-9]{8}\b/

  shows:
    group_order: 20
    group: media
    filters:
      - has_episodes: true

  movies:
    group_order: 30
    group: media
    only_show_the_biggest_file: true
    filters:
      - regex: /.*/

  music:
    group_order: 5
    group: media
    filters:
      - is_music: true

save and reload.

docker-compose restart

Plex

Before we start, we need to disable all media scanning, because scanning large cloud media will eat up the 2TB limit in a few hours.

Go to Settings > Library, enable partial and auto scan, set Scan my library periodically to disabled, and set never for all of: generate video preview, intro, credits, ad, voice, and chapter thumbnails, and loudness. I know you can set these per library, but I found Plex sometimes ignores the library setting and scans anyway.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-40nkh1rr46sf1.png

To be able to see the new rclone mounts, you would need to restart plex.

docker restart plex

Create a library for movies, name it Movies-Cloud, point to /your/path/to/zurg/movies, disable all scanning, save. Repeat the same for Shows-Cloud, Anime-Cloud and Music-Cloud. All folders are currently empty.

Overseerr

You should have a separate instance of Overseerr dedicated to the cloud, because the cloud setup has different libraries and a different media retrieval method.

Create a new Overseerr instance, say overseerr2; connect it to Plex and choose only the cloud libraries, with no Sonarr or Radarr. Set auto-approve for users, and email notifications if you have them. Requests will be sent to cli_debrid, and once the file is there, Overseerr will detect it, show it as available, and optionally send an email and newsletter.

cli_debrid

Follow the instructions at https://github.com/godver3/cli_debrid to download the docker-compose.yml

cd ${HOME}/cli_debrid
curl -O https://raw.githubusercontent.com/godver3/cli_debrid/main/docker-compose.yml

You need to pre-create some folders.

mkdir db_content config logs autobase_storage_v4

Edit docker-compose.yml and update the mounts, i.e.

services:
  cli_debrid:
    image: godver3/cli_debrid:main
    pull_policy: always
    container_name: cli_debrid
    ports:
      - "5002:5000"
      - "5003:5001"
      - "8888:8888"
    volumes:
      - /volume2/nas2/config/cli_debrid/db_content:/user/db_content
      - /volume2/nas2/config/cli_debrid/config:/user/config
      - /volume2/nas2/config/cli_debrid/logs:/user/logs
      - /volume1/nas/media:/media
      - /volume2/nas2/config/cli_debrid/autobase_storage_v4:/app/phalanx_db_hyperswarm/autobase_storage_v4
    environment:
      - TZ=America/New_York
      - PUID=1028
      - PGID=101
    restart: unless-stopped
    tty: true
    stdin_open: true

Since I run this on Synology, where ports 5000 and 5001 are reserved, I had to change them to 5002 and 5003. Save and start the container.

Open http://192.168.x.x:5000 (or http://192.168.x.x:5002 on Synology)

Login and start the onboarding process.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-utxgw8tz56sf1.png

Set admin username and password. Next.

Tip: click on "Want my advice" for help

For File Collection Management, keep Plex. Sign in to Plex. Choose your server and the cloud libraries.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-3pgwafh156sf1.png

Click Finish.

Update Original Files Path to yours, i.e. /media/zurg/__all__

Add your RD key and your Trakt Client ID and Secret. Save and authorize Trakt.

For scrapers, add torrentio and nyaa with no options: torrentio for regular stuff and nyaa for anime.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-aqs6ug5356sf1.png

For versions, I chose the middle option, keeping both 4K and 1080p versions of the same media.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-shwzaxj456sf1.png

Next.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-93g1v79x56sf1.png

For content sources, I recommend going easy, especially in the beginning, so you don't end up queuing hundreds of items, hitting your 2TB limit in a few hours, and needing to clean up. We will add more later.

I recommend choosing Overseerr for now. Overseerr will also take care of user watchlists etc.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-tbb5mg1756sf1.png

For Overseerr, select allow specials, add the Overseerr API key, enter the Overseerr URL, and click add. Remember to use the second instance of Overseerr.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-g9elo6iv56sf1.png

Choose I have an existing Plex library (Direct mount), click next, and scan the Plex library.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-8ah91a0a56sf1.png

and done.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-g70wzgrt56sf1.png

Click Go to dashboard, then System, Settings, and go to Additional Settings.

In UI settings, make sure Auto run program is enabled. Add your TMDB key.

For the queue, I prefer the Movies first soft order, sorted by release date descending.

For subtitle settings, add your OpenSubtitles account if you have a pro account.

On the Advanced tab, change the logging level to INFO, enable allow partial overseerr requests, enable granular version addition, and enable unmatched items check.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-9dsvklqc56sf1.png

Save settings.

Now to test, go to Overseerr and request an item. cli_debrid should pick it up and download it; you should soon get an email from Overseerr if you set up email, and the item will appear in Plex. You can click on the rate limits in the middle of the screen to see your limits, and also check the home screen.

What just happened

When a user submits a request in Overseerr, cli_debrid picks it up, uses torrentio and nyaa to scrape torrent sources, sends the torrent/magnet URL to Real-Debrid, and blacklists anything non-working or non-cached. Real-Debrid saves the file (a reference) to your account in the __all__ folder, and zurg analyzes the file and references it in the correct virtual media folder. Since it's the webdav protocol, it appears as a real file (not a symlink), so Plex picks it up, and Overseerr marks it as available and sends you an email.

We purposely point cli_debrid at __all__ instead of the zurg folders because we want zurg to do the managing; if cli_debrid manages them, it will create symlinks, which are not compatible with Plex.

Also make sure Plex starts after zurg, otherwise the mount may not work. One way to fix this is to embed Plex in the same docker-compose.yml and add a depends_on clause for rclone.
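For example, a sketch of such a service for the same docker-compose.yml (the plex service name and image are assumptions; adapt them to however you run Plex):

```yaml
  # Hypothetical Plex service in the same compose file as zurg/rclone.
  # depends_on only orders container *start*; it does not wait for the
  # mount to be fully ready, but in practice that is usually enough here.
  plex:
    image: lscr.io/linuxserver/plex:latest
    container_name: plex
    restart: unless-stopped
    network_mode: host
    environment:
      TZ: America/New_York
    volumes:
      - /volume1/nas/media:/media
    depends_on:
      - rclone
```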

Adjust backup

If you back up your media, make sure to exclude the zurg folder from the backup, or it will again eat up 2TB in a few hours.

You may also back up your collection on RD with a tool such as https://debridmediamanager.com/ (DMM), which you can also self-host if you like. Connect it to RD and click the backup button to get a JSON file, which you can import into other debrid services using the same DMM to repopulate your collection.

Remember, cloud storage doesn't belong to you. If you cancel or get banned, you will lose access. You may still want to keep a media library on your NAS, but only store your favorites.

More Content Sources

Because RD is so fast, it's easy to eat up the 2TB daily limit; even Plex scanning files takes a lot of data. I suggest waiting half a day or a day and checking the queue, speed, and rate limit before adding more sources.

If you accidentally added too many, go to cli_debrid System > Databases, sort by state, and remove all the Wanted items: click the first Wanted item, scroll down, shift-click the last Wanted item, and delete.

I find the special Trakt lists are OK but sometimes have old stuff. For content, I like the Kometa lists and other sources, which you can add; remember to add a limit to each list, like 50 or 100, and/or set a cutoff date, such as release date greater than a given date (YYYY-MM-DD format) or within the last X days. I'm only interested in anything within the last 5 years, so I use 1825 (days).

https://trakt.tv/users/k0meta/lists
https://trakt.tv/discover
https://trakt.tv/users/hdlists/lists

Tip: the easiest way is to like all the lists you want and then click import liked lists for the Trakt content source.

Alternatively, just do it from overseerr, so you only get the items you are interested in.

A finishing touch: Kometa

Kometa will create collections for Plex so it looks fancier. Create a Kometa docker container.

https://github.com/Kometa-Team/Kometa

For config.yml libraries configuration, I recommend the below.

libraries:                           # This is called out once within the config.yml file
  Movies-Cloud:                         # These are names of libraries in your Plex
    collection_files:
    - default: tmdb
      template_variables:
        sync_mode: sync
    - default: streaming
  Shows-Cloud:
    collection_files:
    - default: tmdb
      template_variables:
        sync_mode: sync
    - default: streaming
  Anime-Cloud:
    collection_files:
      - default: basic               # This is a file within Kometa's defaults folder
      - default: anilist             # This is a file within Kometa's defaults folder

After running it, go to the Collections tab of each library, click the three dots, choose Visible on, and select all.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-3nvgt2tr56sf1.png

Do it for all TMDB and network collections just created.

Afterwards, go to Settings > Manage > Libraries. Hover over each library and click Manage Recommendations. Move TMDB to the top.

https://preview.redd.it/guide-real-debrid-plex-integration-using-rdtclient-cli-v0-8ldvstc7i6sf1.png

Do it for all libraries.

Now go to the home page and check. If your libraries are not showing, click on More, then pin your libraries.



Synology Drive on Mac and "Unsync" feature - is this expected behavior? I might have lost some data here.
NAS Apps

A few days ago I reorganized a large collection of footage on my M1 Macbook. There were lots of folders and inside each folder was an Original and Proxy folder. While I was reorganizing, I watched my server file structure update accordingly, marking each local folder as synced as I went.

After that I went through and right clicked each "Originals" folder and selected "Stop Syncing This folder" after which I deleted the local footage, leaving only the proxy files. Again I watched the server to make sure the originals remained there. They did.

It's a couple days later, and I look on the server to see that neither the originals nor the proxy files are there. And locally, all my empty Originals folders appear to be syncing still, as if my "Stop Syncing" command didn't stick.

Synology Drive Client appeared to be hung up. I restarted the process and it started moving my local proxy files over to the server but doesn't seem to remember or recognize any of the original footage.

I *THINK* most or all of my original footage is in the recycle bin. I'll be going through and meticulously restoring it. I didn't have versioning turned on but you better believe I will going forward.

My question here is: what happened? How do I avoid this? Is Drive just not a reliable program to keep large amounts of data in sync with my server? I thought I was being careful and thorough, so it's especially frustrating to be in this situation now.

ChatGPT said the following:

  1. **Drive Client doesn’t distinguish “unsync” vs “delete” well.** When you “unsynced” those Original folders and then deleted the local copies, Drive probably interpreted that as you wanting those deletions pushed to the server as well. Even if you thought you had paused sync or told it to stop syncing those folders, the client likely still had the link in place and mirrored your deletions upstream.

Does this ring true? If so then I've really lost a lot of faith in the software.

Thank you for reading this far and any insight you might have 🙏

DS1819+
DSM 7.2.1 69057 Update 8 (odd because I thought I was up to date a couple days ago)
Drive Client: 3.5.0-16084
I can't seem to find my Drive Admin version number....
MBP M1, Sequoia 15.6.1


Diskstation volume warning
NAS hardware

To try to make a very long story short, I got a warning on my volume even though my disks reported that they were healthy. They weren't on the compatibility list, so I replaced all three with larger "compatible" drives. After many days of trying to resolve the warning, I finally decided to make a full backup with Hyper Backup and rebuild the whole volume. After 24 hours, the backup gets into a state where it has copied all the data but is doing nothing; it says it's 33% done and won't finish. Suspending the backup causes it to fail, and it must start over from the beginning. There is no information anywhere about what it's doing or what the errors are. Nothing. I've used multiple DiskStations for years, but I'm getting fed up.


Someone finally cracked HW transcoding on x25 series
DSM

I started my self-hosted journey this year and went with the Synology DS225+. But guess what, it took me a while to understand that it is a downgrade from the DS224+. Even transcoding didn't work. Someone finally built an UNOFFICIAL script to re-enable it. Works like a charm on my DS225+, DSM 7.2.2-72806-4 (Jellyfin).

https://github.com/007revad/Transcode_for_x25

Just ensure you run it again on every reboot (e.g. via Task Scheduler).

Note: While it didn't cause any issues on my device, it is unofficial. So, please proceed with caution. I take no responsibility for any damage.

Edit: as mentioned here https://www.reddit.com/r/synology/s/qq77STVgeB, the script downloads 3rd-party files. It would be wise to download and keep the files in your own/forked repo to ensure no files are swapped without your knowledge.


iPhone to NAS - not all files transferred + files not supported
NAS Apps

Complete Novice,

I have a DS418.

Transferring photos/videos from iPhone to folders.

Issue 1: Not all files transferred the first time (lucky I noticed before deleting from my phone)...

---- any thoughts on why?

Issue 2: iPhone (QuickTime) videos are saved to the Synology as .MOV or .MP4 with a significantly reduced file size. These can't be played; instead they automatically download and then are unsupported by Windows.

----- Thoughts?







User with full permissions to all shares unable to see the folders within one out of the five shares that are set up on the Synology. Problem only occurring on Windows 11 workstation. Works fine on existing Windows 10 workstation.
Networking & security

Migrating a client to a brand-new Windows 11 computer to replace his aging Win10 workstation. What makes this problem odd is that the user does not have this problem on the Windows 10 workstation, but on the new Windows 11 workstation I am setting up for him, this one share on the Synology file server will not display any folders within it. It's just blank. But weirdly, the Win11 machine has no issue seeing the subfolders in any of the other four shares. The machine is only running the built-in Windows security, so no third-party AV or security software is on board. He has full admin rights on all the shares, so it's not a permissions issue, especially since it works fine on the Windows 10 Pro workstation, and I am logging into the Win11 workstation with the same domain user account as he uses on his current Windows 10 workstation. I am at my wits' end and cannot figure out what's going on, especially since the issue does not occur with the other shares on the Synology. I wonder if anyone can shed any light on this.

Thanks in advance

Just to add a few things:
First, it is still running DSM 6.2.4. The share in question, called Users, contains each user's personal documents folder. They all only have rights to their own folder, except him, since he is the boss. He has full administrator permissions to all the folders in the Users share, including his own. And a very important correction and point I forgot to mention: initially, when I went to the share for the first time, I was able to access all the folders. I even changed his local Documents location to point to his folder on the share. And strangely, his folder is still viewable and accessible, but every single other user folder no longer shows up in the share. I even tried typing the UNC path to one of the other user folders, but it gives that typical network error "Windows cannot access \\fileserver\Users\<example>", even though he has the same full rights to <example> as he has to his own user folder. It would be less strange if all the folders showed up but could not be opened. But the fact that the share opens fine and his own user folder is still accessible, yet none of the other user folders show inside, is what makes it more odd. Again, all of this works fine on his existing Windows 10 machine.









412+ Update into DSM 7. Finally made it !!!!
DSM

Since the DS412+, just like the DS713+ or RS814+, uses the Cedarview D2700 Atom processor, it can still run DSM 7.1.
Here’s some information on how to do it.

#Prerequisites for installation

+ Download DSM from Synology Download Center for rs814+ ( https://global.download.synology.com/download/DSM/release/7.0.1/42218/DSM_RS814%2B_42218.pat )
+ 1x NAS DS412+ with DSM-6.2.4
+ 1x HDD/SSD ( SSD is just faster )

 

#Upgrade DSM-6.2.4 to DSM-7.1.1 Update1

  1. Connect one HDD/SSD to DS412+ NAS with DSM-6.2.4

  2. Boot the DS412+ NAS with DSM-6.2.4 and access it via browser at <NAS-IP>

  3. At Welcome page just click "Setup"

  4. Click "Manual installation"

  5. Select the downloaded DSM for rs814+ ( https://global.download.synology.com/download/DSM/release/7.0.1/42218/DSM_RS814%2B_42218.pat ) and click "Install now"

  6. Confirm that data will be erased on the HDD/SSD

  7. Wait until it fails and you get a message that you need to upload the DSM for the 412+ ( this is normal )

  8. Open PuTTY and telnet to the <NAS-IP> on port 23

  9. Diskstation login: root / Password: 101-0101

  10. Edit the following files (copy one of the file path lines and paste it with a right click in PuTTY; look for the lines to edit, press "a" for edit mode, edit, press ESC, type ":wq", press ENTER, then do the same with the next file):

    files: vi /etc.defaults/synoinfo.conf and vi /etc/synoinfo.conf

What to edit?

FROM:

unique="synology_cedarview_412+"
upnpmodelname="DS412+"

to:

unique="synology_cedarview_rs814+"
upnpmodelname="rs814+"

*******************************************************

then edit both following files:

vi /etc.defaults/VERSION

vi /etc/VERSION

with these text (press "a" to edit, edit, then ESC, then write ":wq" and ENTER)

majorversion="6"
minorversion="2"
productversion="6.2.3"
buildphase="GM"
buildnumber="25426"
smallfixnumber="0"
builddate="2020/07/01"
buildtime="06:24:39"

************************************************************

then edit both following files:

vi /etc.defaults/synoinfo.conf
vi /etc/synoinfo.conf

with these text (press "a" to edit, edit, then ESC, then write ":wq" and ENTER)

unique="synology_cedarview_rs814+"
esataportcfg="0x40"
internalportcfg="0xf"
usbportcfg="0x70000"
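If you prefer not to hand-edit in vi, the same synoinfo.conf changes can be scripted with sed. A sketch (it keeps a .bak copy of each file; double-check the .bak files before rebooting):

```shell
#!/bin/sh
# Same edits as above, done with sed instead of vi.
for f in /etc.defaults/synoinfo.conf /etc/synoinfo.conf; do
    cp "$f" "$f.bak"   # keep a backup copy
    sed -i 's/^unique=.*/unique="synology_cedarview_rs814+"/' "$f"
    sed -i 's/^esataportcfg=.*/esataportcfg="0x40"/' "$f"
    sed -i 's/^internalportcfg=.*/internalportcfg="0xf"/' "$f"
    sed -i 's/^usbportcfg=.*/usbportcfg="0x70000"/' "$f"
done
```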

Then type reboot and press ENTER

then wait 20 seconds for the reboot

Then enter the NAS IP in your browser. You should see that the system tells you that you have an rs814+.

The system will now ask you to "Migrate" to rs814+; just hit "Migrate"

Select "New installation"
Select "Manual installation"
Select the DSM.pat for rs814+ ( https://global.download.synology.com/download/DSM/release/7.0.1/42218/DSM_RS814%2B_42218.pat ) and click "Install now"
DiskStation will install the new DSM and after install you will see 'Welcome to DSM 7.1' ( just complete the setup as you want )

Finally you will be able to update the NAS with the latest updates.

*******************************************************

My procedure is a mix of the two guides below; when I tried them individually, they didn't work.

https://xpenology.com/forum/topic/70635-dsm-711-on-synology-ds412-how-to-install-and-update-the-dsm-624-to-dsm-711-42962-update-6/

https://tweakers.net/downloads/61774/synology-dsm-711-build-42962.html


How to set up a temperature alarm?
NAS hardware

Hello,

I need to be notified (via email) if the temperature of a Synology NAS (or the drives inside) gets above a specific, manually set value, like 30 degrees.

Is such a thing possible? I have 3 Synology units in a room with AC, and the normal temperature inside is 20-22 degrees. I'd like to be notified if the temperature of the NAS or disks gets above 30 degrees, which would indicate something happened with the AC.
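There is no built-in alarm at a custom threshold like 30°C, but a scheduled script can do it. A sketch, assuming smartctl is available (it ships with DSM) and the drives appear as /dev/sata1..sata4 (on some models they are /dev/sda..sdd instead); schedule it in Task Scheduler with "send run details by email" enabled:

```shell
#!/bin/sh
# Sketch: warn when any drive exceeds THRESHOLD degrees C.
# Run as root from DSM Task Scheduler (e.g. every 5 minutes).
THRESHOLD=30

temp_of() {
    # SMART attribute 194 is Temperature_Celsius; column 10 is the raw value
    smartctl -A "$1" | awk '$1 == 194 { print $10 }'
}

for disk in /dev/sata1 /dev/sata2 /dev/sata3 /dev/sata4; do
    [ -e "$disk" ] || continue
    t=$(temp_of "$disk")
    if [ -n "$t" ] && [ "$t" -gt "$THRESHOLD" ]; then
        # Any output makes Task Scheduler's email-on-run-details fire;
        # synodsmnotify additionally posts a DSM desktop notification.
        echo "WARNING: $disk at ${t}C (limit ${THRESHOLD}C)"
        synodsmnotify @administrators "Disk temperature" "$disk at ${t}C"
    fi
done
```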



C2 Password issues with PayPal
Networking & security

I hope this is the right subreddit, if not I apologize, and please orient me toward the right one!

I've been using C2 Password for some time and it used to work fine, but recently it stopped working properly on PayPal. It shows two issues:

  • When I'm on the login page, it doesn't fill in the password anymore; I have to manually copy/paste it, while the email address fills in just fine. If I go to the C2 Password extension menu and click "open and fill" on my PayPal entry, it opens the login page but, again, only fills in the email address.

  • When I'm creating an invoice and go to fill in the "object" field, it immediately shows a drop-down menu offering a randomly generated password, which not only makes no sense but also hides the objects I saved and need to use. I tried telling C2 Password NOT to do anything on the create-invoice page, but it still shows up.

I'm on desktop, using Opera GX. It started happening after my computer got factory reset. Does anyone know how I could fix it?



Hyper Backup question and permissions on the remote node
NAS Apps

So here is a stupid question

My friend and I both have Synology NASes, and we have discussed backing up each other's critical data using Hyper Backup (over the internet).

Before I dig deep into the process, can someone tell me if that means I will have access to his data (the backup AND all of his NAS), and he will have access to mine?

Asking because I will need write permission on his NAS, and he on mine.


Synology Apps Alternatives List
NAS Apps

Are you ready to switch? Not yet! I'm currently looking for alternatives to all the Synology apps, and creating a list for the community. I'll get started.

Alternatives should be open source (free) and ideally Docker-based, with mobile apps for iOS and Android and a web view or app for desktop.

DS Video -> Jellyfin; (Plex); Emby

DS Note -> Joplin; Obsidian

DS Audio -> Audiobookshelf for audiobooks; Navidrome; Subsonic

Photos -> Immich; (Photoprism)

DS File -> Filebrowser Webapp; Nextcloud; Seafile

DS get -> qBittorrent with a skin; AriaNg; JDownloader

Quickconnect -> Twingate; Tailscale

Chat -> Jitsi; Rocket.Chat (not open source)

Synology Drive (synchronize PC <-> NAS) -> Parachute Backup; Nextcloud; Seafile

Hyperbackup -> (Duplicati)

Container Manager -> Portainer; Dockge; Lighthouse for updates

MediaServer DLNA UPnP -> Asset

List of Unique Opensource Apps Nice to have things: Paperless-ngx; Mattermost; Cosmos-Server

List of possible base OS systems: TrueNAS; Unraid; OpenMediaVault; FreeNAS; Rockstor; Omarchy

I will update the list as comments come in! Comments and ideas are welcome.




SA3400 + RX1217 Compatibility?
NAS hardware

Has anyone used an RX1217 with an SA3400? I'm having trouble getting a clear answer on whether they work together. I have all SAS drives in my SA3400 (and in an attached RX1222sas), but I am wondering if I can use a non-SAS RX1217 as an expansion unit without issue. Used RX1217s are about 1/5th the price of a new RX1222sas.

Has anyone done this, or have any info to share? What am I realistically losing by using an RX1217 vs an RX1222sas?


DSM 7.3 will ship with VMM 2.8 which includes a critical QEMU version bump.
NAS Apps

Anyone relying on VMM for Windows VMs has likely encountered a reboot bug where Windows never finishes booting. This was a known issue in the version of QEMU included in VMM 2.7. I've had a ticket open about the issue, and they just told me VMM 2.8 for DSM 7.3 will upgrade QEMU to a version that fixes the boot hang. I don't have any details on the release date or exact version number. The SA6400 is more affected by this than other models for some reason, which means the target audience is going to be quite small. But it's useful information nonetheless.


Can't get OpenVPN to route
Networking & security

I'm here admitting defeat - I've messed with this on and off for weeks and just can't figure it out. I had OpenVPN working once, but two things happened: I reloaded my firewall (OPNsense), possibly forgetting a config, and I also changed my network IP schema (from 192.168.0.0/24 to /17). Weeks later I realized I had broken the VPN.

I think *somehow* I previously had OpenVPN configured to use the same network/subnet as my Synology/LAN, so it worked, because I recall that when I changed the IP schema I went into the config and it complained that the OpenVPN network couldn't be the same as DSM's.

Anyway, I can connect to the VPN from outside my house, so the firewall port forwarding, auth, etc. are working. However, I can't access any resources on the LAN once connected, either by IP or hostname, including DSM.

  • My current Synology is static IP 192.168.2.30/17.

    • It is set to default gateway.

  • My OpenVPN IP is set to 10.10.0.1.

    • No options are checked in the config except 'allow clients to access server's LAN'

    • I tried 172.16 but switched because I have a docker container with an unused 172.16 network bridge I can't seem to remove

  • DSM firewall is disabled

  • No static routes are defined (I have not tried adding one....)

  • 'Enable multiple gateways' in network config, I've tried checked and unchecked.

I don't see how any (OPNsense) firewall rule, routing, or NAT settings would impact this - I assume that when 'allow clients to access server's LAN' is enabled, DSM is doing the routing... or NATing... I'm not sure, but I thought it just worked before.

Thanks kindly.

https://preview.redd.it/cant-get-openvpn-to-route-v0-fozsi294xssf1.png


Back up multiple external drives to Synology
Solved

Hello, total noob here at my wits' end trying to figure out how to do this. I want to manually back up about 20 different hot-swap external drives to the NAS. I just want to be able to back them up whenever I do work on a particular drive and see them in Synology as their own individual drives. I can't be the only person who does this, but after hours of deep dives into forums and tutorials I cannot find this particular usage scenario.


NAS Certificate generated with "Taipel" instead of "Taipei"
Solved

I went to log into my DS420 NAS today and Firefox warned me of a new certificate. I examined the cert, which was indeed issued today, with an expiry of a year from now, but it shows this:

Subject Name C (Country): TW L (Locality): Taipel O (Organization): Synology Inc. CN (Common Name): synology

Issuer Name C (Country): TW L (Locality): Taipel O (Organization): Synology Inc. CN (Common Name): Synology Inc. CA

I'm pretty sure Taipel isn't a place, and that Synology is actually based in Taipei. Any ideas what's going on here? I'm going to hold off logging into the device until I can figure out what's happening. Could anyone else whose cert has recently renewed itself check to see what theirs says?
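For anyone wanting to compare, the cert subject can be dumped without logging in through a browser. A sketch that creates a throwaway self-signed cert with the correctly spelled locality and prints its subject; the commented line shows the equivalent check against a live NAS (the address is a placeholder):

```shell
# Live check would be (placeholder address):
#   echo | openssl s_client -connect 192.168.1.10:5001 2>/dev/null | openssl x509 -noout -subject -issuer
# Demo: self-signed cert with the expected subject fields.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/C=TW/L=Taipei/O=Synology Inc./CN=synology" 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -subject   # L should read Taipei, not Taipel
```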


I reverse-engineered Synology Photos permissions and built scripts to sync them with filesystem ACLs
NAS Apps

TL;DR: Built automated scripts that align Synology Photos user permissions with actual filesystem ACLs, solving the security gap where SAMBA users can access photos they shouldn't see.

Github: https://github.com/vchatela/synology-photos-shared-permissions

Note: back up, back up, and back up again before running these, in case of any permission issues.

The Problem

Anyone else frustrated by this Synology Photos security issue?

  • In Photos app: Users only see folders you've shared with them ✅

  • Via SAMBA/SMB: Same users can see ALL photos in /photos folder ❌

This happens because Synology Photos uses its own database for permissions, completely ignoring filesystem ACLs.

My Solution

I reverse-engineered the synofoto PostgreSQL database and built a complete automation suite:

Core Scripts:

  • export_permissions_json.sh - Extracts all permissions from Photos database to JSON

  • sync_permissions.sh - Syncs individual folder permissions to filesystem

  • batch_sync.sh - Processes all shared folders system-wide

  • permission_audit.sh - Validates everything is aligned correctly

  • nightly_sync_audit.sh - Automated scheduling with email alerts

Automation & Monitoring:

Automate it following the README and you'll have a nightly schedule, email alerts on issues, and zero maintenance.

I've been running it for 60 days without any trouble.
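To illustrate the core idea (this is a concept sketch, not the repo's actual scripts, which use synoacltool and the synofoto database): the fix boils down to making filesystem permissions on each album folder at least as strict as what the Photos database grants. A minimal sketch with plain POSIX modes and a demo path:

```shell
# Concept only: deny "other" users at the filesystem level so SMB
# can't expose what Photos doesn't share. The demo path stands in
# for a /volume1/photo subfolder; real DSM shares use Synology ACLs.
FOLDER="/tmp/photos_demo/private_album"
mkdir -p "$FOLDER"
chmod 750 "$FOLDER"      # owner rwx, group r-x, everyone else: nothing
stat -c '%a' "$FOLDER"   # prints 750
```

On a real DSM volume you'd express the same rule as per-user synoacltool entries, which is what the export/sync scripts automate.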

Real-World Use Case: Immich Integration

This is a game-changer for Immich deployments:

  • Deploy Immich with specific user credentials

  • Each user's Immich instance only sees their authorized photos

  • No more worrying about users accessing others' private photos

  • Perfect alignment between Photos app and external tools

Anyone having issues or questions - happy to discuss!

Valentin





Safari not opening pages behind Synology Reverse Proxy
DSM

Hi,

this really drove me crazy so I thought I'd write something about it in case it happens to you.

I'm using lots of Docker containers on my Synology. I create reverse proxy hosts for them. Because I like to encrypt all the things, most of my containers are using a certificate that's using just their hostname. No wildcards, no SAN. I have a public domain name and I am using a DDNS service to create subdomains matching the names on the certificates.

Because I also like multi-layered security approaches, I use a profile for the reverse proxy that only allows traffic from my VPN and my internal network to reach the docker containers/reverse proxy hosts.

Using Safari, this sometimes does not work. It may have worked yesterday for container A, but not for container B. Today the story might be different. Chrome never has an issue and always shows the webpages of my containers. Disabling the profile for VPN and internal network, allowing all traffic, fixes the issue - but that's just a workaround, not a solution.

Hours and hours of debugging, reinstalling, scratching my head and having a beer just in case didn't help, until I noticed the "Hide my IP from trackers" checkbox in Safari. This actually makes Safari use Apple's Private Relay feature, causing traffic to leave my internal network, going somewhere in Apple-Land and returning to my router using my public IP address - which is not allowed to pass the reverse proxy.

TL;DR: Disabling "Hide my IP address from trackers" may fix strange reverse proxy issues with Safari.

(Yes, this could also be a posting in r/Safari or some place like that.)

Hope this helps,

Stefan



Used Drive ShareSync to sync a photo folder on 2 NAS's. File Station shows they are synced. However, when comparing with Windows File Explorer I get different results.
NAS Apps

I have 2 NAS devices which are remote from each other. Both are on DSM 7.2.1. I recently used Drive Sharesync to sync a photos folder between the 2 NAS's. The folder has over 12,300 files (33 folders and 330GB) so the process was slow. When complete, using File Station I confirmed that both NAS devices have the exact same number of files, folders and bytes. However, when I compare the results to what I see on my Windows laptop, using File Explorer, which is connected to one NAS, I am seeing different information.

File Station detail: 355,332,131,613 Bytes, 12,305 Files, 33 Folders

File Explorer detail: 355,359,219,712 Bytes, 12,348 Files, 33 Folders

Reading up on possible issues, I saw that hidden files could be an issue. I thought that perhaps File Explorer (which has more items) might have some hidden files that were not being reflected in File Station. I first tried to find the option in File Station to 'Show Hidden Files' and determine if it was switched on or off, but I could not find that option anywhere... despite many suggestions in my online searches! So, going to File Explorer, I turned off the show hidden items (View -> Show-> Hidden Items) and reran the Properties. The results did not change which I assume means that there are no hidden items.

Any thoughts on why I seem to be missing 43 files between the File Station view versus the File Explorer view? Interestingly, I just tried to compare another folder on my NAS, which I did not sync, using both File Station and File Explorer and again I have a difference, as follows:

File Station detail: 65,198,564,107 Bytes, 182,867 Files, 124,106 Folders

File Explorer detail: 65,582,997,504 Bytes, 182,904 Files, 124,107 Folders

I would like to understand why there are differences so that I can make sure I am not missing data.
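If you have SSH access, one way to test the hidden-file theory from the NAS side is to count files the way File Station likely does (skipping dotfiles and Synology's @eaDir metadata folders) versus counting everything. A sketch against a throwaway demo folder; point DIR at your real share to compare:

```shell
# Demo folder standing in for a photo share; @eaDir holds Synology
# thumbnails/metadata and .DS_Store is a hidden Finder file.
DIR="/tmp/count_demo"; mkdir -p "$DIR/@eaDir"
touch "$DIR/a.jpg" "$DIR/b.jpg" "$DIR/.DS_Store" "$DIR/@eaDir/thumb.jpg"
find "$DIR" -type f | wc -l    # counts everything: 4
# skip @eaDir trees and dotfiles, roughly what File Station shows:
find "$DIR" -path '*/@eaDir' -prune -o -type f ! -name '.*' -print | wc -l    # 2
```

Diffing two sorted find listings (one per counting rule) from your real share would name the exact 43 extra files.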





After DSM 7.3, 25+ series models support third-party HDDs.
NAS hardware

https://kb.synology.com/en-us/DSM/tutorial/Drive_compatibility_policies

At the same time, with the introduction of DSM 7.3, 2025 DiskStation Plus series models offer more flexibility for installing third-party HDDs and 2.5" SATA SSDs when creating storage pools. While Synology recommends using drives from the compatibility list for optimal performance and reliability, users retain the flexibility to install other drives at their own discretion.

  • For Plus series only. The FS, HD, SA, UC, XS+, XS, DP, RS Plus and DVA/NVR series remain restricted.

  • M.2 SSD cache remains restricted to the compatibility list.


SpaceRex (youtube Synology personality) received a press release that Synology is reversing its hard drive restriction policy
NAS hardware

SpaceRex (youtube Synology personality) received a press release that Synology is reversing its hard drive restriction policy.

I'm really glad about this.

https://youtu.be/dltc_PLvopI?si=yJ1gRCjDEjk3Vv7b

Update requires 7.3 DSM





I'm trying to block access to a specific Synology app from being accessible outside of our building. I can't figure out why it isn't blocking access.
Networking & security

On the Synology server, I open the users' setting, go to Applications and select "By IP" and add the IP for our staff Lan and WIFI to the list. The Blocked tab is empty, but I don't know what to put in there to block everything else... I thought it would just see there is an allow list and only go off that and block everything else, but it does not.

If I just check the 'deny' box, I cannot also check the 'By IP' box. This is for domain users, and I have tried settings for both the user and the users group.

Basically, I need to block a specific user from being able to log into the chat app from outside the building and can't figure out what I am doing wrong, any help would be greatly appreciated. I am running a DS3018xs if that matters.


DSM 7.3 will be the last version for the following devices
NAS hardware

DSM 7.3 is an LTS release! It goes EOL in October 2028.

--

FS Series: FS3017, FS2017, FS1018

XS Series: DS3018xs, DS3617xs, DS3617xsII, RS18016xs+, RS18017xs+, RS3617xs, RS3617RPxs, RS3617xs+, RS3618xs, RS4017xs+, RS819, RS217

Plus Series: DS216+, DS216+II, DS716+, DS716+II, DS916+, DS718+, DS918+, DS1019+, DS218+, DS1517+, DS1817+, DS1618+, DS1819+, DS2419+, DS2419+II, RS2416+, RS2416RP+, RS2418+, RS2418RP+, RS2818RP+, RS818+, RS818RP+, RS1219+

Value Series: DS116, DS216, DS216play, DS416, DS416play, RS816, DS118, DS218, DS218play, DS418, DS418play, DS1517, DS1817

J Series: DS216j, DS416j, DS416slim, DS419slim, DS418j, DS218j, DS119j

--

Release notes

https://www.synology.com/en-us/releaseNote/DSM?model=DS1821+%2B#7_3

--

DSM 7.3 Download for all models (It's a staged rollout, download the update here to install it "early".)

https://archive.synology.com/download/Os/DSM/7.3-81180

--

Support whitepaper: https://global.download.synology.com/download/Document/Software/WhitePaper/Os/DSM/All/enu/Software_Life_Cycle_Policy_enu.pdf


DS215j just for off site backups?
NAS hardware

I have just obtained a DS215j with 2 x 4TB drives in it. I am aware that it is EOL, but would it be OK to use it just as an off-site backup for my DS723+?

I am aware that the DS215j cannot be updated past DSM 7.1.1-42962 Update 9.

I have also read that Hyper Backup can only be used for folders and packages, not the entire system.

Just thought I would ask before I take the time and effort to set it up.


Advice on potential upgrade
Solved

Hi all, I've been using a DS213+ for years as a fairly basic file backup. I also store files on there for use through Plex, but it doesn't support running Plex Server natively, so I've tried several alternatives for that with varying degrees of success.

As my drives are fairly full and it's an old piece of hardware (although still running perfectly) I'm considering an upgrade, but I'm not an expert and the options seem pretty vast.

So, I need a way to simply backup files to the NAS from local machines on my network. I use Time Machine on my Mac to automatically backup one laptop, but Synology's Active Backup looks useful for family use.

I also ideally would like to run Plex Server directly on the NAS.

Finally, I've been using a cloud backup service for years for photos etc, would love to stop paying that if a NAS was a good alternative (though there are obviously limitations to this because it wouldn't protect me from fire / flood etc).

So given all of these uses, but in a home setting, I've looked at the 223, but also see some benefits from 224+. 423 gives me 4 bays but not sure if that's overkill...

Would appreciate any thoughts on things I should consider - there may be other benefits to the upgrade I haven't considered. Or should I just buy bigger disks for the 213 and stick with it?


DS1821+ USB failed - Synology asking to send it to Germany at my own cost?
NAS hardware
  1. Decided on Synology because I wanted it to just work and have somewhere to go if things stop working

  2. Bought from box.co.uk, December 2022.

  3. Box.co.uk goes into administration.

  4. External HDD plugged into the rear USB stops responding.

  5. The HDD works plugged into the front port or into a PC, and a few USB sticks show the same behavior - the rear USB ports seem dead.

  6. Synology support confirms unit is still under warranty and asks to RMA with seller.

  7. Synology support offers to handle the RMA since box.co.uk entered administration, but asks me to ship to Germany at my own risk and expense (and be without a NAS for all that time).

Anyone seen this before?
Normally, if Box were still trading, I would have strong statutory rights, but since they are not (they resurrected, but as a new legal entity) and I didn't pay with a credit card, I have next to no rights with Synology. No idea what to do.



Addressing the DSM 7.3 elephant in the room
Solved

I've only seen 2 people mention the table on the latest version of Synology's Drive_compatibility_policies page (Robbie from NASCompares and Luka from Blackvoid) but I don't think anybody has mentioned that DSM 7.3 actually ADDS restrictions to some NAS series. I've been waiting for someone to point out what it actually means, but nobody has so here's my take on it.

EDIT I just noticed u/nascompares posted about it 2 days ago here.

DSM 7.3 is a good update for x25 plus owners (current and future) but it's a slap in the face for existing owners of RS Plus and DVA/NVR series and especially FS, HD, SA, UC, XS+, XS, and DP series owners.

  1. DSM 7.3 has added restrictions on any drive "on the incompatibility list" for all NAS models.

  2. DSM 7.3 has added the hated "x25 plus series" restrictions to RS Plus and DVA/NVR series.

  3. DSM 7.3 has added the hated "x25 plus series" restrictions and restrictions on SATA SSDs to FS, HD, SA, UC, XS+, XS, and DP series.

  4. DSM 7.3 has removed the hated "x25 plus series" restrictions (except for NVMe cache creation) for x25 plus series.

  5. DSM 7.3 hasn't changed anything for Value and J series (apart from point 1).

It's like Synology thought they'd appease the x25 plus owners, while sneaking in the hated restrictions for existing RS Plus, DVA/NVR, FS, HD, SA, UC, XS+, XS, and DP series.

Someone actually told me months ago that points 2 and 3 were coming before the end of this year.




Thoughts on Extended Support glitch
NAS hardware

When the 2025 models came out and there was some uncertainty about supporting drives etc. I thought to future proof a bit and invested in 4x8TB Synology HATs for my storage upgrade.

I had spent a bit of money and had already added 2x Synology SSDs for a RW cache, and then thought it was probably worth taking the extended support... but no dice: the web page told me the unit was ineligible, so I put this down to the fact that the DS925+ had come out and they wouldn't offer it on the DS923+, which I bought last year.

Time moved on, the 7.3 update was announced, and I thought to try going for extended support again - and it let me do it. I submitted a screenshot of the Amazon order from 2024 when I got the unit, it was processed, and I now have extended support... until 2030, apparently.

Then I noticed some messages that said you have to buy extended support within 90 days. I'm really not sure where that leaves me now after the initial 3 years... I mean, they accepted my order, but it looks like they treated it as if I had just purchased the DS923+, based on the 2030 expiry, rather than 2029 based on the purchase date.

Any particular thoughts on this?



DS1821+ disconnected from my network
Solved

Well this was fun. My DS1821+ dropped off the network while I was traveling. The last Active Insight report shows volume utilization suddenly spiking to 100 percent right before it went offline. No major activity was running, no backups or transfers, nothing that should have filled the drives. Other machines on the same network are fine, but the router cannot see the Synology at all now.

I cannot fix it remotely, so I am not asking for help, just opinions on what might have happened. I checked for intrusions or weird login activity and there was nothing. A couple of Docker containers were running and Plex server was up, but CPU usage was very low, total storage usage was only 50%, and nothing out of the ordinary seems to have been happening. The Synology is configured with SHR-2, so 3 drives would have had to fail to cause problems.

What do you think this points to, a filesystem or RAID event rather than a literal full disk situation? Curious to hear what others have seen with similar symptoms.



Transferring an Apple .photoslibrary to a FlashStation failing. Need help troubleshooting.
Networking & security

Hi folks! Looking for some help with transferring a large "package" file from Mac OS to an all SSD FlashStation over 10GBe. I am trying to move my 200GB Apple photos library to be stored on my Synology FlashStation as my Mac Studio's internal drive is pretty small and I can then have a backup infrastructure setup to store my iPhone photos off the cloud as well.

I have tried copying the file to the FS using a file copier "Off Shoot" (https://hedge.co/products/offshoot), which is designed for mission-critical file transfers, and using the Finder, but repeated attempts using both methods fail. The transfers start out with a very quick estimated time of completion (several minutes) but then extend to 1 hr, several hours etc., and then eventually fail. I have a utility, Auto Mounter, to remount the SMB share of the FS just in case it unmounts. Normally I am able to move jobs of several TBs from this Mac over to the FS, but this "package" .photoslibrary keeps failing after transfer times extend into the hours.

Any other utilities I should try or something else I can be looking to do? Also, if it isn't advisable to store my Apple Photos library on the NAS, please chime in!

Edit: I just saw the below guide and perhaps I'm going about this the wrong way and need to be doing this below: https://www.reddit.com/r/synology/comments/10k9a0l/the_idiots_guide_to_syncing_icloud_photos_to/

Thanks much!



Download Station hacked or a glitch in the Matrix?
NAS Apps

So this is what happened:

  • I suddenly lost the connection to my NAS. I couldn't reach it via SMB or DSM's web interface.

  • I unplugged the network cable and put it back in, and after a while I could reach my NAS again.

  • I noted some strange things, one of my VMs couldn't start for example, so I rebooted my NAS.

  • After the restart I could start my VM again, but now I noticed that all downloads were gone in Download Station and the volume used for downloads had 100% free disk space (was almost full before). So, all downloads had been wiped.

So confused about why it happened I started investigating and had a look at dmesg and found this:

> TCP: request_sock_TCP: Possible SYN flooding on port 16881. Sending cookies. Check SNMP counters.

Hold on, that's the port that "Download Station" is using for BT. So I asked my friend (AI) and got the following answer:

"indicates that the system is receiving a high volume of connection requests on port 16881, possibly a denial-of-service (DoS) attack, and is using SYN cookies to mitigate it"
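As an aside, the kernel's suggestion to "Check SNMP counters" can be followed directly over SSH; the syncookie counters are generic Linux, not Synology-specific:

```shell
# Cumulative SYN-cookie counters (SyncookiesSent/Recv/Failed) live in
# /proc on any Linux kernel, DSM included; a nonzero sent count lines
# up with the dmesg warning:
grep '^TcpExt' /proc/net/netstat
# same counters by name, if the nstat utility is present:
nstat -az 2>/dev/null | grep -i syncookie || true
```

Worth noting that a swarm of BitTorrent peers all reconnecting after downtime can look exactly like a SYN flood on the BT port, so the message alone isn't proof of an attack.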

It seems that Download Station is using Transmission 2.93, but looking at the release notes, there have been several security vulnerability fixes since then.

What do you guys think? Could it be that someone found a way (a vulnerability) to perform a RCE attack to wipe all data? Or do you think that this is a bug in Download Station?

Upon starting Download Station again (after the restart), I had to set "Temporary location" again. All other settings were intact. The app is installed on the same volume that is used for temporary data, so it doesn't look like the configuration was wiped, just the downloads, even though the same volume is used for both.

Should I be worried or was this just a glitch in the Matrix?





Using Synology (DS920+) as OIDC server with restricted users
NAS Apps

I have SSO Server installed on my DS920+ and wanted to use it as an OIDC auth provider for some self-hosted services, but I can't see any way to limit the users that have access to each application. For example, I have ActualBudget running in a docker container, and it will allow access to anyone authenticated, but I don't want to allow all users access to that application.

ActualBudget is currently protected by Authentik (in a docker container on the NAS) but it seems a bit CPU heavy even when it should be completely idle, and the disks seem to be constantly in use when there should really be no load.

Can the SSO Server package limit users? Or am I misunderstanding the purpose of the package?

I also installed the OAuth Service, but not sure how that is supposed to work because the window doesn't show any way to add a new application.


Upgrading to DS1525+: advice on syncing data to new unit
DSM

I have a new chassis and I'm wondering the best way to sync my files. I currently have everything on the new unit with a shared folder rsync task. I assume that should be fine, but my concern is that I have several Veeam backup jobs pointed at those folders. I'm worried that when I update the backup jobs I will run into permission issues, since rsync keeps all of the permissions of the old NAS.

Any advice or resources you guys think would help me here?






Looking for Recommendations on Synology NAS to use with WD121KFBX Drives
NAS hardware

I would like to purchase a Synology NAS for home file storage use. I have five Western Digital 12 TB Red Pro drives (WD121KFBX) that I would like to use with the NAS. I was looking at getting the DiskStation DS1525+, but when I looked at the compatibility list, the only drives listed are the ones Synology sells. My understanding is that they only list drives that they have specifically tested, but I am concerned that there may be something in the Synology firmware that blocks the use of the particular drives I have (or other models).

This concern comes from reading posts and reviews where multiple people have mentioned that they couldn't use certain drives that were not on the verified drive list.

Has anyone used the WD121KFBX with a DS1525+?

https://preview.redd.it/looking-for-recommendations-on-synology-nas-to-use-with-v0-49w1yflv9ksf1.png




After 15 years of archiving my family data on my (third) 2-bay Synology NAS with SHR-1 with regular data scrubbing/health tasks, I discovered: checksum option was likely never enabled. How bad is this and what do I do next?
NAS hardware

I partly blame Synology for not enabling the checksum option by default or emphasizing the importance of data scrubbing with checksums.

  1. My Synology DS220+ NAS is functioning well, but I’m unsure how much data degradation has occurred over 15 years. Can I identify any damage?

  2. To address this, do I need an external drive, or can I resolve it within my existing setup?

https://preview.redd.it/after-15-years-of-archiving-my-family-data-on-my-third-2-v0-xsad813x8krf1.png

  • Synology DS220+ (dual-bay)

  • Volume 1: Btrfs; storage pool: one pool, SHR-1

  • Affected shared folder: 3.6 TB total, 700 GB free

Thanks!
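On question 1: if checksums were never enabled, there is no historical record to compare against, so past bit rot can't be detected after the fact. What you can do in place, without an external drive, is baseline everything now so any future degradation is detectable. A sketch with demo paths:

```shell
# Build a checksum baseline (demo path stands in for /volume1/yourshare)
# and verify against it on later runs; future corruption shows as FAILED.
DATA="/tmp/family_demo"; mkdir -p "$DATA"; echo photo > "$DATA/img001.raw"
find "$DATA" -type f -exec md5sum {} + | sort -k2 > /tmp/baseline.md5
md5sum -c /tmp/baseline.md5    # lists each file as: OK
```

As for the Btrfs checksum option itself, as far as I know it can only be set when a shared folder is created, so the usual route is to make a new shared folder with the data-integrity option enabled and move the data across.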



How to migrate data and user from DS220j to DS1522+?
NAS hardware

Hi everyone,

I recently got a DS1522+ and would like to migrate my data from a DS220j. I didn’t buy new drives and would like to continue using the existing ones (2x 4TB). They are currently running in SHR with ext4 on the DS220j, with 1.5TB used out of 3.6TB available.

I’d like to use btrfs on the DS1522+, which means a simple HDD migration (plug & play) is not an option since the file system would need to be changed.

  • Migration Assistant doesn’t work because there are no drives installed in the DS1522+ yet.

  • Hyper Backup requires double the storage space of the currently used data.

Right now, I’m not sure what the best way is to use my existing drives with the new file system without losing my data. Ideally, I’d also like to keep all system settings and user accounts.

I do have an internal 2TB drive that should (in theory) be able to hold all my current data. Would it be possible to:

  1. Put the 2TB HDD in the DS1522+ and copy all my data onto it,

  2. Then reformat the 2x 4TB drives with btrfs in the DS1522+,

  3. And finally expand the storage pool with the 4TB drives?

Any advice on the safest migration path would be greatly appreciated!


Upgrading the Ethernet in my Synology DS1821+ to 2.5GbE, does a compatible 2.5GbE PCIe card exist or should I get the 10GbE card from Synology?
NAS hardware

I just upgraded my Thunderbolt dock to the CalDigit TS4 which has 2.5GbE on it. So now I'm thinking to add 2.5GbE to my DS1821+ and upgrade my network switch.

Upgrading the switch seems easy enough, but as for the DS1821+, all I can find is people using a USB adapter and installing unsupported drivers through SSH, or just using the 10GbE PCIe card - nothing about adding a third-party 2.5GbE PCIe card, which would be much, much cheaper.

Does it exist? Are there 2.5GbE cards that work without installing drivers?

While I'm hoping to get a CalDigit TS5 Plus in the future (Which has 10GbE) I don't want to shell out on a 10GbE switch and I don't use my NAS in a way that would saturate it anyway. 2.5GbE would be perfect because I think my hard drives would just about saturate it (Based on when I plug things into the USB ports and look at the transfer speed).

Should I just bite the bullet and get that 10GbE card from Synology? The funny thing is, I bought the DS1821+ recently to stick with Synology for a little while longer and avoid the drive restrictions... imagine if they hadn't pulled that shit, I could have a DS1825+ with 2.5GbE built in.