Synology Drive Client for Linux has a data-loss bug Synology refuses to fix; here’s a workaround

March 17, 2021

I use GnuCash to track my finances. I run GnuCash on three different computers: two Linux and one Mac. For a long time I was using a shell-script wrapper to sync my GnuCash data file between the computers when launching GnuCash, but I recently decided to store the file on my Synology NAS and synchronize it between computers using Synology Drive Client.

Unfortunately, I quickly noticed a significant problem: when I edited my GnuCash data on Mac, it was successfully synchronized onto the NAS as soon as I saved it, but when I edited on Linux, it wasn’t. Then, the next time I edited and saved on Mac, Linux decided there was a conflict between the edited version it had and the updated version sent over from the Mac, so it uploaded its conflicting version onto the NAS, and suddenly I was faced with two different, divergent versions of my GnuCash data file. I then had to merge these by hand, figuring out all the changes in both files from their common ancestor and merging them into one file to avoid losing data. Even worse, if I edited on Linux 1, then edited on Linux 2, then edited on the Mac, I was ending up with three conflicting versions of the data file, with three different sets of changes. Oy!

The root cause of this is actually quite straightforward: on Linux, when a hard link is created within a Drive Client folder, the client does not notice the hard link or upload the file to the NAS. When GnuCash saves a modified data file on Linux, it first saves the file under a temporary file name, then deletes the older version of the file with its “real” file name, then creates a hard link from that name to the temporary file, then deletes the temporary file.
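To make the failure concrete, here's a minimal sketch of that save sequence using only the standard library (the function and file names are mine, not GnuCash's):

```python
import os

def save_like_gnucash(data_file, contents):
    """Mimic GnuCash's save sequence: write a temporary file, delete the
    old data file, hard-link the real name to the temp file, then delete
    the temporary name. The hard-link step is the one Drive Client on
    Linux fails to notice."""
    tmp_file = data_file + '.tmp'       # hypothetical temp name
    with open(tmp_file, 'w') as f:
        f.write(contents)
    if os.path.exists(data_file):
        os.remove(data_file)            # delete the old version
    os.link(tmp_file, data_file)        # hard link under the real name
    os.remove(tmp_file)                 # remove the temporary name
```

Drive Client sees the temp file appear and the old file disappear, but never notices the hard link that recreates the data file, so the saved version is silently left unsynced.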

The macOS version of Drive Client does not have this bug. The Linux version of the Dropbox Client does not have this bug.

I reported this problem to Synology Support. Even after I explained to them exactly what the problem is and even explained to them how to reproduce it easily, they refused to acknowledge that the behavior is incorrect or commit to fixing it.

To work around this issue, I wrote a Python script which scrapes the list of sync directories from the Drive Client SQLite database, sets up watchers for files created within those directories, and every time it detects that a file has been created, it updates the timestamp on the file, which tricks Drive Client into noticing the file and synchronizing it to the NAS.
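The "touch" at the heart of the workaround doesn't need to change anything visible: re-applying a file's existing timestamps is enough to generate a filesystem event that Drive Client does react to. A minimal sketch of the trick (the function name is mine):

```python
import os

def touch_in_place(path):
    """Re-apply a file's existing timestamps without changing them.
    The contents and mtime stay identical, but the metadata write
    generates an inotify IN_ATTRIB event, which is enough to get
    Drive Client to notice the file and sync it."""
    st = os.stat(path, follow_symlinks=False)
    os.utime(path, ns=(st.st_atime_ns, st.st_mtime_ns))
```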

Here’s the script:

#!/usr/bin/env python3

import inotify.adapters
import logging
import logging.handlers
import os
import sqlite3
import stat
import sys
import threading

logger = None
sys_db_path = os.path.expanduser('~/.SynologyDrive/data/db/sys.sqlite')


class TaskWatcher(object):
    def __init__(self, path):
        if path.endswith(os.path.sep):
            path = os.path.dirname(path)
        logger.info('Starting watcher for {}'.format(path))
        self.path = path
        # Drive Client's private working directory inside each sync
        # folder; events under it are ignored.
        self.synology_dir = os.path.join(self.path,
                                         '.SynologyWorkingDirectory')
        self.obsolete = False
        self.inotify = inotify.adapters.InotifyTree(self.path)
        self.thread = threading.Thread(target=self.watch, daemon=True)
        self.thread.start()

    def watch(self):
        for event in self.inotify.event_gen(yield_nones=False):
            if self.obsolete:
                logger.info('Exiting obsolete watcher for {}'.format(
                    self.path))
                return
            (_, type_names, path, filename) = event
            if path == self.synology_dir:
                continue
            if 'IN_CREATE' not in type_names:
                continue
            full_path = os.path.join(path, filename)
            try:
                stat_obj = os.stat(full_path, follow_symlinks=False)
            except Exception:
                # The file may already have been renamed or deleted.
                continue
            if not stat.S_ISREG(stat_obj.st_mode):
                continue
            logger.info('Touching {}'.format(full_path))
            try:
                # Re-applying the existing mtime changes nothing visible
                # but generates an event that Drive Client notices.
                os.utime(full_path, times=(stat_obj.st_mtime,
                                           stat_obj.st_mtime))
            except Exception as e:
                logger.info('Failed to touch {} ({}), continuing'.format(
                    full_path, e))

    def wait(self):
        self.thread.join()


def find_tasks():
    conn = sqlite3.connect(sys_db_path)
    cursor = conn.cursor()
    cursor.execute('SELECT sync_folder FROM session_table')
    return list(r[0] for r in cursor)


def watch_tasks():
    watchers = {}
    for path in find_tasks():
        watchers[path] = TaskWatcher(path)
    # Watch the Drive Client database so we can rescan the list of sync
    # folders whenever its configuration changes.
    i = inotify.adapters.Inotify()
    i.add_watch(sys_db_path)
    for event in i.event_gen(yield_nones=False):
        (_, type_names, path, filename) = event
        if 'IN_MODIFY' not in type_names:
            continue
        logger.info('Rescanning tasks.')
        try:
            tasks = find_tasks()
        except Exception as e:
            logger.info('Failed to open {} ({}), continuing without it'.format(
                sys_db_path, e))
            continue
        new_watchers = {}
        for task in tasks:
            if task in watchers:
                new_watchers[task] = watchers.pop(task)
            else:
                new_watchers[task] = TaskWatcher(task)
        # Any watchers left over are for folders no longer being synced.
        for task, watcher in watchers.items():
            logger.info('Telling watcher for {} to exit'.format(task))
            watcher.obsolete = True
        watchers = new_watchers


def main():
    global logger
    logger = logging.getLogger(os.path.basename(sys.argv[0]))
    handler = logging.handlers.SysLogHandler(address='/dev/log')
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    watch_tasks()


if __name__ == '__main__':
    main()

Note that the script depends on the non-standard inotify module, which you'll have to install from your OS package manager or PyPI.

Here’s the trivial systemd unit file I use to run the script on my Linux computers as a systemd user service when I log in (obviously, you’ll have to change the path to wherever you put the script; and if you don’t understand what a user service is, perhaps you shouldn’t be trying to run this script with systemd 😉 ):

[Unit]
Description=Force hard-linked files to sync to Synology Drive

[Service]
# Adjust the path to wherever you saved the script.
ExecStart=%h/bin/synology-fix-hard-links.py

[Install]
WantedBy=default.target

Perhaps this will be useful to someone other than me! If so, post a comment or email me and let me know.
