Synology Drive Client for Linux has a data-loss bug Synology refuses to fix; here’s a workaround

March 17, 2021

[For the record, as of January 12, 2023, this bug still isn’t fixed in the current version, 7.2.1, of Synology Drive Client for Linux, more than two years after I reported it to Synology. The workaround described below still works. However, thanks to the assistance of an extremely helpful and competent Synology support engineer, I believe I have successfully convinced Synology that there is a bug here that they should fix, and they claim they’ve put it into the queue to be fixed as resources permit. So maybe we’ll get a fix at some point!]

I use GnuCash to track my finances. I run GnuCash on three different computers: two Linux and one Mac. For a long time I was using a shell-script wrapper to sync my GnuCash data file between the computers when launching GnuCash, but I recently decided to store the file on my Synology NAS and synchronize it between computers using Synology Drive Client.

Unfortunately, I quickly noticed a significant problem: when I edited my GnuCash data on the Mac, it was synchronized onto the NAS as soon as I saved it, but when I edited on Linux, it wasn’t. Then, the next time I edited and saved on the Mac, the Linux client decided there was a conflict between the edited version it had and the updated version sent over from the Mac, so it uploaded its conflicting version onto the NAS, and suddenly I was faced with two divergent versions of my GnuCash data file. I then had to merge these by hand, figuring out all the changes in both files from their common ancestor and combining them into one file to avoid losing data. Even worse, if I edited on Linux 1, then on Linux 2, then on the Mac, I ended up with three conflicting versions of the data file, each with a different set of changes. Oy!

The root cause is actually quite straightforward: on Linux, when a hard link is created within a Drive Client sync folder, the client does not notice the hard link and never uploads the file to the NAS. That matters because of how GnuCash saves a modified data file on Linux: it first writes the new data under a temporary file name, then deletes the older version of the file at its “real” file name, then creates a hard link from that name to the temporary file, then deletes the temporary file.
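To make that concrete, here’s a minimal sketch of the same save sequence using Python’s os module (the path is hypothetical; point it into a Drive Client sync folder and you can reproduce the bug):

import os

data = os.path.expanduser('~/SynologyDrive/test.gnucash')  # hypothetical path
tmp = data + '.tmp'

with open(tmp, 'w') as f:    # 1. write the new contents under a temporary name
    f.write('new contents\n')
if os.path.exists(data):     # 2. delete the old file at its "real" name
    os.remove(data)          #    (the guard just makes the sketch re-runnable)
os.link(tmp, data)           # 3. hard-link the real name to the temporary file
os.remove(tmp)               # 4. delete the temporary name

After step 3, the Linux Drive Client never notices that the file at the real name has new contents, so they are never uploaded to the NAS.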

The macOS version of Drive Client does not have this bug. The Linux version of the Dropbox Client does not have this bug.

I reported this problem to Synology Support. Even after I explained to them exactly what the problem is and how to reproduce it easily, they refused to acknowledge that the behavior is incorrect or to commit to fixing it.

To work around this issue, I wrote a Python script that scrapes the list of sync directories from the Drive Client SQLite database and sets up inotify watchers on them; every time it sees a file being created, it updates the timestamp on the file, which tricks Drive Client into noticing the file and synchronizing it to the NAS.

Here’s the script:

#!/usr/bin/env python3

import inotify.adapters
import logging
import logging.handlers
import os
import sqlite3
import stat
import sys
import threading

logger = None
sys_db_path = os.path.expanduser('~/.SynologyDrive/data/db/sys.sqlite')


class TaskWatcher(object):
    """Watch one sync folder and touch newly created files so that
    Drive Client notices them."""

    def __init__(self, path):
        if path.endswith(os.path.sep):
            path = os.path.dirname(path)
        logger.info('Starting watcher for {}'.format(path))
        self.path = path
        self.synology_dir = os.path.join(self.path,
                                         '.SynologyWorkingDirectory')
        self.obsolete = False
        self.inotify = inotify.adapters.InotifyTree(self.path)
        self.thread = threading.Thread(target=self.watch, daemon=True)
        self.thread.start()

    def watch(self):
        # Runs in a daemon thread; touches every regular file created
        # anywhere under the sync folder.
        for event in self.inotify.event_gen(yield_nones=False):
            if self.obsolete:
                logger.info('Exiting obsolete watcher for {}'.format(
                    self.path))
                return
            (_, type_names, path, filename) = event
            if path == self.synology_dir:
                # Ignore Drive Client's own working directory.
                continue
            # Hard links arrive as IN_CREATE events.
            if 'IN_CREATE' not in type_names:
                continue
            full_path = os.path.join(path, filename)
            try:
                stat_obj = os.stat(full_path, follow_symlinks=False)
            except Exception:
                continue
            if not stat.S_ISREG(stat_obj.st_mode):
                continue
            logger.info('Touching {}'.format(full_path))
            try:
                # Rewriting the timestamps (to their existing values) is
                # enough to make Drive Client notice the file.
                os.utime(full_path, times=(stat_obj.st_mtime,
                                           stat_obj.st_mtime))
            except Exception as e:
                logger.info('Failed to touch {} ({}), continuing'.format(
                    full_path, e))

    def wait(self):
        self.thread.join()


def find_tasks():
    # Scrape the list of sync folders from Drive Client's database.
    conn = sqlite3.connect(sys_db_path)
    try:
        cursor = conn.execute('SELECT sync_folder FROM session_table')
        return [r[0] for r in cursor]
    finally:
        conn.close()


def watch_tasks():
    # Start a watcher per sync folder, then watch the Drive Client
    # database itself so the watcher list can be rescanned whenever
    # the sync tasks change.
    watchers = {}
    for path in find_tasks():
        watchers[path] = TaskWatcher(path)
    i = inotify.adapters.Inotify()
    i.add_watch(sys_db_path)
    for event in i.event_gen(yield_nones=False):
        (_, type_names, path, filename) = event
        if 'IN_MODIFY' not in type_names:
            continue
        logger.info('Rescanning tasks.')
        try:
            tasks = find_tasks()
        except Exception as e:
            logger.info('Failed to open {} ({}), continuing without it'.format(
                sys_db_path, e))
            continue
        new_watchers = {}
        for task in tasks:
            if task in watchers:
                new_watchers[task] = watchers.pop(task)
            else:
                new_watchers[task] = TaskWatcher(task)
        for task, watcher in watchers.items():
            logger.info('Telling watcher for {} to exit'.format(task))
            watcher.obsolete = True
        watchers = new_watchers


def main():
    global logger
    logger = logging.getLogger(os.path.basename(sys.argv[0]))
    logger.setLevel(logging.DEBUG)
    # Log to syslog so messages are visible when run as a service.
    handler = logging.handlers.SysLogHandler(address='/dev/log')
    logger.addHandler(handler)
    watch_tasks()


if __name__ == '__main__':
    main()

Note that the script depends on the non-standard inotify module (everything else it imports is in the standard library), which you’ll have to install from your OS package manager or PyPI.
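For example, from PyPI (assuming the package named inotify, which is the one that provides inotify.adapters):

pip install --user inotify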

Here’s the trivial systemd unit file I use to run the script as a systemd user service when I log in on my Linux computers (obviously, you’ll have to change the path to wherever you put the script; if you don’t understand what a user service is, perhaps you shouldn’t be trying to run this script with systemd 😉 ):

[Unit]
Description=Force hard-linked files to sync to Synology Drive

[Service]
Type=exec
ExecStart=/home/jik/bin/synology-inotify.py

[Install]
WantedBy=default.target
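Assuming you save the unit file as ~/.config/systemd/user/synology-inotify.service (the name is up to you), enabling and starting it looks like this:

systemctl --user daemon-reload
systemctl --user enable --now synology-inotify.service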

Perhaps this will be useful to someone other than me! If so, post a comment or email me and let me know.
