diff --git a/12.0/search/search_index.json b/12.0/search/search_index.json index 92a3a551a..7f44535c8 100644 --- a/12.0/search/search_index.json +++ b/12.0/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

Seafile is an open source cloud storage system for file sync, share and document collaboration. SeaDoc is an extension of Seafile that provides a lightweight online collaborative document feature.

"},{"location":"#license","title":"LICENSE","text":"

The different components of Seafile project are released under different licenses:

"},{"location":"#contact-information","title":"Contact information","text":""},{"location":"changelog/","title":"Changelog","text":""},{"location":"changelog/#changelogs","title":"Changelogs","text":""},{"location":"administration/","title":"Administration","text":""},{"location":"administration/#enter-the-admin-panel","title":"Enter the admin panel","text":"

As the system admin, you can enter the admin panel by clicking System Admin in the avatar popup menu.

"},{"location":"administration/#account-management","title":"Account management","text":""},{"location":"administration/#logs","title":"Logs","text":""},{"location":"administration/#backup-and-recovery","title":"Backup and Recovery","text":"

Backup and recovery:

Recover corrupt files after server hard shutdown or system crash:

You can run Seafile GC to remove unused files:

"},{"location":"administration/#clean-database","title":"Clean database","text":""},{"location":"administration/#export-report","title":"Export report","text":""},{"location":"administration/account/","title":"Account Management","text":""},{"location":"administration/account/#user-management","title":"User Management","text":"

When you set up the seahub website, you should have created an admin account. After you log in as the admin, you may add/delete users and file libraries.

"},{"location":"administration/account/#how-to-change-a-users-id","title":"How to change a user's ID","text":"

Since version 11.0, if you need to change a user's external ID, you can manually modify the database table social_auth_usersocialauth to map the new external ID to the internal ID.

For versions below 11.0, if you really want to change a user's ID, you should create a new user and then use this admin API to migrate the data from the old user to the new user: https://download.seafile.com/published/web-api/v2.1-admin/accounts.md#user-content-Migrate%20Account.

"},{"location":"administration/account/#resetting-user-password","title":"Resetting User Password","text":"

An administrator can reset a user's password on the \"System Admin\" page.

On a private server, the default settings don't allow users to reset their passwords by email. If you want to enable this, you first have to set up notification email.

"},{"location":"administration/account/#forgot-admin-account-or-password","title":"Forgot Admin Account or Password?","text":"

You may run the reset-admin.sh script under the seafile-server directory. This script helps you reset the admin account and password. No data is deleted from the admin account; the script only unlocks the account and changes its password.

"},{"location":"administration/account/#user-quota-notice","title":"User Quota Notice","text":"

Under the seafile-server-latest directory, run ./seahub.sh python-env python seahub/manage.py check_user_quota. When a user's quota usage exceeds 90%, an email will be sent to the user. If you want to enable this, you first have to set up notification email.

"},{"location":"administration/auditing/","title":"Access log and auditing","text":"

In the Pro Edition, Seafile offers four audit logs in the system admin panel:

The logging feature is turned off by default before version 6.0. Add the following option to seafevents.conf to turn it on:

[Audit]\n## Audit log is disabled by default.\n## Enabling it leads to additional SQL tables being filled up; make sure your SQL server is able to handle it.\nenabled = true\n

The audit log data is saved in seahub_db.

"},{"location":"administration/backup_recovery/","title":"Backup and Recovery","text":""},{"location":"administration/backup_recovery/#overview","title":"Overview","text":"

There are generally two parts of data to back up

If you set up the Seafile server according to our manual, you should have a directory layout like:

/opt/seafile\n  --seafile-server-9.0.x # untar from seafile package\n  --seafile-data   # seafile configuration and data (if you choose the default)\n  --seahub-data    # seahub data\n  --logs\n  --conf\n

All your library data is stored under the '/opt/seafile' directory.

Seafile also stores some important metadata in a few databases. The names and locations of these databases depend on which database software you use.

For SQLite, the database files are also under the '/opt/seafile' directory. The locations are:

For MySQL, the databases are created by the administrator, so the names can be different from one deployment to another. There are 3 databases:

"},{"location":"administration/backup_recovery/#backup-steps","title":"Backup steps","text":"

The backup is a three-step procedure:

  1. Optional: Stop the Seafile server first if you're using SQLite as the database.
  2. Back up the databases.
  3. Back up the seafile data directory.
"},{"location":"administration/backup_recovery/#backup-order-database-first-or-data-directory-first","title":"Backup Order: Database First or Data Directory First","text":"

The second sequence is better in the sense that it avoids library corruption. Like other backup solutions, some new data can be lost in recovery. There is always a backup window. However, if your storage backup mechanism can finish quickly enough, using the first sequence can retain more data.

We assume your seafile data directory is in /opt/seafile for a binary package based deployment (or /opt/seafile-data for a Docker based deployment), and that you want to back up to the /backup directory. /backup can be an NFS or Windows share mounted from another machine, or just an external disk. You can create a layout similar to the following in the /backup directory:

/backup\n---- databases/  contains database backup files\n---- data/  contains backups of the data directory\n
"},{"location":"administration/backup_recovery/#backup-and-restore-for-binary-package-based-deployment","title":"Backup and restore for binary package based deployment","text":""},{"location":"administration/backup_recovery/#backing-up-databases","title":"Backing up Databases","text":"

It's recommended to back up the database to a separate file each time. Don't overwrite older database backups for at least a week.
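A retention policy like the one above can be enforced with find. The sketch below is a safe demo run in a throwaway temporary directory and assumes GNU coreutils (touch -d, find -delete); in production, point BACKUP_DIR at /backup/databases instead.

```shell
# Demo of pruning: keep dumps from the last 7 days, delete older ones.
# BACKUP_DIR is a temporary sandbox here; use /backup/databases for real.
BACKUP_DIR=$(mktemp -d)

# Simulate one old and one recent dump file (GNU touch -d syntax).
touch -d '10 days ago' "$BACKUP_DIR/seafile-db.sql.old"
touch "$BACKUP_DIR/seafile-db.sql.new"

# Delete dump files older than 7 days.
find "$BACKUP_DIR" -name 'seafile-db.sql.*' -type f -mtime +7 -delete

ls "$BACKUP_DIR"   # only seafile-db.sql.new remains
```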

MySQL

Assume your database names are ccnet_db, seafile_db and seahub_db. mysqldump automatically locks the tables so you don't need to stop Seafile server when backing up MySQL databases. Since the database tables are usually very small, it won't take long to dump.

mysqldump -h [mysqlhost] -u[username] -p[password] --opt ccnet_db > /backup/databases/ccnet-db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seafile_db > /backup/databases/seafile-db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seahub_db > /backup/databases/seahub-db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n
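The three dumps above can be wrapped in a small loop. In this sketch the host and credentials are placeholders, the mysqldump call is stubbed out with echo so the loop can be run safely, and BACKUP_DIR is a temporary directory instead of /backup/databases.

```shell
# Sketch of a nightly wrapper around the three mysqldump commands above.
BACKUP_DIR=$(mktemp -d)          # use /backup/databases in production
STAMP=$(date +"%Y-%m-%d-%H-%M-%S")

for db in ccnet_db seafile_db seahub_db; do
    # In production, replace the echo with:
    #   mysqldump -h mysqlhost -u username -p'password' --opt "$db"
    echo "-- placeholder dump of $db" > "$BACKUP_DIR/$db.sql.$STAMP"
done

ls "$BACKUP_DIR"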

SQLite

You need to stop Seafile server first before backing up SQLite database.

sqlite3 /opt/seafile/ccnet/GroupMgr/groupmgr.db .dump > /backup/databases/groupmgr.db.bak.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nsqlite3 /opt/seafile/ccnet/PeerMgr/usermgr.db .dump > /backup/databases/usermgr.db.bak.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nsqlite3 /opt/seafile/seafile-data/seafile.db .dump > /backup/databases/seafile.db.bak.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nsqlite3 /opt/seafile/seahub.db .dump > /backup/databases/seahub.db.bak.`date +\"%Y-%m-%d-%H-%M-%S\"`\n
"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data","title":"Backing up Seafile library data","text":"

The data files are all stored in the /opt/seafile directory, so just back up the whole directory. You can directly copy the whole directory to the backup destination, or you can use rsync to do incremental backup.

To directly copy the whole data directory,

cp -R /opt/seafile /backup/data/seafile-`date +\"%Y-%m-%d-%H-%M-%S\"`\n

This produces a separate copy of the data directory each time. You can delete older backup copies after a new one is completed.

If you have a lot of data, copying the whole data directory can take a long time. You can use rsync to do incremental backups.

rsync -az /opt/seafile /backup/data\n

This command backs up the data directory to /backup/data/seafile.

"},{"location":"administration/backup_recovery/#restore-from-backup","title":"Restore from backup","text":"

Now suppose your primary Seafile server is broken and you're switching to a new machine. Use the backup data to restore your Seafile instance:

  1. Copy /backup/data/seafile to the new machine. Let's assume the Seafile deployment location on the new machine is also /opt/seafile.
  2. Restore the database.
  3. Since database and data are backed up separately, they may become a little inconsistent with each other. To correct the potential inconsistency, run seaf-fsck tool to check data integrity on the new machine. See seaf-fsck documentation.
"},{"location":"administration/backup_recovery/#restore-the-databases","title":"Restore the databases","text":"

Now with the latest valid database backup files at hand, you can restore them.

MySQL

mysql -u[username] -p[password] ccnet_db < ccnet-db.sql.2013-10-19-16-00-05\nmysql -u[username] -p[password] seafile_db < seafile-db.sql.2013-10-19-16-00-20\nmysql -u[username] -p[password] seahub_db < seahub-db.sql.2013-10-19-16-01-05\n

SQLite

cd /opt/seafile\nmv ccnet/PeerMgr/usermgr.db ccnet/PeerMgr/usermgr.db.old\nmv ccnet/GroupMgr/groupmgr.db ccnet/GroupMgr/groupmgr.db.old\nmv seafile-data/seafile.db seafile-data/seafile.db.old\nmv seahub.db seahub.db.old\nsqlite3 ccnet/PeerMgr/usermgr.db < usermgr.db.bak.xxxx\nsqlite3 ccnet/GroupMgr/groupmgr.db < groupmgr.db.bak.xxxx\nsqlite3 seafile-data/seafile.db < seafile.db.bak.xxxx\nsqlite3 seahub.db < seahub.db.bak.xxxx\n
"},{"location":"administration/backup_recovery/#backup-and-restore-for-docker-based-deployment","title":"Backup and restore for Docker based deployment","text":""},{"location":"administration/backup_recovery/#structure","title":"Structure","text":"

We assume your Seafile volumes path is /opt/seafile-data, and that you want to back up to the /backup directory.

The data files to be backed up:

/opt/seafile-data/seafile/conf  # configuration files\n/opt/seafile-data/seafile/seafile-data # data of seafile\n/opt/seafile-data/seafile/seahub-data # data of seahub\n
"},{"location":"administration/backup_recovery/#backing-up-database","title":"Backing up Database","text":"
# It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.\ncd /backup/databases\ndocker exec -it seafile-mysql mysqldump  -u[username] -p[password] --opt ccnet_db > ccnet_db.sql\ndocker exec -it seafile-mysql mysqldump  -u[username] -p[password] --opt seafile_db > seafile_db.sql\ndocker exec -it seafile-mysql mysqldump  -u[username] -p[password] --opt seahub_db > seahub_db.sql\n
"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data_1","title":"Backing up Seafile library data","text":""},{"location":"administration/backup_recovery/#to-directly-copy-the-whole-data-directory","title":"To directly copy the whole data directory","text":"
cp -R /opt/seafile-data/seafile /backup/data/\n
"},{"location":"administration/backup_recovery/#use-rsync-to-do-incremental-backup","title":"Use rsync to do incremental backup","text":"
rsync -az /opt/seafile-data/seafile /backup/data/\n
"},{"location":"administration/backup_recovery/#recovery","title":"Recovery","text":""},{"location":"administration/backup_recovery/#restore-the-databases_1","title":"Restore the databases","text":"
docker cp /backup/databases/ccnet_db.sql seafile-mysql:/tmp/ccnet_db.sql\ndocker cp /backup/databases/seafile_db.sql seafile-mysql:/tmp/seafile_db.sql\ndocker cp /backup/databases/seahub_db.sql seafile-mysql:/tmp/seahub_db.sql\n\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] ccnet_db < /tmp/ccnet_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seafile_db < /tmp/seafile_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seahub_db < /tmp/seahub_db.sql\"\n
"},{"location":"administration/backup_recovery/#restore-the-seafile-data","title":"Restore the seafile data","text":"
cp -R /backup/data/* /opt/seafile-data/seafile/\n
"},{"location":"administration/clean_database/","title":"Clean Database","text":""},{"location":"administration/clean_database/#seahub","title":"Seahub","text":""},{"location":"administration/clean_database/#session","title":"Session","text":"

Use the following command to clear expired session records in the Seahub database:

cd <install-path>/seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clearsessions\n
"},{"location":"administration/clean_database/#activity","title":"Activity","text":"

Use the following command to clear the activity records:

use seahub_db;\nDELETE FROM Activity WHERE to_days(now()) - to_days(timestamp) > 90;\n

The corresponding items in UserActivity will be deleted automatically by MariaDB when the foreign keys in the Activity table are deleted.

"},{"location":"administration/clean_database/#login","title":"Login","text":"

Use the following command to clean the login records:

use seahub_db;\nDELETE FROM sysadmin_extra_userloginlog WHERE to_days(now()) - to_days(login_date) > 90;\n
"},{"location":"administration/clean_database/#file-access","title":"File Access","text":"

Use the following command to clean the file access records:

use seahub_db;\nDELETE FROM FileAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#file-update","title":"File Update","text":"

Use the following command to clean the file update records:

use seahub_db;\nDELETE FROM FileUpdate WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#permisson","title":"Permission","text":"

Use the following command to clean the permission change audit records:

use seahub_db;\nDELETE FROM PermAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#file-history","title":"File History","text":"

Use the following command to clean the file history records:

use seahub_db;\nDELETE FROM FileHistory WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#command-clean_db_records","title":"Command clean_db_records","text":"

Use the following command to clean up, in one go, records older than 90 days in the Activity, sysadmin_extra_userloginlog, FileAudit, FileUpdate, FileHistory, PermAudit and FileTrash tables:

cd <install-path>/seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clean_db_records\n
"},{"location":"administration/clean_database/#outdated-library-data","title":"Outdated Library Data","text":"

Since version 6.2, we offer a command to clear outdated library records in the Seahub database, e.g. records that are not removed after a library is deleted. These records can't be removed at library deletion time, because users can restore a deleted library.

cd <install-path>/seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data\n

This command has been improved in version 10.0, including:

  1. It clears the invalid data in small batches, avoiding consuming too many database resources in a short time.

  2. Dry-run mode: if you just want to see how much invalid data would be deleted, without actually deleting any data, use the dry-run option, e.g.

cd <install-path>/seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data --dry-run=true\n
"},{"location":"administration/clean_database/#library-sync-tokens","title":"Library Sync Tokens","text":"

There are two tables in the Seafile database, RepoUserToken and RepoTokenPeerInfo, that are related to library sync tokens.

When you have many sync clients connected to the server, these two tables can contain a large number of rows, many of which are no longer actively used. You may clean up tokens that have not been used recently with the following SQL query:

delete t,i from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n

xxxx is the UNIX timestamp for the time before which tokens will be deleted.
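One way to compute the xxxx cutoff is with date arithmetic; the sketch below assumes GNU date (on BSD/macOS, the equivalent is date -v-90d +%s) and uses 90 days as an arbitrary example retention period.

```shell
# Compute the UNIX timestamp for "90 days ago", to substitute for xxxx
# in the DELETE query (GNU date syntax; 90 days is just an example).
CUTOFF=$(date -d '90 days ago' +%s)
echo "$CUTOFF"
```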

To be safe, you can first check how many tokens will be removed:

select * from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
"},{"location":"administration/export_report/","title":"Export Report","text":"

Since Pro edition 7.0.8, Seafile provides commands to export reports via the command line.

"},{"location":"administration/export_report/#export-user-traffic-report","title":"Export User Traffic Report","text":"
cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_user_traffic_report --date 201906\n
"},{"location":"administration/export_report/#export-user-storage-report","title":"Export User Storage Report","text":"
cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_user_storage_report\n
"},{"location":"administration/export_report/#export-file-access-log","title":"Export File Access Log","text":"
cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01\n
"},{"location":"administration/logs/","title":"Logs","text":""},{"location":"administration/logs/#log-files-of-seafile-server","title":"Log files of seafile server","text":""},{"location":"administration/logs/#log-files-for-seafile-background-node-in-cluster-mode","title":"Log files for seafile background node in cluster mode","text":""},{"location":"administration/seafile_fsck/","title":"Seafile FSCK","text":"

On the server side, Seafile stores the files in the libraries in an internal format. Seafile has its own representation of directories and files (similar to Git).

With the default installation, these internal objects are stored directly in the server's file system (such as Ext4 or NTFS). But most file systems don't assure the integrity of file contents after a hard shutdown or system crash. So if new Seafile internal objects are being written when the system crashes, they can be corrupted after the system reboots. This makes part of the corresponding library inaccessible.

Note: If you store the seafile-data directory on a battery-backed NAS (like EMC or NetApp), or use the S3 backend available in the Pro edition, the internal objects won't be corrupted.

We provide a seaf-fsck.sh script to check the integrity of libraries. The seaf-fsck tool accepts the following arguments:

cd seafile-server-latest\n./seaf-fsck.sh [--repair|-r] [--export|-E export_path] [repo_id_1 [repo_id_2 ...]]\n

There are three modes of operation for seaf-fsck:

  1. checking integrity of libraries.
  2. repairing corrupted libraries.
  3. exporting libraries.
"},{"location":"administration/seafile_fsck/#checking-integrity-of-libraries","title":"Checking Integrity of Libraries","text":"

Running seaf-fsck.sh without any arguments will run a read-only integrity check for all libraries.

cd seafile-server-latest\n./seaf-fsck.sh\n

If you want to check integrity for specific libraries, just append the library IDs as arguments:

cd seafile-server-latest\n./seaf-fsck.sh [library-id1] [library-id2] ...\n

The output looks like:

[02/13/15 16:21:07] fsck.c(470): Running fsck for repo ca1a860d-e1c1-4a52-8123-0bf9def8697f.\n[02/13/15 16:21:07] fsck.c(413): Checking file system integrity of repo fsck(ca1a860d)...\n[02/13/15 16:21:07] fsck.c(35): Dir 9c09d937397b51e1283d68ee7590cd9ce01fe4c9 is missing.\n[02/13/15 16:21:07] fsck.c(200): Dir /bf/pk/(9c09d937) is corrupted.\n[02/13/15 16:21:07] fsck.c(105): Block 36e3dd8757edeb97758b3b4d8530a4a8a045d3cb is corrupted.\n[02/13/15 16:21:07] fsck.c(178): File /bf/02.1.md(ef37e350) is corrupted.\n[02/13/15 16:21:07] fsck.c(85): Block 650fb22495b0b199cff0f1e1ebf036e548fcb95a is missing.\n[02/13/15 16:21:07] fsck.c(178): File /01.2.md(4a73621f) is corrupted.\n[02/13/15 16:21:07] fsck.c(514): Fsck finished for repo ca1a860d.\n

The corrupted files and directories are reported.

Sometimes you can see output like the following:

[02/13/15 16:36:11] Commit 6259251e2b0dd9a8e99925ae6199cbf4c134ec10 is missing\n[02/13/15 16:36:11] fsck.c(476): Repo ca1a860d HEAD commit is corrupted, need to restore to an old version.\n[02/13/15 16:36:11] fsck.c(314): Scanning available commits...\n[02/13/15 16:36:11] fsck.c(376): Find available commit 1b26b13c(created at 2015-02-13 16:10:21) for repo ca1a860d.\n

This means the \"head commit\" (current state of the library) recorded in the database is not consistent with the library data. In this case, fsck will try to find the latest consistent state and check the integrity in that state.

Tips: If you have many libraries, it's helpful to save the fsck output into a log file for later analysis.
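If you save the fsck output to a log file as suggested, the corrupted paths can be pulled out with grep afterwards. The log lines below are taken from the sample output above; the log file name is hypothetical.

```shell
# Build a small log from the sample fsck output above (file name is hypothetical).
cat > fsck.log <<'EOF'
[02/13/15 16:21:07] fsck.c(200): Dir /bf/pk/(9c09d937) is corrupted.
[02/13/15 16:21:07] fsck.c(178): File /bf/02.1.md(ef37e350) is corrupted.
[02/13/15 16:21:07] fsck.c(514): Fsck finished for repo ca1a860d.
EOF

# List only the lines reporting corruption.
grep 'is corrupted' fsck.log
```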

"},{"location":"administration/seafile_fsck/#repairing-corruption","title":"Repairing Corruption","text":"

Corruption repair in seaf-fsck basically works in two steps:

  1. If the library state (commit) recorded in database is not found in data directory, find the last available state from data directory.
  2. Check data integrity in that specific state. If files or directories are corrupted, set them to empty files or empty directories. The corrupted paths will be reported, so that the user can recover them from somewhere else.

Running the following command repairs all the libraries:

cd seafile-server-latest\n./seaf-fsck.sh --repair\n

Most of the time, you run the read-only integrity check first to find out which libraries are corrupted. Then you repair specific libraries with the following command:

cd seafile-server-latest\n./seaf-fsck.sh --repair [library-id1] [library-id2] ...\n

After repairing, seaf-fsck includes the list of corrupted files and folders in the library history, making it much easier to locate corrupted paths.

"},{"location":"administration/seafile_fsck/#best-practice-for-repairing-a-library","title":"Best Practice for Repairing a Library","text":"

To check all libraries and find out which ones are corrupted, the system admin can run seaf-fsck.sh without any arguments and save the output to a log file. Search for the keyword \"Fail\" in the log file to locate corrupted libraries. You can run seaf-fsck to check all libraries while your Seafile server is running; it won't damage or change any files.

When the system admin finds that a library is corrupted, they should run seaf-fsck.sh with \"--repair\" for that library. After the command fixes the library, the admin should inform users to recover files from other sources. There are two ways:

"},{"location":"administration/seafile_fsck/#speeding-up-fsck-by-not-checking-file-contents","title":"Speeding up FSCK by not checking file contents","text":"

Starting from Pro edition 7.1.5, an option was added to speed up fsck. Most of the running time of seaf-fsck is spent on calculating hashes of file contents. Each hash is compared with the block object ID; if they're not consistent, the block is detected as corrupted.

In many cases, file contents aren't actually corrupted; some objects are just missing from the system. So it's often enough to check only for object existence, which greatly speeds up the fsck process.

To skip checking file contents, add the \"--shallow\" or \"-s\" option to seaf-fsck.

"},{"location":"administration/seafile_fsck/#exporting-libraries-to-file-system","title":"Exporting Libraries to File System","text":"

You can use seaf-fsck to export all the files in libraries to an external file system (such as Ext4). This procedure doesn't rely on the Seafile database. As long as you have your seafile-data directory, you can always export your files from Seafile to an external file system.

The command syntax is

cd seafile-server-latest\n./seaf-fsck.sh --export top_export_path [library-id1] [library-id2] ...\n

The argument top_export_path is a directory to place the exported files. Each library will be exported as a sub-directory of the export path. If you don't specify library IDs, all libraries will be exported.

Currently only unencrypted libraries can be exported; encrypted libraries will be skipped.

"},{"location":"administration/seafile_gc/","title":"Seafile GC","text":"

Seafile uses storage de-duplication technology to reduce storage usage. The underlying data blocks will not be removed immediately after you delete a file or a library. As a result, the number of unused data blocks will increase on Seafile server.

To release the storage space occupied by unused blocks, you have to run a \"garbage collection\" program to clean up unused blocks on your server.

The GC program cleans up two types of unused blocks:

  1. Blocks that no library references, that is, blocks belonging to deleted libraries;
  2. If you set a history length limit on some libraries, the outdated blocks in those libraries will also be removed.
"},{"location":"administration/seafile_gc/#run-gc","title":"Run GC","text":""},{"location":"administration/seafile_gc/#dry-run-mode","title":"Dry-run Mode","text":"

To see how much garbage can be collected without actually removing any garbage, use the dry-run option:

seaf-gc.sh --dry-run [repo-id1] [repo-id2] ...\n

The output should look like:

[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo My Library(ffa57d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 265.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo ffa57d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 5 commits, 265 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 265 blocks total, about 265 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo aa(f3d0a8d0)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 5.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo f3d0a8d0.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 8 commits, 5 blocks.\n[03/19/15 19:41:49] gc-core.c(264): Populating index for sub-repo 9217622a.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 4 commits, 4 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 5 blocks total, about 9 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo test2(e7d26d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 507.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo e7d26d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 577 commits, 507 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 
507 blocks total, about 507 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:50] seafserv-gc.c(124): === Repos deleted by users ===\n[03/19/15 19:41:50] seafserv-gc.c(145): === GC is finished ===\n\n[03/19/15 19:41:50] Following repos have blocks to be removed:\nrepo-id1\nrepo-id2\nrepo-id3\n

If you give specific library IDs, only those libraries will be checked; otherwise all libraries will be checked.

Notice that at the end of the output there is a \"repos have blocks to be removed\" section. It contains the list of libraries that have garbage blocks. Later, when you run GC without the --dry-run option, you can use these library IDs as input arguments to the GC program.
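If the dry-run output was saved to a file, that final list of library IDs can be extracted mechanically. The log content below mirrors the sample output above; the log file name is hypothetical.

```shell
# Create a log matching the tail of the sample dry-run output above
# (file name is hypothetical).
cat > gc-dry-run.log <<'EOF'
[03/19/15 19:41:50] Following repos have blocks to be removed:
repo-id1
repo-id2
repo-id3
EOF

# Print only the library IDs that follow the marker line.
awk 'found { print } /Following repos have blocks to be removed:/ { found = 1 }' gc-dry-run.log
```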

"},{"location":"administration/seafile_gc/#removing-garbage","title":"Removing Garbage","text":"

To actually remove garbage blocks, run without the --dry-run option:

seaf-gc.sh [repo-id1] [repo-id2] ...\n

If library IDs are specified, only those libraries will be checked for garbage.

As described before, there are two types of garbage blocks to be removed. Sometimes just removing the first type of blocks (those that belong to deleted libraries) is good enough. In this case, the GC program won't bother to check the libraries for outdated historic blocks. The \"-r\" option implements this feature:

seaf-gc.sh -r\n

Libraries deleted by the users are not immediately removed from the system. Instead, they're moved into a \"trash\" in the system admin page. Before they're cleared from the trash, their blocks won't be garbage collected.

"},{"location":"administration/seafile_gc/#removing-fs-objects","title":"Removing FS objects","text":"

Since Pro Edition 8.0.6 and Community Edition 9.0, you can also remove garbage FS objects. Run it without the --dry-run option:

seaf-gc.sh --rm-fs\n

Note: This command has a bug before Pro Edition 10.0.15 and Community Edition 11.0.7 that could cause virtual libraries (e.g. shared folders) to fail to merge into their parent libraries. Please avoid using this option in the affected versions, and contact our support team if you are affected by this bug.

"},{"location":"administration/seafile_gc/#using-multiple-threads-in-gc","title":"Using Multiple Threads in GC","text":"

You can specify the thread number in GC. By default,

You can specify the thread number with the \"-t\" option, which can be used together with all other options. Each thread does GC on one library. For example, the following command uses 20 threads to GC all libraries:

seaf-gc.sh -t 20\n

Since the threads run concurrently, the output of each thread may be mixed with the others'. The library ID is printed in each line of output.
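Because each line carries a library ID, interleaved multi-threaded output can be untangled with grep. The log below is a synthetic sample built from lines like those in the dry-run output above; the file name is hypothetical.

```shell
# Synthetic interleaved GC log from two threads (file name is hypothetical).
cat > gc.log <<'EOF'
[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo My Library(ffa57d93)
[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo aa(f3d0a8d0)
[03/19/15 19:41:49] gc-core.c(262): Populating index for repo ffa57d93.
EOF

# Show only the lines belonging to one library.
grep 'ffa57d93' gc.log
```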

"},{"location":"administration/seafile_gc/#run-gc-based-on-library-id-prefix","title":"Run GC based on library ID prefix","text":"

GC usually runs quite slowly, as it needs to traverse the entire library history. You can use multiple threads to run GC in parallel. For even larger deployments, it's also desirable to run GC on multiple servers in parallel.

A simple pattern to divide the workload among multiple GC servers is to assign libraries to servers based on library ID. Since Pro edition 7.1.5, this is supported. You can add the \"--id-prefix\" option to seaf-gc.sh to specify the library ID prefix. For example, the command below will only process libraries whose ID starts with \"a123\".

seaf-gc.sh --id-prefix a123\n
"},{"location":"administration/seafile_gc/#gc-in-seafile-docker-container","title":"GC in Seafile docker container","text":"

To perform garbage collection inside the seafile docker container, you must run the /scripts/gc.sh script. Simply run docker exec <whatever-your-seafile-container-is-called> /scripts/gc.sh.

"},{"location":"administration/security_features/","title":"Security Questions","text":""},{"location":"administration/security_features/#how-is-the-connection-between-client-and-server-encrypted","title":"How is the connection between client and server encrypted?","text":"

Seafile uses HTTP(S) to sync files between client and server (since version 4.1.0).

"},{"location":"administration/security_features/#encrypted-library","title":"Encrypted Library","text":"

Seafile provides a feature called encrypted library to protect your privacy. The file encryption/decryption is performed on client-side when using the desktop client for file synchronization. The password of an encrypted library is not stored on the server. Even the system admin of the server can't view the file contents.

There are a few limitations to this feature:

  1. File metadata is NOT encrypted. The metadata includes: the complete list of directory and file names, every file's size, and the edit history, including who altered which byte ranges and when.
  2. Client-side encryption currently does NOT work in the web browser or in the cloud file explorer of the desktop client. When you browse encrypted libraries via the web browser or the cloud file explorer, you need to input the password, and the server uses the password to decrypt the \"file key\" for the library (see description below) and caches the password in memory for one hour. The plain-text password is never stored or cached on the server.
  3. If you create an encrypted library on the web interface, the library password and encryption keys will pass through the server. If you want end-to-end protection, you should create encrypted libraries from the desktop client only.
  4. For encryption protocol version 3 or newer, each library uses its own salt to derive key/iv pairs. However, all files within a library share the same salt. Likewise, all files within a library are encrypted with the same key/iv pair. With encryption protocol version 2 or earlier, all libraries use the same salt, but separate key/iv pairs.
  5. Encrypted libraries don't ensure file integrity. For example, the server admin can still partially change the contents of files in an encrypted library. The client is not able to detect such changes to the contents.

Client-side encryption has worked on the iOS client since version 2.1.6 and on the Android client since version 2.1.0.

"},{"location":"administration/security_features/#how-does-an-encrypted-library-work","title":"How does an encrypted library work?","text":"

When you create an encrypted library, you'll need to provide a password for it. All the data in that library will be encrypted with the password before uploading it to the server (see limitations above).

The encryption procedure is:

  1. Generate a 32-byte long cryptographically strong random number. This will be used as the file encryption key (\"file key\").
  2. Encrypt the file key with the user-provided password. We first use the PBKDF2 algorithm (1000 iterations of SHA256) to derive a key/iv pair from the password, then use AES-256/CBC to encrypt the file key. The result is called the \"encrypted file key\". This encrypted file key is sent to and stored on the server. When you need to access the data, the file key is decrypted from the encrypted file key.
  3. All file data is encrypted by the file key with AES-256/CBC. We use the PBKDF2 algorithm (1000 iterations of SHA256) to derive a key/iv pair from the file key. After encryption, the data is uploaded to the server.
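The key-derivation step above can be sketched with Python's standard library. This is an illustration of PBKDF2 as described in the text, not Seafile's actual implementation; in particular, how the derived bytes are split into key and IV, and the salt handling, are assumptions made here:

```python
import hashlib

def derive_key_iv(secret: bytes, salt: bytes) -> tuple[bytes, bytes]:
    """Derive a 32-byte AES-256 key and a 16-byte CBC IV via
    PBKDF2 (1000 iterations of SHA256), as the text describes.
    Splitting 48 derived bytes into key||iv is an assumption."""
    derived = hashlib.pbkdf2_hmac("sha256", secret, salt, 1000, dklen=48)
    return derived[:32], derived[32:]

# Example: derive the pair that would encrypt the random "file key".
key, iv = derive_key_iv(b"user password", b"per-library salt")
```

The derivation is deterministic, so the same password and salt always yield the same key/iv pair — which is why the client can cache only the derived pair instead of the plain-text password.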

The above encryption procedure is executed on the desktop and mobile clients. The Seahub browser client uses a different procedure that happens on the server; because of this, your password is transferred to the server.

When you sync an encrypted library to the desktop, the client needs to verify your password. When you create the library, a \"magic token\" is derived from the password and library ID. This token is stored with the library on the server side. The client uses this token to check whether your password is correct before syncing the library. The magic token is generated by the PBKDF2 algorithm with 1000 iterations of SHA256.

For maximum security, the plain-text password isn't saved on the client side either. The client only saves the key/iv pair derived from the \"file key\", which is used to decrypt the data. So if you forget the password, you won't be able to recover it or access your data on the server.

"},{"location":"administration/security_features/#why-fileserver-delivers-every-content-to-everybody-knowing-the-content-url-of-an-unshared-private-file","title":"Why fileserver delivers every content to everybody knowing the content URL of an unshared private file?","text":"

When a file download link is clicked, a random URL is generated for the user to access the file from the fileserver. This URL can only be accessed once; after that, all access to it is denied. So even if someone else happens to learn the URL, they can't access it anymore.

"},{"location":"administration/security_features/#how-does-seafile-store-user-login-password","title":"How does Seafile store user login password?","text":"

User login passwords are stored in hashed form only. Note that the user login password is different from the passwords used for encrypted libraries. In the database, its format is

PBKDF2SHA256$iterations$salt$hash\n

The record is divided into 4 parts by the $ sign.

To calculate the hash:
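A sketch of checking a password against such a record with Python's standard library. The hex encoding of the hash and the exact PBKDF2 parameters are assumptions for illustration; consult the seahub source for the authoritative format:

```python
import hashlib
import hmac

def check_password(password: str, record: str) -> bool:
    """Verify a password against a record of the form
    PBKDF2SHA256$iterations$salt$hash (hash assumed hex-encoded)."""
    algo, iterations, salt, expected = record.split("$")
    if algo != "PBKDF2SHA256":
        return False
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt.encode(), int(iterations)
    )
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(digest.hex(), expected)

# Build a matching record to demonstrate a round trip:
salt = "somesalt"
digest = hashlib.pbkdf2_hmac("sha256", b"secret", salt.encode(), 10000)
record = f"PBKDF2SHA256$10000${salt}${digest.hex()}"
```

Because only the salted, iterated hash is stored, the server never needs to keep the plain-text login password.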

"},{"location":"administration/two_factor_authentication/","title":"Two-Factor Authentication","text":"

Starting from version 6.0, we added Two-Factor Authentication to enhance account security.

There are two ways to enable this feature:

After that, there will be a \"Two-Factor Authentication\" section in the user profile page.

Users can use the Google Authenticator app on their smartphone to scan the QR code.

"},{"location":"changelog/changelog-for-seafile-professional-server-old/","title":"Seafile Professional Server Changelog (old)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#44","title":"4.4","text":"

Note: Two new options were added in version 4.4, both in seahub_settings.py

This version contains no database table change.

"},{"location":"changelog/changelog-for-seafile-professional-server-old/#449-20160229","title":"4.4.9 (2016.02.29)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#448-20151217","title":"4.4.8 (2015.12.17)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#447-20151120","title":"4.4.7 (2015.11.20)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#446-20151109","title":"4.4.6 (2015.11.09)","text":"