Backup and Restore

Backup

In DolphinDB, data is backed up by partition. You can use function backup to back up partitions, tables, or a database.

DolphinDB offers two types of backup:

(1) Back up by copying files

When you specify the parameter dbPath for the backup function, the system copies the data files of each partition to the target directory backupDir/dbName/tbName/chunkID. The metadata files _metaData and domain are generated under the directory backupDir/dbName/tbName.

The following example backs up database dfs://compoDB by copying files:

$ backup("/home/DolphinDB/backup","dfs://compoDB",true);

(2) Back up with SQL statements

When you specify the parameter sqlObj for the backup function, the system serializes the data of each partition and saves it as a binary (.bin) file under the directory backupDir/dbName/tbName. The metadata files _metaData and domain are generated under the same directory.

The following example backs up table dfs://compoDB/pt with SQL statements:

$ backup("/home/DolphinDB/backup",<select * from loadTable("dfs://compoDB","pt")>,true);

DolphinDB provides the following encapsulated backup functions for one-click backup (see the sketch after this list):

  • backupDB: backs up a database by copying files

  • backupTable: backs up a table by copying files
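
As a rough illustration, the calls below show how backupDB and backupTable might be invoked for the database and table used later in this section. The minimal parameter forms (backupDir and dbPath, plus tableName for backupTable) are assumptions; check the function reference for optional parameters.

$ // assumed minimal forms; see the function reference for optional parameters
$ backupDB("/home/DolphinDB/backup", "dfs://compoDB");          // back up an entire database by copying files
$ backupTable("/home/DolphinDB/backup", "dfs://compoDB", "pt"); // back up a single table by copying files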

Restore

DolphinDB provides the following functions to restore data (a usage sketch of restoreDB and restoreTable follows the table):

| Function | migrate | restore | restoreDB | restoreTable |
|----------|---------|---------|-----------|--------------|
| Objects to be restored | all databases and tables | some or all partitions of a table | a database | a table |
| Supports backups made with SQL statements | ✓ | ✓ | × | × |
| Supports backups made by copying files | ✓ | ✓ | ✓ | ✓ |
| Supports restore across storage engines | × | ✓ | ✓ | ✓ |
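
The sketch below shows how restoreDB and restoreTable might be called to restore a file-copy backup created with backupDB or backupTable. The minimal parameter forms (backupDir and dbPath, plus tableName for restoreTable) are assumptions; consult the function reference for optional parameters such as a different target database.

$ // assumed minimal forms; see the function reference for optional parameters
$ restoreDB("/home/DolphinDB/backup", "dfs://compoDB");          // restore an entire database from a file-copy backup
$ restoreTable("/home/DolphinDB/backup", "dfs://compoDB", "pt"); // restore a single table from a file-copy backup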

Related functions

  • getBackupList: Returns a table with backup information of a DFS table. Each row corresponds to a backed-up partition.

  • getBackupMeta: Returns a dictionary with backup information of a partition in a DFS table.

  • loadBackup: Loads the backup of a partition in a DFS table into memory.

  • checkBackup: Checks the data integrity of the backup files.

  • getBackupStatus: Returns a table with the status of backup/restore tasks (see the sketch after this list).
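
A minimal sketch of how checkBackup and getBackupStatus might be used to verify a backup and monitor backup/restore tasks. The parameter forms shown here are assumptions (backupDir and dbPath for checkBackup, no arguments for getBackupStatus); consult the function reference for the actual signatures.

$ // assumed parameter forms; see the function reference for the actual signatures
$ checkBackup("/home/DolphinDB/backup", "dfs://compoDB");   // verify the integrity of the backup files
$ getBackupStatus();                                        // check the status of backup/restore tasks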

Examples

Create a DFS database dfs://compoDB:

$ n=1000000
$ ID=rand(100, n)
$ dates=2017.08.07..2017.08.11
$ date=rand(dates, n)
$ x=rand(10.0, n)
$ t=table(ID, date, x);

$ dbDate = database(, VALUE, 2017.08.07..2017.08.11)
$ dbID=database(, RANGE, 0 50 100);
$ db = database("dfs://compoDB", COMPO, [dbDate, dbID]);
$ pt = db.createPartitionedTable(t, `pt, `date`ID)
$ pt.append!(t);

Back up all data of table pt:

$ backup("/home/DolphinDB/backup",<select * from loadTable("dfs://compoDB","pt")>,true);

Specify a where condition in the SQL metacode to back up only the partitions with date>2017.08.10:

$ backup("/home/DolphinDB/backup",<select * from loadTable("dfs://compoDB","pt") where date>2017.08.10>,true);

Check the backup information of table pt:

$ getBackupList("/home/DolphinDB/backup","dfs://compoDB","pt");

Get information about the backup of the partition 20170810/0_50:

$ x = getBackupMeta("/home/DolphinDB/backup","dfs://compoDB/20170810/0_50","pt");

Load the backup of the partition 20170810/0_50 into memory:

$ loadBackup("/home/DolphinDB/backup","dfs://compoDB/20170810/0_50","pt");

Restore table pt from the backup to table pt of database dfs://db1:

$ migrate("/home/DolphinDB/backup", "dfs://compoDB", "pt", "dfs://db1", "pt")

When using function migrate, the system creates the new database automatically.

Create table temp in dfs://compoDB with the same schema as table pt:

$ temp=db.createPartitionedTable(t, `temp, `date`ID);

Restore the backup of all partitions of table pt with date=2017.08.10 into table temp, which has the same schema as table pt:

$ restore("/home/DolphinDB/backup","dfs://compoDB","pt","%20170810%",true,temp);
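
To verify the result, the row count of the restored partitions in temp can be compared against the corresponding partitions of the original table:

$ select count(*) from loadTable("dfs://compoDB","pt") where date=2017.08.10;
$ select count(*) from temp where date=2017.08.10;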