5. Commands

5.1. New Commands

5.1.1. Transaction Commands

These commands are used to manage the lifetime of Multi-statement Transactions. Please take note of the section on Drivers when using these commands.

command beginTransaction
{
  beginTransaction: 1,
  isolation:        '<string>'
}

Arguments:

  • isolation (string, optional): One of MVCC (default), Serializable, or Read Uncommitted.

Begins a transaction associated with the current connection. Only one transaction may be live at a time on a connection.

Returns an error if there is already another live transaction for this connection.

Requires authentication (and authorization for write privileges) to create a Serializable transaction.

In the mongo shell, there is a helper function db.beginTransaction([isolation]) that wraps this command.

Example:

> db.foo.find()
{ "_id" : 1 }
> db.beginTransaction()
{ "status" : "transaction began", "ok" : 1 }
> db.foo.insert({_id : 2})
> db.foo.find()
{ "_id" : 1 }
{ "_id" : 2 }
> db.foo.insert({_id : 3})
> db.foo.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }
> db.rollbackTransaction()
{ "status" : "transaction rolled back", "ok" : 1 }
> db.foo.find()
{ "_id" : 1 }
command commitTransaction
{
  commitTransaction: 1
}

Commits the transaction associated with the current connection. This allows future queries to see this transaction’s writes, logs this transaction’s write operations to the oplog, and releases any Document-level Locks held.

Returns an error if there is no live transaction for this connection.

In the mongo shell, there is a helper function db.commitTransaction() that wraps this command.

Example:

> db.foo.find()
{ "_id" : 1 }
> db.beginTransaction()
{ "status" : "transaction began", "ok" : 1 }
> db.foo.insert({_id : 2})
> db.foo.find()
{ "_id" : 1 }
{ "_id" : 2 }
> db.foo.insert({_id : 3})
> db.foo.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }
> db.commitTransaction()
{ "status" : "transaction committed", "ok" : 1 }
> db.foo.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }
command rollbackTransaction
{
  rollbackTransaction: 1
}

Rolls back the transaction associated with the current connection. This undoes all of this transaction’s writes and releases any Document-level Locks held.

Returns an error if there is no live transaction for this connection.

In the mongo shell, there is a helper function db.rollbackTransaction() that wraps this command.

Example:

> db.foo.find()
{ "_id" : 1 }
> db.beginTransaction()
{ "status" : "transaction began", "ok" : 1 }
> db.foo.insert({_id : 2})
> db.foo.find()
{ "_id" : 1 }
{ "_id" : 2 }
> db.foo.insert({_id : 3})
> db.foo.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }
> db.rollbackTransaction()
{ "status" : "transaction rolled back", "ok" : 1 }
> db.foo.find()
{ "_id" : 1 }

5.1.2. Loader Commands

These commands are used for controlling the Bulk Loader to build collections and indexes. They are used transparently by mongorestore and mongoimport but can be used separately by clients as well.

The bulk loader commands must be used inside a Multi-statement Transaction, and therefore cannot be used on a sharded cluster.

Example: This is an example of using this API from the mongo shell, using a primaryKey, building one secondary index, and specifying some Collection and Index Options for both indexes.

> db.beginTransaction()
{ "status" : "transaction began", "ok" : 1 }
> db.runCommand({beginLoad: 1,
...              ns:        'foo',
...              indexes:   [{key:          {x: 1, y: 1},
...                           ns:           'loader.foo',
...                           name:         'x_1_y_1',
...                           readPageSize: '8k'}],
...              options:   {compression: 'quicklz',
...                          primaryKey:  {a: 1, x: 1, _id: 1}}})
{ "status" : "load began", "ok" : true }
> db.foo.insert([{a: 100, x: 'john', y: new Date()},
...              {a: 200, x: 'leif', y: new Date()},
...              {a: 300, x: 'zardosht', y: new Date()},
...              {a: 0, x: 'tim', y: new Date()}])
> db.foo.insert({a: 400, x: 'bradley', y: new Date()})
> db.runCommand({commitLoad: 1})
{ "status" : "load committed", "ok" : true }
> db.commitTransaction()
{ "status" : "transaction committed", "ok" : 1 }
> db.foo.getIndexes()
[
    {
        "key" : {
            "a" : 1,
            "x" : 1,
            "_id" : 1
        },
        "unique" : true,
        "ns" : "loader.foo",
        "name" : "primaryKey",
        "clustering" : true,
        "compression" : "quicklz"
    },
    {
        "key" : {
            "_id" : 1
        },
        "unique" : true,
        "ns" : "loader.foo",
        "name" : "_id_",
        "compression" : "quicklz"
    },
    {
        "key" : {
            "x" : 1,
            "y" : 1
        },
        "ns" : "loader.foo",
        "name" : "x_1_y_1",
        "readPageSize" : "8k"
    }
]
> db.foo.stats()
{
        "ns" : "loadtest.foo",
        "count" : 5,
        "nindexes" : 3,
        "nindexesbeingbuilt" : 3,
        "size" : 432,
        "storageSize" : 16896,
        "totalIndexSize" : 558,
        "totalIndexStorageSize" : 33792,
        "indexDetails" : [
                {
                        "name" : "primaryKey",
                        "count" : 5,
                        "size" : 432,
                        "avgObjSize" : 86.4,
                        "storageSize" : 16896,
                        "pageSize" : 4194304,
                        "readPageSize" : 65536,
                        "fanout" : 16,
                        "compression" : "quicklz",
                        "queries" : 0,
                        "nscanned" : 0,
                        "nscannedObjects" : 0,
                        "inserts" : 0,
                        "deletes" : 0
                },
                {
                        "name" : "_id_",
                        "count" : 5,
                        "size" : 271,
                        "avgObjSize" : 54.2,
                        "storageSize" : 16896,
                        "pageSize" : 4194304,
                        "readPageSize" : 65536,
                        "fanout" : 16,
                        "compression" : "quicklz",
                        "queries" : 0,
                        "nscanned" : 0,
                        "nscannedObjects" : 0,
                        "inserts" : 0,
                        "deletes" : 0
                },
                {
                        "name" : "x_1_y_1",
                        "count" : 5,
                        "size" : 287,
                        "avgObjSize" : 57.4,
                        "storageSize" : 16896,
                        "pageSize" : 4194304,
                        "readPageSize" : 8192,
                        "fanout" : 16,
                        "compression" : "zlib",
                        "queries" : 0,
                        "nscanned" : 0,
                        "nscannedObjects" : 0,
                        "inserts" : 0,
                        "deletes" : 0
                }
        ],
        "ok" : 1
}
command beginLoad
{
  beginLoad: 1,
  ns:        '<string>',
  indexes:   [<indexspec>, ...],
  options:   <document>
}

Arguments:

  • ns (string): The name of the collection to create (without the dbname. prefix).
  • indexes (array of documents): An array of all indexes to create. Each element should be of the same form as the documents in the system.indexes collection.
  • options (document): Creation options for the collection, as they would be specified to db.createCollection(), described in Collection Options.

Supported since 1.1.0

Creates the collection ns in a special bulk loading mode. In this mode, the connection that ran beginLoad may send multiple insert operations, followed by either commitLoad or abortLoad. Other connections that try to use this collection will be rejected until the load is complete.

Example:

> db.beginTransaction()
{ "status" : "transaction began", "ok" : 1 }
> db.runCommand({beginLoad: 1,
...              ns:        'foo',
...              indexes: [{key:          {x: 1, y: 1},
...                         ns:           'loader.foo',
...                         name:         'x_1_y_1',
...                         readPageSize: '8k'}],
...              options: {compression: 'quicklz',
...                        primaryKey:  {a: 1, x: 1, _id: 1}}})
{ "status" : "load began", "ok" : true }
command commitLoad
{
  commitLoad: 1
}

Supported since 1.1.0

Commits the current bulk load in progress for this client connection. This includes the work of building all indexes for the collection, and it will block until that work is complete.

After this command returns, you should run commitTransaction to make the collection visible to other client connections.

command abortLoad
{
  abortLoad: 1
}

Supported since 1.1.0

Aborts the current bulk load in progress for this client connection. This removes the collection from the database, as if by db.collection.drop(), and destroys all temporary state created by the loader.

If the client connection times out while a loader is active, the bulk load is automatically aborted.
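
Example: a minimal sketch of abandoning a load (the index spec mirrors the beginLoad example above; the exact status strings in these responses are illustrative assumptions):

> db.beginTransaction()
{ "status" : "transaction began", "ok" : 1 }
> db.runCommand({beginLoad: 1,
...              ns:      'foo',
...              indexes: [{key: {x: 1}, ns: 'loader.foo', name: 'x_1'}],
...              options: {}})
{ "status" : "load began", "ok" : true }
> db.foo.insert({a: 1, x: 'oops'})
> db.runCommand({abortLoad: 1})
{ "status" : "load aborted", "ok" : true }
> db.rollbackTransaction()
{ "status" : "transaction rolled back", "ok" : 1 }
> db.getCollectionNames().indexOf('foo')
-1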

5.1.3. Partitioned Collections Commands

command addPartition
{
  addPartition: '<collection>',
  newMax:       <document>
}

Arguments:

  • addPartition (string): The collection to which to add a partition.
  • newMax (document, optional): The maximum key that the current last partition should take on before the new partition is added. This must be greater than any existing key in the collection. If newMax is not passed, the maximum key of the current last partition is set to the key of the last element it currently contains.

Supported since 1.5.0

Adds a partition to a Partitioned Collection. This command is used for Adding a Partition.

In the mongo shell, there is a helper function db.collection.addPartition([newMax]) that wraps this command.

Example:

rs0:PRIMARY> db.runCommand({getPartitionInfo: 'foo'})
{
    "numPartitions" : NumberLong(1),
    "partitions" : [
        {
            "_id" : NumberLong(0),
            "max" : {
                "_id" : { "$maxKey" : 1 }
            },
            "createTime" : ISODate("2014-06-17T21:14:35.040Z")
        }
    ],
    "ok" : 1
}
rs0:PRIMARY> db.foo.insert({_id:10})
rs0:PRIMARY> db.runCommand({addPartition: 'foo'})
{ "ok" : 1 }
rs0:PRIMARY> db.runCommand({getPartitionInfo: 'foo'})
{
    "numPartitions" : NumberLong(2),
    "partitions" : [
        {
            "_id" : NumberLong(0),
            "max" : {
                "_id" : 10
            },
            "createTime" : ISODate("2014-06-17T21:14:35.040Z")
        },
        {
            "_id" : NumberLong(1),
            "max" : {
                "_id" : { "$maxKey" : 1 }
            },
            "createTime" : ISODate("2014-06-17T21:15:59.156Z")
        }
    ],
    "ok" : 1
}
rs0:PRIMARY> db.runCommand({addPartition: 'foo', newMax: {_id : 20}})
{ "ok" : 1 }
rs0:PRIMARY> db.runCommand({getPartitionInfo: 'foo'})
{
    "numPartitions" : NumberLong(3),
    "partitions" : [
        {
            "_id" : NumberLong(0),
            "max" : {
                "_id" : 10
            },
            "createTime" : ISODate("2014-06-17T21:14:35.040Z")
        },
        {
            "_id" : NumberLong(1),
            "max" : {
                "_id" : 20
            },
            "createTime" : ISODate("2014-06-17T21:15:59.156Z")
        },
        {
            "_id" : NumberLong(2),
            "max" : {
                "_id" : { "$maxKey" : 1 }
            },
            "createTime" : ISODate("2014-06-17T21:16:17.871Z")
        }
    ],
    "ok" : 1
}
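The same operations can be written with the shell helper (a sketch based on the documented db.collection.addPartition([newMax]) signature; the return documents mirror the underlying command):

rs0:PRIMARY> db.foo.addPartition()
{ "ok" : 1 }
rs0:PRIMARY> db.foo.addPartition({_id: 30})
{ "ok" : 1 }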
command dropPartition
{
  dropPartition: '<collection>',
  id:            <number>
}
// or
{
  dropPartition: '<collection>',
  max:           <document>
}

Arguments:

  • dropPartition (string): The collection from which to drop a partition.
  • id (number, optional): The id of the partition to be dropped. Partition ids may be identified by running getPartitionInfo.
  • max (document, optional): Specifies the maximum partition key: all partitions containing only documents less than or equal to max are dropped.

Note

Either id or max must be passed in, but not both.

Supported since 1.5.0

Drops a partition, identified either by its partition id or by a maximum partition key. This command is used for Dropping a Partition of a Partitioned Collection.

In the mongo shell, there is a helper function db.collection.dropPartition([id]) that wraps this command when a partition id is specified. Similarly, the helper function db.collection.dropPartitionsLEQ([max]) wraps this command when a maximum partition key is specified.

Example:

rs0:PRIMARY> db.runCommand({getPartitionInfo: 'foo'})
{
    "numPartitions" : NumberLong(3),
    "partitions" : [
        {
            "_id" : NumberLong(0),
            "max" : {
                "_id" : 10
            },
            "createTime" : ISODate("2014-06-17T21:04:39.241Z")
        },
        {
            "_id" : NumberLong(1),
            "max" : {
                "_id" : 20
            },
            "createTime" : ISODate("2014-06-17T21:04:46.377Z")
        },
        {
            "_id" : NumberLong(2),
            "max" : {
                "_id" : { "$maxKey" : 1 }
            },
            "createTime" : ISODate("2014-06-17T21:04:48.704Z")
        }
    ],
    "ok" : 1
}
rs0:PRIMARY> db.runCommand({dropPartition: 'foo', id: 0})
{ "ok" : 1 }
rs0:PRIMARY> db.runCommand({getPartitionInfo: 'foo'})
{
    "numPartitions" : NumberLong(2),
    "partitions" : [
        {
            "_id" : NumberLong(1),
            "max" : {
                "_id" : 20
            },
            "createTime" : ISODate("2014-06-17T21:04:46.377Z")
        },
        {
            "_id" : NumberLong(2),
            "max" : {
                "_id" : { "$maxKey" : 1 }
            },
            "createTime" : ISODate("2014-06-17T21:04:48.704Z")
        }
    ],
    "ok" : 1
}
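The dropPartition call above could equivalently be written with either shell helper (a sketch; for this data both forms drop partition 0):

rs0:PRIMARY> db.foo.dropPartition(0)
{ "ok" : 1 }
rs0:PRIMARY> db.foo.dropPartitionsLEQ({_id: 10})
{ "ok" : 1 }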
command getPartitionInfo
{
  getPartitionInfo: '<collection>'
}

Arguments:

  • getPartitionInfo (string): The collection to get partition info from.

Supported since 1.5.0

Retrieve the list of partitions for a partitioned collection. This command provides Information About Partitioned Collections.

In the mongo shell, there is a helper function db.collection.getPartitionInfo() that wraps this command.

Example:

> db.runCommand({getPartitionInfo: 'foo'})
{
    "numPartitions" : NumberLong(2),
    "partitions" : [
        {
            "_id" : NumberLong(1),
            "max" : {
                "_id" : 20
            },
            "createTime" : ISODate("2014-06-17T21:04:46.377Z")
        },
        {
            "_id" : NumberLong(2),
            "max" : {
                "_id" : { "$maxKey" : 1 }
            },
            "createTime" : ISODate("2014-06-17T21:04:48.704Z")
        }
    ],
    "ok" : 1
}

5.1.4. Parameter Commands

Percona TokuMX provides the ability to view and change several of the Server Parameters at runtime, through the commands getParameter and setParameter.

command getParameter

View the value of a server parameter at runtime.

{
  getParameter: 1,
  <option>: 1
}

See also getParameter in the MongoDB documentation.

In the Percona TokuMX shell, there is also a wrapper function for getParameter: db.getParameter(name).

Example: The syntax to view the server parameter checkpointPeriod in the shell is:

db.getParameter('checkpointPeriod')
command setParameter

Modify a server parameter at runtime.

{
  setParameter: 1,
  <option>: <value>
}

See also setParameter in the MongoDB documentation.

In the Percona TokuMX shell, there is also a wrapper function for setParameter: db.setParameter(name, value).

Example: The syntax to modify the server parameter checkpointPeriod to 120 in the shell is:

db.setParameter('checkpointPeriod', 120)

Note

Modifying a parameter returns the pre-existing value.
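
For instance, assuming checkpointPeriod was still at its default of 60 seconds, the call above would return:

> db.setParameter('checkpointPeriod', 120)
{ "was" : 60, "ok" : 1 }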

5.1.5. Locking Commands

Percona TokuMX has some commands for controlling and viewing the behavior of both Metadata Locks and Document-level Locks.

command setClientLockTimeout
{
  setClientLockTimeout: <number>
}

Arguments:

  • setClientLockTimeout (number): New value for this client's lock timeout (in milliseconds).

Supported since 1.5.0

The lockTimeout (used for Document-level Locks) can be changed for each individual client connection, if needed.

Returns the old value for this connection.

Example:

> db.runCommand({setClientLockTimeout: 10000})
{ "was" : 4000, "ok" : 1 }
command setWriteLockYielding
{
  setWriteLockYielding: <boolean>
}

Arguments:

  • setWriteLockYielding (boolean): New value for this client's write lock yielding setting.

Supported since 1.5.0

The forceWriteLocks setting can be controlled (for read locks) for each individual client connection, if needed. This affects the behavior of per-database Metadata Locks.

The default value for each new connection is the same as the value of forceWriteLocks. If this setting is true for a particular connection, then that connection’s read locks will yield to any pending write locks.

Example:

> db.runCommand({setWriteLockYielding: true})
{ "was" : false, "ok" : 1 }
command showLiveTransactions
{
  showLiveTransactions: 1,
  cursor:       <document>
}

Arguments:

  • cursor (document, optional): If present, requests that the command return a cursor, which allows more results to be returned. The cursor document may specify options that control the cursor's creation.

Supported since 1.2.1

Lists all live transactions, and the Document-level Locks each one currently holds. The reported information is described in db.showLiveTransactions().
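
Example (a minimal invocation sketch; the fields of the result are as described in db.showLiveTransactions()):

> db.runCommand({showLiveTransactions: 1})

or, to request a cursor over the results:

> db.runCommand({showLiveTransactions: 1, cursor: {}})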

command showPendingLockRequests
{
  showPendingLockRequests: 1,
  cursor:                  <document>
}

Arguments:

  • cursor (document, optional): If present, requests that the command return a cursor, which allows more results to be returned. The cursor document may specify options that control the cursor's creation.

Supported since 1.2.1

Lists all pending requests for Document-level Locks. The reported information is described in db.showPendingLockRequests().
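
Example (likewise a minimal sketch; the result fields are described in db.showPendingLockRequests()):

> db.runCommand({showPendingLockRequests: 1, cursor: {}})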

5.1.6. Replication Commands

command replAddPartition
{
  replAddPartition: 1
}

Supported since 1.4.0

Adds a partition to the oplog.rs and oplog.refs collections.

Returns an error, and does not add the partition, if the current last partition has no oplog entries.

Requires authentication and cluster admin write privileges.
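
Example (a sketch; run against the admin database on a replica set primary):

rs0:PRIMARY> db.adminCommand('replAddPartition')
{ "ok" : 1 }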

command replGetExpireOplog
{
  replGetExpireOplog: 1
}

Supported since 1.0.4

Retrieve the values of expireOplogDays and expireOplogHours.

Requires authentication and cluster admin read privileges.

Example:

rs0:PRIMARY> db.adminCommand('replGetExpireOplog')
{ "expireOplogDays" : 14, "expireOplogHours" : 0, "ok" : 1 }
command replSetExpireOplog
{
  replSetExpireOplog: 1,
  expireOplogDays:    <number>,
  expireOplogHours:   <number>
}

Arguments:

  • expireOplogDays (number): Specifies the number of days to keep oplog data.
  • expireOplogHours (number): Specifies the number of hours to keep oplog data.

Supported since 1.0.4

Set the amount of oplog data, in time, that is saved. Any oplog data older than the specified amount of time may be trimmed and therefore removed by a background thread.

Requires authentication and cluster admin write privileges.

Example:

rs0:PRIMARY> db.adminCommand({replSetExpireOplog: 1, expireOplogDays: 4, expireOplogHours: 0})
{ "ok" : 1 }
rs0:PRIMARY> db.adminCommand('replGetExpireOplog')
{ "expireOplogDays" : 4, "expireOplogHours" : 0, "ok" : 1 }
command replTrimOplog
{
  replTrimOplog: 1,
  ts:            <date>
}
// or
{
  replTrimOplog: 1,
  gtid:          <GTID>
}

Arguments:

  • ts (ISODate): Timestamp to which the oplog is to be trimmed. Partitions known not to contain entries created after this timestamp are dropped.
  • gtid (BinData): GTID to which the oplog is to be trimmed. Partitions known not to contain entries greater than this GTID are dropped. The GTID must be a valid 16-byte value, as stored in the _id field of oplog entries.

Note

Either ts or gtid must be passed in, but not both.

Supported since 1.4.0

Trims the oplog by dropping partitions. Partitions known not to contain entries newer than the specified GTID or date are dropped.

Requires authentication and cluster admin write privileges.

Example using a date:

rs0:PRIMARY> rs.oplogPartitionInfo()
{
    "numPartitions" : NumberLong(4),
    "partitions" : [
        {
            "_id" : NumberLong(0),
            "max" : {
                "_id" : BinData(0,"AAAAAAAAAAEAAAAAAAAEeg==")
            },
            "createTime" : ISODate("2014-06-13T20:09:56.819Z")
        },
        {
            "_id" : NumberLong(1),
            "max" : {
                "_id" : BinData(0,"AAAAAAAAAAEAAAAAAAAFCg==")
            },
            "createTime" : ISODate("2014-06-14T20:10:01.847Z")
        },
        {
            "_id" : NumberLong(2),
            "max" : {
                "_id" : BinData(0,"AAAAAAAAAAEAAAAAAAAFmg==")
            },
            "createTime" : ISODate("2014-06-15T20:10:02.200Z")
        },
        {
            "_id" : NumberLong(3),
            "max" : {
                "_id" : { "$maxKey" : 1 }
            },
            "createTime" : ISODate("2014-06-16T20:10:02.543Z")
        }
    ],
    "ok" : 1
}
rs0:PRIMARY> db.adminCommand({replTrimOplog: 1, ts: ISODate('2014-06-14T20:10:01.847Z')})
{ "ok" : 1 }
rs0:PRIMARY> rs.oplogPartitionInfo()
{
    "numPartitions" : NumberLong(3),
    "partitions" : [
        {
            "_id" : NumberLong(1),
            "max" : {
                "_id" : BinData(0,"AAAAAAAAAAEAAAAAAAAFCg==")
            },
            "createTime" : ISODate("2014-06-14T20:10:01.847Z")
        },
        {
            "_id" : NumberLong(2),
            "max" : {
                "_id" : BinData(0,"AAAAAAAAAAEAAAAAAAAFmg==")
            },
            "createTime" : ISODate("2014-06-15T20:10:02.200Z")
        },
        {
            "_id" : NumberLong(3),
            "max" : {
                "_id" : { "$maxKey" : 1 }
            },
            "createTime" : ISODate("2014-06-16T20:10:02.543Z")
        }
    ],
    "ok" : 1
}

Example using a GTID:

rs0:PRIMARY> rs.oplogPartitionInfo()
{
    "numPartitions" : NumberLong(3),
    "partitions" : [
        {
            "_id" : NumberLong(1),
            "max" : {
                "_id" : BinData(0,"AAAAAAAAAAEAAAAAAAAFCg==")
            },
            "createTime" : ISODate("2014-06-14T20:10:01.847Z")
        },
        {
            "_id" : NumberLong(2),
            "max" : {
                "_id" : BinData(0,"AAAAAAAAAAEAAAAAAAAFmg==")
            },
            "createTime" : ISODate("2014-06-15T20:10:02.200Z")
        },
        {
            "_id" : NumberLong(3),
            "max" : {
                "_id" : { "$maxKey" : 1 }
            },
            "createTime" : ISODate("2014-06-16T20:10:02.543Z")
        }
    ],
    "ok" : 1
}
rs0:PRIMARY> db.runCommand({replTrimOplog:1, gtid : BinData(0,"AAAAAAAAAAEAAAAAAAAFCg==")})
{ "ok" : 1 }
rs0:PRIMARY> rs.oplogPartitionInfo()
{
    "numPartitions" : NumberLong(2),
    "partitions" : [
        {
            "_id" : NumberLong(2),
            "max" : {
                "_id" : BinData(0,"AAAAAAAAAAEAAAAAAAAFmg==")
            },
            "createTime" : ISODate("2014-06-15T20:10:02.200Z")
        },
        {
            "_id" : NumberLong(3),
            "max" : {
                "_id" : { "$maxKey" : 1 }
            },
            "createTime" : ISODate("2014-06-16T20:10:02.543Z")
        }
    ],
    "ok" : 1
}

5.1.7. Plugin Commands

Plugins are dynamically loadable modules that add extra functionality to Percona TokuMX, for example, Hot Backup. These commands are used to control which plugins are loaded into the server.

command listPlugins
{
  listPlugins: 1
}

Supported since 1.1.0

Lists which plugins are currently loaded, and information about them.

Example:

> db.adminCommand('listPlugins')
{
    "plugins" : [
        {
            "filename" : "/opt/tokumx/lib64/plugins/libbackup_plugin.so",
            "fullpath" : "/opt/tokumx/lib64/plugins/libbackup_plugin.so",
            "name" : "backup_plugin",
            "version" : "tokubackup 1.1 $Revision: 56100 $",
            "checksum" : "688f63cb3018caa3efb74ff829ee3568",
            "commands" : [
                "backupStart",
                "backupThrottle",
                "backupStatus"
            ]
        }
    ],
    "ok" : 1
}
command loadPlugin
{
  loadPlugin: '<name>',
  checksum:   '<string>'
}

Arguments:

  • loadPlugin (string): Name of the plugin to be loaded (searches for lib<name>.so).
  • checksum (string, optional): Checksum of the plugin to verify (loading is aborted if the checksum doesn't match).

Supported since 1.1.0

Loads a new plugin by name. The pluginsDir is searched for a file named lib<name>.so, and if such a file is found, it is loaded as a plugin.

If checksum is provided, the plugin’s checksum is verified before it is loaded.

If the plugin is successfully loaded, its information is returned, just as would be reported by listPlugins.

Example:

> db.adminCommand({loadPlugin: 'backup_plugin'})
{
    "loaded" : {
        "filename" : "/opt/tokumx/lib64/plugins/libbackup_plugin.so",
        "fullpath" : "/opt/tokumx/lib64/plugins/libbackup_plugin.so",
        "name" : "backup_plugin",
        "version" : "tokubackup 1.1 $Revision: 56100 $",
        "checksum" : "688f63cb3018caa3efb74ff829ee3568",
        "commands" : [
            "backupStart",
            "backupThrottle",
            "backupStatus"
        ]
    },
    "ok" : 1
}
command unloadPlugin
{
  unloadPlugin: '<string>'
}

Arguments:

  • unloadPlugin (string): Name of the plugin to unload (see loadPlugin).

Supported since 1.1.0

Unloads the named plugin, removing its functionality from the server.

Example:

> db.adminCommand({unloadPlugin: 'backup_plugin'})
{ "ok" : 1 }

5.2. Hot Backup Commands

These commands are part of the Hot Backup component available only in TokuMX Enterprise Edition.

command backupStart
{
  backupStart: '<destination>'
}

Arguments:

  • destination (string): The directory where the backup files will reside.

Runs a Hot Backup. This copies the dbpath to destination while the server remains online, and leaves the copied files with contents identical to what was committed to disk at the moment the backupStart command returns.

Note

For more information about how backup works, see Hot Backup.

Returns an error if there is already another backup operation running.

The backup destination must be a directory that exists, and should not be a subdirectory of dbpath.

If a separate logDir is used from dbpath, then destination will contain two directories, data (containing the contents of dbpath) and log (containing the contents of logDir).

Note

Since Hot Backup copies data recursively, if logDir is a subdirectory of dbpath, all data is copied directly into destination.

Example:

> var d = new Date()
> var month = (d.getMonth() < 9 ? '0' : '') + (d.getMonth() + 1)
> var backupName = 'tokumx-' + d.getFullYear() + month + d.getDate()
> db.runCommand({backupStart: '/mnt/backup/' + backupName})
{ "ok" : 1 }
command backupStatus
{
  backupStatus: 1
}

Queries the Hot Backup system for the status of a running backup operation, if one is running.

Returns an error if there is no hot backup operation in progress.

Example:

> db.runCommand('backupStatus')
{
      "percent" : 22.522784769535065,
      "bytesDone" : NumberLong(16875520),
      "files" : {
              "done" : 5,
              "total" : 20
      },
      "current" : {
              "source" : "/var/lib/tokumx/log000000000004.tokulog27",
              "dest" : "/mnt/backup/tokumx-demo/log000000000004.tokulog27",
              "bytes" : {
                      "done" : NumberLong(16777216),
                      "total" : NumberLong(57805156)
              }
      },
      "ok" : 1
}
command backupThrottle
{
  backupThrottle: <rate>
}

Arguments:

  • rate (integer or string): The rate, in bytes per second, at which the Hot Backup system will use I/O to copy files, independent of client write activity. A string value may use a 'K', 'M', or 'G' suffix.

The Hot Backup system uses I/O in two ways: for mirroring writes, and for copying files (see Concepts for more details). Mirrored writes must be completed immediately, but file copying can be slow.

This command controls how much I/O (in bytes per second) is used for file copying. By default, backups do not limit themselves this way, but throttling the backup operation can help reduce the impact on a running server.

Example:

> db.runCommand({backupThrottle: '10MB'})
{ "ok" : 1 }

5.3. Point in Time Recovery Commands

This command is part of the Point in Time Recovery component available only in Percona TokuMX Enterprise Edition.

command recoverToPoint
{
  recoverToPoint: 1,
  ts:             <date>
}
// or
{
  recoverToPoint: 1,
  gtid:           <GTID>
}
Arguments:

  • ts (ISODate): Timestamp to which the server is to be recovered.
  • gtid (BinData): GTID to which the server is to be recovered.

Supported since 2.0.0

Runs Point in Time Recovery. This syncs and applies all entries from another replica set member’s oplog up to the provided timestamp or GTID.

Note

For more information about how point in time recovery works, see Point in Time Recovery.

The server must be a member of a replica set, and must be in maintenance mode. To bring up a server in maintenance mode (to make sure it doesn’t sync anything immediately on startup), use the server parameter rsMaintenance.

Warning

Do not run multiple instances of recoverToPoint concurrently.

Example:

rs0:RECOVERING> db.runCommand({recoverToPoint: 1, gtid: GTID(1, 152)})
{ "ok" : 1 }

5.3.1. Administrative Commands

command checkpoint
{
  checkpoint: 1
}

Forces Percona TokuMX to run a checkpoint immediately, rather than waiting for the checkpointPeriod timer to expire.
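
Example (a minimal invocation sketch):

> db.adminCommand({checkpoint: 1})
{ "ok" : 1 }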

command engineStatus
{
  engineStatus: 1
}

Retrieves raw status information from the Fractal Tree indexing engine. This is generally for diagnostic and development use only. The information is aggregated and presented in a more user-friendly form by db.serverStatus(); the TokuMX-specific fields there are detailed in the Server Status section.
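
Example (a minimal invocation sketch; the returned document, not reproduced here, is large and consists of raw engine counters):

> db.adminCommand({engineStatus: 1})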

5.3.2. Internal-only Commands

command _collectionsExist

Supported since 1.1.0

Internal use only.

command _migrateStartCloneTransaction

Supported since 1.4.0

Internal use only.

command clonePartitionInfo

Supported since 1.5.0

Internal use only.

command logReplInfo

Supported since 1.3.2

Internal use only.

command replUndoOplogEntry

Supported since 1.3.2

Internal use only.

command showSizes

Supported since 1.5.0

Internal use only.

command updateSlave

Internal use only.

5.4. Deprecated Commands

  • clean

    Deprecated internal MongoDB command.

  • cloneCollectionAsCapped

    Capped collections are deprecated in favor of Partitioned Collections.

  • closeAllDatabases

    Deprecated internal MongoDB command.

  • collMod

    Percona TokuMX Fractal Tree indexes do not suffer the fragmentation problems of MongoDB’s data storage, so the powerOf2Sizes option is deprecated, as well as this command. TTL indexes are also deprecated in favor of Partitioned Collections.

  • compact

    Percona TokuMX Fractal Tree indexes do not fragment nor do they corrupt themselves in the way that MongoDB’s indexes do, so this command is unneeded.

  • convertToCapped

    Capped collections are deprecated in favor of Partitioned Collections.

  • godinsert

    Deprecated internal MongoDB command.

  • journalLatencyTest

    The Percona TokuMX transaction log is different from the MongoDB journal and does not need this command.

  • logRotate

    You should instead use SIGUSR1 to rotate logs.

  • repairDatabase

    Percona TokuMX Fractal Tree indexes do not fragment nor do they corrupt themselves in the way that MongoDB’s indexes do, so this command is unneeded.

  • validate

    Percona TokuMX Fractal Tree indexes are different from MongoDB’s B-tree indexes and do not need to be validated the same way.