Mistakes can happen. If only we could go back in time to the very second before that mistake was made.
Plain text version for those who cannot run the asciicast above:
akira@perc01:/data$ #OK, let's get this party started!
akira@perc01:/data$ # The frontend has been shut down for 20 mins so they can
akira@perc01:/data$ # update that part, and I can update the schema in the
akira@perc01:/data$ # backend simultaneously.
akira@perc01:/data$ #Easy-peasy ...
akira@perc01:/data$ date
Tue Jul 2 13:34:09 JST 2019
akira@perc01:/data$ #Just set my auth details. (NO PEEKING!)
akira@perc01:/data$ conn_args="--host localhost:27017 --username akira --password secret --authenticationDatabase admin"
akira@perc01:/data$ mongo ${conn_args} --quiet
testrs:PRIMARY> use payments
switched to db payments
testrs:PRIMARY> show collections
TheImportantCollection
testrs:PRIMARY> //Ah, there it is. Time to work!
testrs:PRIMARY> db.TheImportantCollection.count()
174662
testrs:PRIMARY> db.TheImportantCollection.findOne()
{
    "_id" : 0,
    "customer" : {
        "fn" : "Smith",
        "gn" : "Ken",
        "city" : "Georgevill",
        "street1" : "1 Wishful St.",
        "postcode" : "45031"
    },
    "order_ids" : [ ]
}
testrs:PRIMARY> //Ah, there it is. The "customer" object that has the
testrs:PRIMARY> //address fields in it. We're going to move those out.
testrs:PRIMARY> //Copy the whole collection, adding the new "addresses" array
testrs:PRIMARY> var counter = 0;
testrs:PRIMARY> db.TheImportantCollection.find().forEach(function(d) {
...   d["adresses"] = [ ];
...   db.TheImportantCollectionV2.insert(d);
...   counter += 1;
...   if (counter % 25000 == 0) { print(counter + " updates done"); }
... });
25000 updates done
50000 updates done
75000 updates done
100000 updates done
125000 updates done
150000 updates done
testrs:PRIMARY> //Cool. Let's look at the temp table
testrs:PRIMARY> db.TheImportantCollectionV2.findOne()
{
    "_id" : 0,
    "customer" : {
        "fn" : "Smith",
        "gn" : "Ken",
        "city" : "Georgevill",
        "street1" : "1 Wishful St.",
        "postcode" : "45031"
    },
    "order_ids" : [ ],
    "adresses" : [ ]
}
testrs:PRIMARY> //?AH!!
testrs:PRIMARY> //typo. I misspelled "addresses".
testrs:PRIMARY> //I'll just drop this and go again
testrs:PRIMARY> db.TheImportantCollectionV2.remove({})
WriteResult({ "nRemoved" : 174662 })
testrs:PRIMARY> //ooops. Why did I bother deleting the docs?
testrs:PRIMARY> //I need to *drop* the collection
testrs:PRIMARY> db.TheImportantCollection.drop()
true
testrs:PRIMARY> //!!!!
testrs:PRIMARY> //Wait!
testrs:PRIMARY> show collections
TheImportantCollectionV2
testrs:PRIMARY> //...
testrs:PRIMARY> //I've done a bad thing ....
testrs:PRIMARY> //Let me see
testrs:PRIMARY> //in the oplog
testrs:PRIMARY> use local
switched to db local
testrs:PRIMARY> db.oplog.rs.findOne({"o.drop": "TheImportantCollection"})
{
    "ts" : Timestamp(1562042272, 1),
    "t" : NumberLong(6),
    "h" : NumberLong("6726633412398410781"),
    "v" : 2,
    "op" : "c",
    "ns" : "payments.$cmd",
    "ui" : UUID("abc9c1f9-71c0-45ea-aeba-ea239b975a95"),
    "wall" : ISODate("2019-07-02T04:37:52.171Z"),
    "o" : {
        "drop" : "TheImportantCollection"
    }
}
testrs:PRIMARY> //AH. 1562042272, you are the worst unix epoch second of my
testrs:PRIMARY> // life.
testrs:PRIMARY>
Plain text version for those who cannot run the asciicast above:
akira@perc01:/data$ #OK, OK, this is bad. I dropped TheImportantCollection
akira@perc01:/data$ #Breathe. Breathe Akira.
akira@perc01:/data$ #Right! Backups!
akira@perc01:/data$ #I have backups!
akira@perc01:/data$ ls /backups/
20190624_2300 20190626_2300 20190628_2300
20190625_2300 20190627_2300 20190629_2300
akira@perc01:/data$ #OK, I have one from 23:00 JST ... which is a while ago.
akira@perc01:/data$ #I can use the latest backup, then roll forward from
akira@perc01:/data$ # there using this neat thing you can do with
akira@perc01:/data$ # mongorestore (the standard mongo utils command)
akira@perc01:/data$ #You can replay a dumped oplog bson file
akira@perc01:/data$ # on a primary like it was a secondary receiving it
akira@perc01:/data$ #Just as a secondary can catch up from a primary as
akira@perc01:/data$ # far as the oplog window of time goes, a primary can
akira@perc01:/data$ # be given an oplog history to replay, using this 'trick'
akira@perc01:/data$ #(Not really a trick, but let's call it that)
akira@perc01:/data$
akira@perc01:/data$ #
akira@perc01:/data$ #But, before doing ANYTHING with the backups,
akira@perc01:/data$ # get a full dump of the oplog of the *live* replicaset
akira@perc01:/data$ # first
akira@perc01:/data$ conn_args="--host localhost:27017 --username akira --password secret --authenticationDatabase admin"
akira@perc01:/data$ mongodump ${conn_args} -d local -c oplog.rs --out /data/oplog_dump_full
2019-07-02T13:50:02.713+0900 writing local.oplog.rs to
2019-07-02T13:50:03.635+0900 done dumping local.oplog.rs (825815 documents)
akira@perc01:/data$ #Oh wait.
akira@perc01:/data$ #We *do* need a trick
akira@perc01:/data$ #v3.6 and v4.0 added some system collections that cause
akira@perc01:/data$ # mongorestore to fail, no matter what we do.
akira@perc01:/data$ # This is just a 3.6 and 4.0 issue hopefully, but 4.2's
akira@perc01:/data$ # behaviour is not known at this date.
akira@perc01:/data$ #I'll do the dump again, removing these two collections
akira@perc01:/data$ mongodump ${conn_args} -d local -c oplog.rs \
> --query '{"ns": {"$nin": ["config.system.sessions", "config.cache.collections"]}}' --out /data/oplog_dump_full
2019-07-02T13:52:08.841+0900 writing local.oplog.rs to
2019-07-02T13:52:10.010+0900 done dumping local.oplog.rs (825781 documents)
akira@perc01:/data$ #So that was Trick #1. Removing those 2 specific
akira@perc01:/data$ # config.* collections.
akira@perc01:/data$ #Now for Trick #2
akira@perc01:/data$ #mongodump puts the dumped oplog.rs.bson file in subdirectory "local" like that is a whole DB to restore. But you don't do a restore of local like any other DB, it doesn't work like that.
akira@perc01:/data$ #So we MUST get rid of the subdirectory structure and just keep the single *.bson file
akira@perc01:/data$ ls -lR /data/oplog_dump_full/
/data/oplog_dump_full/:
total 146032
drwxr-xr-x 2 akira akira 57 Jul 2 13:50 local
-rw-r--r-- 1 akira akira 149534510 Jul 2 10:26 oplog.rs.bson

/data/oplog_dump_full/local:
total 233008
-rw-r--r-- 1 akira akira 238596091 Jul 2 13:52 oplog.rs.bson
-rw-r--r-- 1 akira akira 120 Jul 2 13:52 oplog.rs.metadata.json
akira@perc01:/data$ mv /data/oplog_dump_full/local/oplog.rs.bson /data/oplog_dump_full/
akira@perc01:/data$ rm -rf /data/oplog_dump_full/local
akira@perc01:/data$ ls -lR /data/oplog_dump_full/
/data/oplog_dump_full/:
total 233004
-rw-r--r-- 1 akira akira 238596091 Jul 2 13:52 oplog.rs.bson
akira@perc01:/data$ #OK.
akira@perc01:/data$ #Now let's look at this oplog. Does it go back as far as
akira@perc01:/data$ # the latest backup snapshot or more?
akira@perc01:/data$ ls /backups/ | tail -n 1
20190629_2300
akira@perc01:/data$ #By the way that is my JST timezone, not UTC
akira@perc01:/data$ #let's see ... check the bson file's first timestamp
akira@perc01:/data$ bsondump /data/oplog_dump_full/oplog.rs.bson 2>/dev/null | head -n 1
{"ts":{"$timestamp":{"t":1561727517,"i":1}},"h":{"$numberLong":"212971303912007811"},"v":2,"op":"n","ns":"","wall":{"$date":"2019-06-28T13:11:57.633Z"},"o":{"msg":"initiating set"}}
akira@perc01:/data$ #I see the epoch timestamp there: 1561727517
akira@perc01:/data$ date -d @1561727517
Fri Jun 28 22:11:57 JST 2019
akira@perc01:/data$ #Ah, good, that's before 20190629_2300
akira@perc01:/data$ #We can do an oplog replay
akira@perc01:/data$ #Just for sanity's sake let's look for that "drop"
akira@perc01:/data$ # command that is the disaster we want to avoid replaying
akira@perc01:/data$ bsondump /data/oplog_dump_full/oplog.rs.bson 2>/dev/null | grep drop | grep '\bTheImportantCollection\b' | tail -n 1
{"ts":{"$timestamp":{"t":1562042272,"i":1}},"t":{"$numberLong":"6"},"h":{"$numberLong":"6726633412398410781"},"v":2,"op":"c","ns":"payments.$cmd","ui":{"$binary":"q8nB+XHARequuuojm5dalQ==","$type":"04"},"wall":{"$date":"2019-07-02T04:37:52.171Z"},"o":{"drop":"TheImportantCollection"}}
akira@perc01:/data$ #Let's see it was 1562042272, the worst epoch second of my
akira@perc01:/data$ # life. Let's not go there again!
akira@perc01:/data$ #Time to shut the live replicaset down, restore a snapshot
akira@perc01:/data$ # backup from 20190629_2300
akira@perc01:/data$ ps -C mongod -o pid,args
  PID COMMAND
18119 mongod -f /data/n1/mongod.conf
18195 mongod -f /data/n2/mongod.conf
18225 mongod -f /data/n3/mongod.conf
akira@perc01:/data$ kill 18119 18195 18225
akira@perc01:/data$ ps -C mongod -o pid,args
  PID COMMAND
18119 mongod -f /data/n1/mongod.conf
akira@perc01:/data$ ps -C mongod -o pid,args
  PID COMMAND
18119 mongod -f /data/n1/mongod.conf
akira@perc01:/data$ ps -C mongod -o pid,args
  PID COMMAND
18119 mongod -f /data/n1/mongod.conf
akira@perc01:/data$ ps -C mongod -o pid,args
  PID COMMAND
akira@perc01:/data$ #OK, shutdown
akira@perc01:/data$ /data/dba_scripts/our_restore_script.sh
usage: /data/dba_scripts/our_restore_script.sh XXXXXX
Choose one of these subdirectory names from /backups/:
  20190624_2300
  20190625_2300
  20190626_2300
  20190627_2300
  20190628_2300
  20190629_2300
akira@perc01:/data$ /data/dba_scripts/our_restore_script.sh 20190629_2300
Stopping mongod nodes
Restoring backup 20190629_2300 to one node dbpath
Restarting
about to fork child process, waiting until server is ready for connections.
forked process: 21776
child process started successfully, parent exiting
akira@perc01:/data$ ps -C mongod -o pid,args
  PID COMMAND
21776 mongod -f /data/n1/mongod.conf
akira@perc01:/data$ #I'll start the secondaries too
akira@perc01:/data$ rm -rf /data/n2/data/*
akira@perc01:/data$ mongod -f /data/n2/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 21859
child process started successfully, parent exiting
akira@perc01:/data$ rm -rf /data/n3/data/*
akira@perc01:/data$ mongod -f /data/n3/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 21896
child process started successfully, parent exiting
akira@perc01:/data$ ps -C mongod -o pid,args
  PID COMMAND
21776 mongod -f /data/n1/mongod.conf
21859 mongod -f /data/n2/mongod.conf
21896 mongod -f /data/n3/mongod.conf
akira@perc01:/data$ #I'm going to check my important collection is there again
akira@perc01:/data$ mongo ${conn_args}
MongoDB shell version v4.0.10
connecting to: mongodb://localhost:27017/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("e5aa9b27-f26b-4c73-bdc1-bdaf494cf7ab") }
MongoDB server version: 4.0.10
testrs:PRIMARY> use payments
switched to db payments
testrs:PRIMARY> show collections
TheImportantCollection
testrs:PRIMARY> //YES
testrs:PRIMARY> db.TheImportantCollection.count()
174662
testrs:PRIMARY> db.TheImportantCollection.findOne()
{
    "_id" : 0,
    "customer" : {
        "fn" : "Smith",
        "gn" : "Ken",
        "city" : "Georgevill",
        "street1" : "1 Wishful St.",
        "postcode" : "45031"
    },
    "order_ids" : [ ]
}
testrs:PRIMARY> //Yes yes yes ... I live
testrs:PRIMARY>
bye
akira@perc01:/data$ #So the data is back ... but only some time way in the
akira@perc01:/data$ # past. I want to replay up until ...
akira@perc01:/data$ bad_drop_epoch_sec=1562042272
akira@perc01:/data$ #Trick 3: mongorestore always expects a directory name
akira@perc01:/data$ #We don't need any directories, but it's just hard-coded
akira@perc01:/data$ # to expect one. So let's make one. Can be anywhere
akira@perc01:/data$ # Just not a subdirectory under the oplog dump location please, that will confuse it maybe
akira@perc01:/data$ mkdir /tmp/fake_empty_dir
mkdir: cannot create directory '/tmp/fake_empty_dir': File exists
akira@perc01:/data$ #Ah, I got it already.
akira@perc01:/data$ ls /tmp/fake_empty_dir
akira@perc01:/data$ mongorestore ${conn_args} \
> --oplogReplay \
> --oplogFile /data/oplog_dump_full/oplog.rs.bson \
> --oplogLimit ${bad_drop_epoch_sec}:0 \
> --stopOnError /tmp/fake_empty_dir
2019-07-02T14:04:35.742+0900 preparing collections to restore from
2019-07-02T14:04:35.742+0900 replaying oplog
2019-07-02T14:04:38.715+0900 oplog 5.47MB
2019-07-02T14:04:41.715+0900 oplog 11.0MB
2019-07-02T14:04:44.715+0900 oplog 16.6MB
2019-07-02T14:04:47.715+0900 oplog 22.2MB
2019-07-02T14:04:50.715+0900 oplog 27.6MB
2019-07-02T14:04:53.715+0900 oplog 32.8MB
2019-07-02T14:04:56.715+0900 oplog 37.9MB
2019-07-02T14:04:59.715+0900 oplog 43.0MB
2019-07-02T14:05:02.715+0900 oplog 48.3MB
2019-07-02T14:05:05.715+0900 oplog 53.9MB
2019-07-02T14:05:08.715+0900 oplog 59.5MB
2019-07-02T14:05:11.715+0900 oplog 65.1MB
2019-07-02T14:05:14.715+0900 oplog 70.2MB
2019-07-02T14:05:17.715+0900 oplog 75.0MB
2019-07-02T14:05:20.715+0900 oplog 79.6MB
2019-07-02T14:05:23.715+0900 oplog 84.1MB
2019-07-02T14:05:26.715+0900 oplog 88.5MB
2019-07-02T14:05:29.715+0900 oplog 93.0MB
2019-07-02T14:05:32.715+0900 oplog 97.6MB
2019-07-02T14:05:35.715+0900 oplog 101MB
2019-07-02T14:05:38.715+0900 oplog 104MB
2019-07-02T14:05:41.715+0900 oplog 107MB
2019-07-02T14:05:44.715+0900 oplog 110MB
2019-07-02T14:05:47.715+0900 oplog 113MB
2019-07-02T14:05:50.715+0900 oplog 115MB
2019-07-02T14:05:53.715+0900 oplog 118MB
2019-07-02T14:05:56.715+0900 oplog 123MB
2019-07-02T14:05:59.715+0900 oplog 128MB
2019-07-02T14:06:02.715+0900 oplog 133MB
2019-07-02T14:06:05.715+0900 oplog 138MB
2019-07-02T14:06:08.715+0900 oplog 142MB
2019-07-02T14:06:11.715+0900 oplog 146MB
2019-07-02T14:06:14.715+0900 oplog 151MB
2019-07-02T14:06:17.715+0900 oplog 156MB
2019-07-02T14:06:20.715+0900 oplog 161MB
2019-07-02T14:06:23.715+0900 oplog 166MB
2019-07-02T14:06:26.715+0900 oplog 171MB
2019-07-02T14:06:29.715+0900 oplog 176MB
2019-07-02T14:06:32.715+0900 oplog 181MB
2019-07-02T14:06:35.715+0900 oplog 186MB
2019-07-02T14:06:38.715+0900 oplog 192MB
2019-07-02T14:06:41.715+0900 oplog 197MB
2019-07-02T14:06:44.715+0900 oplog 201MB
2019-07-02T14:06:47.715+0900 oplog 204MB
2019-07-02T14:06:50.715+0900 oplog 206MB
2019-07-02T14:06:53.715+0900 oplog 209MB
2019-07-02T14:06:56.715+0900 oplog 211MB
2019-07-02T14:06:59.715+0900 oplog 213MB
2019-07-02T14:07:02.715+0900 oplog 216MB
2019-07-02T14:07:05.715+0900 oplog 218MB
2019-07-02T14:07:08.715+0900 oplog 220MB
2019-07-02T14:07:11.715+0900 oplog 223MB
2019-07-02T14:07:14.715+0900 oplog 225MB
2019-07-02T14:07:17.715+0900 oplog 227MB
2019-07-02T14:07:17.753+0900 oplog 227MB
2019-07-02T14:07:17.753+0900 done
akira@perc01:/data$ #Yay! I hope! Let's check
akira@perc01:/data$ mongo ${conn_args}
MongoDB shell version v4.0.10
connecting to: mongodb://localhost:27017/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("302f2c26-7416-4e18-bd02-1bd67626d062") }
MongoDB server version: 4.0.10
testrs:PRIMARY> use payments
switched to db payments
testrs:PRIMARY> show collections
TheImportantCollection
TheImportantCollectionV2
testrs:PRIMARY> //Yes! both there!
testrs:PRIMARY> db.TheImportantCollection.count()
174662
testrs:PRIMARY> //plus the 'V2' table I was working on when I made my
testrs:PRIMARY> // 'fat thumb' mistake
testrs:PRIMARY> //There we go, a point-in-time restore from a snapshot
testrs:PRIMARY> // backup + a mongorestore --oplogReplay --oplogFile
testrs:PRIMARY> // operation.
testrs:PRIMARY> //Hold on for one last trick (which I didn't have to use today)
testrs:PRIMARY> // Trick #4: ultimate permissions are sometimes needed.
testrs:PRIMARY> // The config.system.sessions and config.transactions(?)
testrs:PRIMARY> // system collections are currently unreplayable (3.6, 4.0,
testrs:PRIMARY> // 4.2 TBD).
testrs:PRIMARY> // They are not the only system collections that you can get
testrs:PRIMARY> // stuck on, because system collections are mostly not covered
testrs:PRIMARY> // by the "backup" and "restore" built-in roles.
testrs:PRIMARY> // E.g. if you are replaying updates to the admin.system.users
testrs:PRIMARY> // collection, that will fail.
testrs:PRIMARY> // But you can allow them if you make a *custom* role that grants
testrs:PRIMARY> // "anyAction" on "anyResource" (see the docs), and grant that
testrs:PRIMARY> // to your backup and restore user; that will make it possible
testrs:PRIMARY> // for those to succeed too.
testrs:PRIMARY> //good night
testrs:PRIMARY>
The oplog of the damaged replica set is your valuable, idempotent history, provided you have a backup recent enough to apply it to.
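Idempotency is what makes this safe: applying the same stretch of oplog once or twice converges to the same state, because inserts are keyed by _id and updates are recorded as the resulting field values. A minimal sketch of this property, using a plain dict as a stand-in collection (the three-op vocabulary mirrors the oplog's "i"/"u"/"d" codes; everything else here is illustrative):

```python
# Miniature "collection" and oplog slice: applying the same idempotent
# ops once or twice must converge to the same final state.
def apply_op(coll, op):
    if op["op"] == "i":            # insert: keyed by _id, so re-applying overwrites
        coll[op["o"]["_id"]] = dict(op["o"])
    elif op["op"] == "u":          # update: the oplog stores the resulting values
        coll.setdefault(op["o2"]["_id"], {}).update(op["o"]["$set"])
    elif op["op"] == "d":          # delete: a second delete is a harmless no-op
        coll.pop(op["o"]["_id"], None)

ops = [
    {"op": "i", "o": {"_id": 0, "fn": "Smith"}},
    {"op": "u", "o2": {"_id": 0}, "o": {"$set": {"city": "Georgevill"}}},
]

once, twice = {}, {}
for op in ops:
    apply_op(once, op)
for op in ops + ops:               # replay the whole slice a second time
    apply_op(twice, op)
assert once == twice               # same end state either way
```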
mongodump connection-args --db local --collection oplog.rs
Add a --query '{"ns": {"$nin": ["config.system.sessions", "config.transactions", "config.transaction_coordinators"]}}' argument to exclude the transaction-related system collections introduced in v3.6 and v4.0 (and maybe present in 4.2+ too) that can't be restored.
Run bsondump oplog.rs.bson | head -n 1 to check that this oplog starts before the time of your last backup.
See the ‘Act 2’ video for the details.
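The same $nin exclusion can be expressed offline, after the fact, against already-dumped oplog documents. A hypothetical Python equivalent of that filter (the document shapes match oplog entries; the function name is mine):

```python
# Offline equivalent of the $nin query above: drop oplog entries that
# touch the unreplayable system collections before replaying anything.
EXCLUDED_NS = {"config.system.sessions", "config.transactions",
               "config.transaction_coordinators"}

def replayable(oplog_docs):
    return [d for d in oplog_docs if d.get("ns") not in EXCLUDED_NS]

docs = [
    {"ns": "payments.TheImportantCollection", "op": "i"},
    {"ns": "config.system.sessions", "op": "i"},
]
assert [d["ns"] for d in replayable(docs)] == ["payments.TheImportantCollection"]
```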
If you’re having the kind of disaster presented in this article, I assume you are already familiar with the mongodump and mongorestore tools and with MongoDB oplog idempotency. Taking that for granted, let’s go to the next level of detail.
applyOps command – Kinda secret; Actually public

In theory you could iterate the oplog documents and write an application that runs an insert command for each “i” op, an update for each “u” op, various different commands for the “c” ops, etc., but the simpler way is to submit them as they are (well, almost exactly as they are) using the applyOps command, and this is what the mongorestore tool does.
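To see why applyOps is the simpler path, here is what the DIY version would have to do: branch on the op type and translate each entry into the matching driver call. A sketch under stated assumptions — the insert_cmd/update_cmd/delete_cmd/run_cmd callables are hypothetical stand-ins for driver operations, not a real API:

```python
# Per-op dispatch a hand-rolled replayer would need; applyOps collapses
# all of this into a single server-side command.
def replay(oplog_doc, insert_cmd, update_cmd, delete_cmd, run_cmd):
    op = oplog_doc["op"]
    if op == "i":
        insert_cmd(oplog_doc["ns"], oplog_doc["o"])
    elif op == "u":                          # "o2" selects the doc, "o" is the change
        update_cmd(oplog_doc["ns"], oplog_doc["o2"], oplog_doc["o"])
    elif op == "d":
        delete_cmd(oplog_doc["ns"], oplog_doc["o"])
    elif op == "c":                          # commands: create, drop, renameCollection...
        run_cmd(oplog_doc["ns"].split(".$cmd")[0], oplog_doc["o"])
    elif op == "n":                          # no-op entries ("initiating set" etc.)
        pass

# The infamous drop from the transcript would dispatch as a "c" op:
calls = []
replay({"op": "c", "ns": "payments.$cmd", "o": {"drop": "TheImportantCollection"}},
       None, None, None, lambda db, cmd: calls.append((db, cmd)))
assert calls == [("payments", {"drop": "TheImportantCollection"})]
```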
The permission to run applyOps is granted to the built-in “restore” role for all non-system collections, and there is no ‘reject if this node is a primary’ rule. So you can make a primary apply oplog docs the same way a secondary does.
N.b. for some system collections, the “restore” role is not enough. See the bottom section for more details.
It might seem a bit strange that users can have this privilege, but without it there would be no convenient way for dump-and-restore tools to guarantee consistency. “Consistency” here means that the restored data will be exactly as it was at one point in time – the end of the dump – and will not contain earlier versions of documents from some midpoint of the dumping process.
Achieving that data consistency is why the --oplog option for mongodump was created, and why mongorestore has the matching --oplogReplay option. (Those two options should be on by default i.m.o., but they are not.) The short oplog span captured during a normal dump will be at <dump_directory>/oplog.bson, but the --oplogFile argument lets you choose any arbitrary path.
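The consistency problem and its fix can be shown in miniature. A toy model, not the actual tools: a "dump" copies documents one at a time while writes keep landing, so the raw copy is torn; replaying the ops captured during the dump window repairs it to the end-of-dump state.

```python
# Toy model of --oplog / --oplogReplay: a dump that iterates documents
# while writes continue produces a torn copy; replaying the oplog
# captured during the dump window makes it consistent again.
live = {1: {"v": 0}, 2: {"v": 0}}
dump, oplog_window = {}, []

def write(doc_id, val):            # a concurrent write, also recorded in the oplog
    live[doc_id]["v"] = val
    oplog_window.append((doc_id, val))

dump[1] = dict(live[1])            # dump copies doc 1 ...
write(1, 99)                       # ... then doc 1 changes: the copy is now torn
dump[2] = dict(live[2])            # dump copies doc 2

assert dump != live                # inconsistent snapshot: doc 1 is stale
for doc_id, val in oplog_window:   # the --oplogReplay equivalent
    dump.setdefault(doc_id, {})["v"] = val
assert dump == live                # consistent as of the end of the dump
```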
--oplogLimit

We could have limited the oplog docs during mongodump to only those before the disaster time with a --query parameter such as the following:
mongodump ... --query '{"ts": {"$lt": new Timestamp(1560915610, 0)}}' ...
But --oplogLimit makes it easier. You can dump everything, and then use --oplogLimit <epoch_sec_value>[:<counter>] when you run mongorestore with the --oplogReplay argument.
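An oplog timestamp is an (epoch second, counter) pair, and entries at or after the limit are excluded. A hypothetical sketch of that cutoff using Python tuple comparison (the comparison semantics here are my illustration of "strictly before the limit", not mongorestore's source):

```python
# Sketch of the --oplogLimit cutoff: keep only ops strictly before
# (epoch_sec, counter), so the bad op at exactly that second is skipped.
def before_limit(ops, limit_t, limit_i=0):
    return [op for op in ops if (op["t"], op["i"]) < (limit_t, limit_i)]

bad_drop_epoch_sec = 1562042272            # from the transcript above
ops = [
    {"t": 1562042271, "i": 1, "op": "u"},  # last good op: replayed
    {"t": 1562042272, "i": 1, "op": "c"},  # the drop itself: excluded
]
kept = before_limit(ops, bad_drop_epoch_sec, 0)
assert [op["t"] for op in kept] == [1562042271]
```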
If you’re getting confused about whether it’s UTC or your server’s timezone – it’s UTC. All timestamps inside MongoDB are UTC when they represent ‘wall clock’ times, and for ‘logical clocks’ a timezone is a non-applicable concept.
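For example, the drop’s epoch second from the transcript decodes to the UTC wall time recorded in the oplog entry, nine hours behind Akira’s JST clock:

```python
from datetime import datetime, timezone, timedelta

bad_drop_epoch_sec = 1562042272          # the "t" of the drop's oplog timestamp
utc = datetime.fromtimestamp(bad_drop_epoch_sec, tz=timezone.utc)
jst = utc.astimezone(timezone(timedelta(hours=9)))     # Akira's local zone

assert utc.isoformat() == "2019-07-02T04:37:52+00:00"  # matches the "wall" field
assert (jst.hour, jst.minute) == (13, 37)              # what Akira saw locally
```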
In the built-in roles documentation, inserted after the usual (and mostly fair) warnings about why you should not grant users the most powerful privileges, comes this extra note that tells you what you actually need to do to allow oplog-replay updates on all system collections too:
If you need access to all actions on all resources, for example to run applyOps commands … create a user-defined role that grants anyAction on anyResource and ensure that only the users who need access to these operations have this access.
Translation: if your oplog replay fails because it hit a system collection update the “restore” role doesn’t cover, upgrade your user to be able to run with all the privileges that a secondary runs oplog replication with.
use admin
db.createRole({
    "role": "CustomAllPowersRole",
    "privileges": [
        { "resource": { "anyResource": true }, "actions": [ "anyAction" ] }
    ],
    "roles": [ ] });
db.grantRolesToUser("<bk_and_restore_username>", [ "CustomAllPowersRole" ])

//For afterwards:
//use admin
//db.revokeRolesFromUser("<bk_and_restore_username>", [ "CustomAllPowersRole" ])
//db.dropRole("CustomAllPowersRole")
As an alternative to granting the role shown above, you could restart the mongod nodes with security disabled; in that mode all operations run without access-control restrictions.
It’s not quite as simple as that, though, because transaction-related internals currently (v3.6, v4.0) throw a spanner in the works. So I’ve found that explicitly excluding config.system.sessions and config.transactions during mongodump is the best way to avoid those updates. They are logically unnecessary in a restore anyway, because the sessions and transactions were finished when the replica set was completely shut down.