Whether driven by a new feature, a bug, or archival requirements, it is often necessary to update or remove large numbers of documents in production.
The challenge with this type of operation is not only designing a process that is efficient query-wise, but also executing it in production without overloading the servers or causing secondaries to lag.
There are strategies that can be used to create highly controlled write processes that could run for days under the radar, getting the job done without greatly impacting your application's performance.
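One such strategy is to split the work into small batches and pause between them so replication can keep up. The sketch below is a minimal, hypothetical illustration of that idea, not the speaker's actual process: `fetch_batch` and `delete_batch` are placeholder callbacks you would back with real queries (e.g. a `find` projecting `_id` and a `delete_many` in PyMongo), and the batch size and pause are assumed tuning knobs.

```python
import time

def batched_delete(fetch_batch, delete_batch, batch_size=1000, pause_s=0.5):
    """Delete documents in small, throttled batches.

    fetch_batch(n)    -> up to n _ids still matching the criteria (empty when done)
    delete_batch(ids) -> performs the actual delete for those _ids
    Returns the total number of documents deleted.
    """
    total = 0
    while True:
        ids = fetch_batch(batch_size)
        if not ids:
            break  # nothing left to delete
        delete_batch(ids)
        total += len(ids)
        # Throttle: a short pause between batches keeps the write load low
        # and gives secondaries a chance to catch up.
        time.sleep(pause_s)
    return total
```

Keeping each batch small keeps individual operations short, and the pause between batches is what lets a job like this run for days without noticeably impacting the application.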
In this session, I'm going to share with you key points to consider when creating massive write operations in MongoDB, examples of real-life processes executed, and a few lessons learned.
Gabriel has been dedicated to databases as a DBA and consultant for the last 12 years. He has led and participated in multiple projects across many technologies, including Oracle, MySQL, SQL Server, and MongoDB. Gabriel defines himself as an automation super fan; he has contributed to the development of two custom DBaaS platforms.
Gabriel holds a college degree in electronics, a degree in industrial engineering, and he is currently working on his master's thesis (Information Systems Engineering). He is also a GCP, Oracle, and Microsoft certified professional. Currently, he is an Internal Principal Consultant at Pythian specializing in MySQL and MongoDB.