A problem we often run into is having to update a set of values in the database dynamically, based on some condition, as in the following code:

    updateInfo := map[string]int64{} // name -> amount to add to that document's count
    for name, count := range updateInfo {
            // one database round trip per map entry
            coll.Update(bson.M{"name": name}, bson.M{"$inc": bson.M{"count": count}})
    }

Written this way, every map entry costs one database round trip, which inevitably puts heavy pressure on the database, generates a lot of network I/O, and drags down the system's processing performance.


In fact, MongoDB 3.2 added the bulkWrite operation, which lets us insert, update, and delete data in batches:

    db.collection.bulkWrite(
       [ <operation 1>, <operation 2>, ... ],
       {
          writeConcern : <document>,
          ordered : <boolean>
       }
    )

The ordered option deserves attention. According to the official documentation, it defaults to true, meaning the operations are executed in order and execution stops at the first error, skipping the remaining operations. If it is set to false, MongoDB may execute the operations in parallel, and an error in one operation has no effect on the rest.
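
To make the difference concrete, here is a minimal sketch using the mgo driver (the session setup, the hypothetical test/demo collection, and the deliberately duplicated _id values are assumptions for illustration): the ordered bulk stops at the duplicate-key error, while the unordered one still reports the error but inserts the remaining document.

    package main

    import (
            "log"

            "gopkg.in/mgo.v2"
            "gopkg.in/mgo.v2/bson"
    )

    func main() {
            sess, err := mgo.Dial("localhost")
            if err != nil {
                    log.Fatal(err)
            }
            defer sess.Close()
            coll := sess.DB("test").C("demo")

            // Ordered (the default): the duplicate _id aborts the batch,
            // so the document with _id 2 is never inserted.
            ordered := coll.Bulk()
            ordered.Insert(bson.M{"_id": 1}, bson.M{"_id": 1}, bson.M{"_id": 2})
            if _, err := ordered.Run(); err != nil {
                    log.Println("ordered bulk stopped early:", err)
            }

            // Unordered: the duplicate _id is still reported as an error,
            // but the document with _id 4 is inserted anyway.
            unordered := coll.Bulk()
            unordered.Unordered()
            unordered.Insert(bson.M{"_id": 3}, bson.M{"_id": 3}, bson.M{"_id": 4})
            if _, err := unordered.Run(); err != nil {
                    log.Println("unordered bulk reported:", err)
            }
    }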


So the code above can be rewritten as follows, and a single round trip now covers what previously took one database operation per entry:

    updateInfo := map[string]int64{} // name -> amount to add to that document's count

    bulk := coll.Bulk()
    for name, count := range updateInfo {
            // queue the update locally; nothing is sent to the server yet
            bulk.Update(bson.M{"name": name}, bson.M{"$inc": bson.M{"count": count}})
    }
    bulk.Unordered()
    // one round trip sends all queued updates
    if _, err := bulk.Run(); err != nil {
            // handle the bulk error, e.g. log it
    }

Of course, the operations inside a bulk write can also be insert, update, upsert, and delete commands, as the shell signature below shows; a Go sketch combining them follows it. Do your own research if you need the details.

    db.collection.bulkWrite(
       [
          { insertOne : <document> },
          { updateOne : <document> },
          { updateMany : <document> },
          { replaceOne : <document> },
          { deleteOne : <document> },
          { deleteMany : <document> }
       ],
       { ordered : false }
    )
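
As a rough sketch of how that maps onto the mgo Bulk API (coll is assumed to be an *mgo.Collection, log is assumed to be imported, and the names and field values here are made up for illustration), several kinds of operations can be queued into the same bulk and sent in one Run:

    bulk := coll.Bulk()
    bulk.Insert(bson.M{"name": "alice", "count": 1})                         // insert a document
    bulk.Update(bson.M{"name": "bob"}, bson.M{"$inc": bson.M{"count": 1}})   // update one matching document
    bulk.Upsert(bson.M{"name": "carol"}, bson.M{"$set": bson.M{"count": 0}}) // update, inserting if missing
    bulk.Remove(bson.M{"name": "dave"})                                      // delete one matching document
    bulk.Unordered()
    if result, err := bulk.Run(); err != nil {
            // a bulk error may wrap several per-operation errors
            log.Println("bulk write failed:", err)
    } else {
            log.Printf("matched=%d modified=%d", result.Matched, result.Modified)
    }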

For the full details, see the official MongoDB documentation for db.collection.bulkWrite.

