Does the cursor need to be closed explicitly?
Querying data from Mongo, implementation 1:
MongoCursor<Document> cursor = collection.find().limit(limit).iterator();
List<Document> documentList = new ArrayList<>();
try {
    while (cursor.hasNext()) {
        Document document = cursor.next();
        documentList.add(document);
    }
} finally {
    cursor.close();
}
return documentList;
Implementation 2:
return newArrayList(collection.find().limit(limit));
Implementation 2 is shorter and more convenient, but is it a problem that it never explicitly closes the cursor?
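For context, implementation 2 relies on Guava's Lists.newArrayList(Iterable), which iterates the iterable to the end while copying its elements into a list, so the underlying cursor ends up exhausted. A rough, simplified sketch of the behaviour it relies on (not Guava's actual source):

import java.util.ArrayList;
import java.util.List;

class NewArrayListSketch {
    // Roughly what Lists.newArrayList(Iterable) does: pull every element,
    // which drives the MongoCursor behind the FindIterable to exhaustion.
    static <T> List<T> copyOf(Iterable<T> iterable) {
        List<T> result = new ArrayList<>();
        for (T element : iterable) {
            result.add(element);
        }
        return result;
    }
}

Whether that is safe without an explicit close is exactly the question answered below.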
Some conclusions
- If the cursor has already been fully iterated (exhausted), it is closed automatically and no explicit close is needed, so implementation 2 is fine.
By default, the server will automatically close the cursor after 10
minutes of inactivity, or if client has exhausted the cursor.
- If you only iterate over part of the results, you need to close the cursor explicitly:
MongoCursor<Document> mongoCursor = coll.find().sort(ascending("_id")).iterator();
Document doc1 = mongoCursor.next();
// ...
mongoCursor.close();
- If you run other business logic while iterating, wrap the loop in try/finally (or try-with-resources) and close the cursor in the finally block, so that an exception thrown mid-iteration does not leave the cursor open and leak it; see the sketch after this list.
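A minimal sketch of that pattern, assuming the Java sync driver: since MongoCursor implements Closeable, try-with-resources closes it whether the loop finishes, stops early, or throws. The process method and readSome wrapper are placeholders for illustration, not part of the original code.

import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import org.bson.Document;

class CursorSafetyExample {

    // Placeholder for whatever business logic runs per document.
    private void process(Document doc) {
        System.out.println(doc.toJson());
    }

    void readSome(MongoCollection<Document> collection, int limit) {
        // try-with-resources: the cursor is closed even if process() throws
        // or we stop before the cursor is exhausted.
        try (MongoCursor<Document> cursor = collection.find().limit(limit).iterator()) {
            while (cursor.hasNext()) {
                process(cursor.next());
            }
        }
    }
}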
The cost of leaking cursors
Leaving a "cursor" open is like leaving an open connection that never gets re-used. These things are not free. In fact the standard connection cost is 1MB (approx). So if you are leaving a lot of "partially iterated" cursors hanging around there is a general overhead in terms of an active connection and it's memory usage.
https://stackoverflow.com/que...
Fetching everything at once vs. iterating one by one
Fetch everything at once into a list:
return newArrayList(collection.find().limit(500));
All 500 objects are loaded into memory at once.
Iterate one by one:
List<Document> documentList = new ArrayList<>();
MongoCursor<Document> cursor = collection.find().limit(500).iterator();
try {
    while (cursor.hasNext()) {
        Document document = cursor.next();
        // business logic here ...
        documentList.add(document);
    }
} finally {
    cursor.close();
}
Under the hood, Mongo fetches results in batches: the first batch returns 101 documents, and the next fetch returns the remaining 399. So compared with materializing everything at once, the driver buffers at most 399 documents at a time.
find() and aggregate() operations have an initial batch size of 101
documents by default. Subsequent getMore operations issued against the
resulting cursor have no default batch size, so they are limited only
by the 16 megabyte message size.
If a 399-document batch still feels too large, you can set the batch size explicitly, for example 100 documents per batch, as shown below:
MongoCursor<Document> mongoCursor = coll.find().limit(500).batchSize(100).iterator();
But this increases the number of network round trips: with the defaults it takes two fetches, with batchSize(100) it takes five.
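To make the round-trip count concrete, here is a sketch of how those 500 documents arrive when iterating with batchSize(100); the comments describe the fetches implied by the batching behaviour quoted above, they are not produced by any extra API.

import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import org.bson.Document;

class BatchSizeExample {
    void iterateInBatches(MongoCollection<Document> collection) {
        // find().limit(500).batchSize(100):
        //   fetch 1: the initial find command returns documents 1..100
        //   fetches 2-5: each getMore returns the next 100 documents
        // With the defaults the same 500 documents arrive in only two fetches:
        // 101 from the initial find, the remaining 399 from a single getMore.
        try (MongoCursor<Document> cursor =
                     collection.find().limit(500).batchSize(100).iterator()) {
            while (cursor.hasNext()) {
                // next() triggers a getMore only when the current batch is used up
                Document doc = cursor.next();
                System.out.println(doc.toJson());
            }
        }
    }
}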