mongodb - More granular limits in Mongo aggregate calls? -
I have a query that I run on Mongo. It looks like this:
db.getCollection('termstatistics').aggregate([
  { $match: { "kind": 0 } },
  { $group: { "_id": "$termid", "count": { $sum: NumberLong(1) } } },
  { $sort: { "count": -1 } },
  { $limit: 100 }
])
This gets the count of each termid and gives me the top 100 of them. However, I have to run the same query for each value of kind, of which there are several, because the user wants the top 100 for each category. I'd like to avoid going to Mongo several times for what is conceptually a single query (the actual filter involves user-specific filtering, so it's not reasonable to cache the results).
Is it possible to combine these into a single aggregation call? Some kind of "limit per value of kind", or something like that?
Edit: here are some sample documents. They're a little dumb, but it's hard to post thousands of documents and keep them interesting... Suppose I've got n copies of this (with varying _ids) for each n between 1 and 1000:
{ kind: 0, termid: n }
plus n copies of this for each n between 1 and 200:
{ kind: 1, termid: n }
What I want is the top 100 termids for each kind. For kind: 0 that's
[{ _id: 1000, count: 1000 }, ..., { _id: 901, count: 901 }]
and for kind: 1 it's
[{ _id: 200, count: 200 }, ..., { _id: 101, count: 101 }].
This is easy if I make 2 aggregate calls (see above). It would be nice to do it in 1 aggregate call and get something like the following:
[
  { kind: 0, data: [{ _id: 1000, count: 1000 }, ..., { _id: 901, count: 901 }] },
  { kind: 1, data: [{ _id: 200, count: 200 }, ..., { _id: 101, count: 101 }] }
]
If I simply raise the limit to 200, I still won't get any kind: 1 results, because there are enough more-common termids in kind: 0 to fill the whole list. So I'd need some other kind of limit, or a clever use of the existing one.
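To pin down exactly what I mean, here is a plain-JavaScript sketch of the result I'm after (this is not a Mongo query; topPerKind is a hypothetical helper name): count documents per (kind, termid) pair, then keep only the topN highest counts within each kind.

```javascript
// Plain-JS model of "top N termids per kind" (illustration only).
function topPerKind(docs, topN) {
  // Count occurrences of each (kind, termid) pair.
  const counts = new Map(); // "kind|termid" -> count
  for (const d of docs) {
    const key = `${d.kind}|${d.termid}`;
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  // Regroup the pair counts by kind.
  const byKind = new Map(); // kind -> [{ _id: termid, count }]
  for (const [key, count] of counts) {
    const [kind, termid] = key.split("|").map(Number);
    if (!byKind.has(kind)) byKind.set(kind, []);
    byKind.get(kind).push({ _id: termid, count });
  }
  // Sort each kind's list by count descending and truncate to topN.
  return [...byKind.entries()].map(([kind, data]) => ({
    kind,
    data: data.sort((a, b) => b.count - a.count).slice(0, topN),
  }));
}
```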
I hope that's clearer!
Not 100% sure what you're trying to do, but try grouping on the "kind" field and the term together, instead of using $match or $filter first.
Something like:
db.getCollection('termstatistics').aggregate([
  { $group: { "_id": { "kind": "$kind", "term": "$termid" }, "count": { $sum: NumberLong(1) } } },
  { $sort: { "count": -1 } },
  { $limit: 100 }
])
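Note that the single $limit above still caps the combined result at 100 rows across all kinds, which runs into exactly the crowding-out problem described in the question. One way to get a per-kind limit in a single call (a sketch, assuming MongoDB 3.2+ for the $slice expression; NumberLong(1) is replaced by a plain 1 so the pipeline object is also valid outside the shell) is to regroup by kind after sorting, $push the per-term documents, and $slice each array to 100:

```javascript
// Sketch: top 100 termids per kind in one aggregate call.
const TOP_N = 100;
const pipeline = [
  // Count occurrences of each (kind, termid) pair.
  { $group: { _id: { kind: "$kind", termid: "$termid" },
              count: { $sum: 1 } } },
  // Sort all pairs by count so each kind's array is pushed in order.
  { $sort: { count: -1 } },
  // Collect the per-term counts into one array per kind.
  { $group: { _id: "$_id.kind",
              data: { $push: { _id: "$_id.termid", count: "$count" } } } },
  // Keep only the first (i.e. highest-count) TOP_N entries per kind.
  { $project: { _id: 0, kind: "$_id", data: { $slice: ["$data", TOP_N] } } }
];
// In the shell: db.getCollection('termstatistics').aggregate(pipeline)
```

One caveat: $push accumulates the full per-kind array before $slice truncates it, so with very many distinct termids per kind this can approach the 16 MB document limit. On MongoDB 5.2+, the $topN accumulator can express the same per-group limit without building the full array.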