
Issue #38: processFullCollection() is triggered #45

Closed · xma wants to merge 1 commit

Conversation

@xma (Contributor) commented Dec 14, 2012

Now it triggers processFullCollection().
See #38.

@xma (Contributor, Author) commented Dec 17, 2012

I'm using a replica set - it's a requirement for this river.

xavier@[~]$ mongo
MongoDB shell version: 2.2.0
connecting to: test
mbxm1:PRIMARY> rs.conf()
{
    "_id" : "mbxm1",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:27017"
        }
    ]
}
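
For context, a single-member replica set like this can be initiated from the shell with a config document matching the rs.conf() output above:

# start mongod with --replSet mbxm1, then initiate the one-member set
mongo --eval 'rs.initiate({ _id: "mbxm1", members: [{ _id: 0, host: "localhost:27017" }] })'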

When I ran Elasticsearch with an older release of the plugin (Elasticsearch data folder removed) and ran a shell script to create the rivers, as sketched below:
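A minimal sketch of the kind of request such a script sends, assuming the plugin's _river settings format of that era; the MongoDB collection name is a placeholder, while the other values match the first log line:

# hypothetical river registration for the sam_media database
curl -XPUT 'http://localhost:9200/_river/sam_media/_meta' -d '{
  "type": "mongodb",
  "mongodb": {
    "db": "sam_media",
    "collection": "medias",
    "gridfs": false,
    "filter": "sam_dev"
  },
  "index": {
    "name": "topic_medias"
  }
}'

The resulting startup log: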

[2012-12-17 14:52:56,726][INFO ][river.mongodb ] [Miss America] [mongodb][sam_media] starting mongodb stream: host [127.0.0.1], port [27017], gridfs [false], filter [sam_dev], db [sam_media], indexing to [topic_medias]/[{}]
[2012-12-17 14:52:56,765][INFO ][cluster.metadata ] [Miss America] [sam_posts] creating index, cause [api], shards [5]/[1], mappings []
[2012-12-17 14:52:56,993][INFO ][cluster.metadata ] [Miss America] [sam_posts] create_mapping [topic_posts]
[2012-12-17 14:52:57,036][INFO ][cluster.metadata ] [Miss America] [_river] update_mapping sam_posts
[2012-12-17 14:52:57,108][INFO ][river.mongodb ] [Miss America] [mongodb][sam_posts] starting mongodb stream: host [127.0.0.1], port [27017], gridfs [false], filter [sam_dev], db [sam_posts], indexing to [topic_posts]/[{}]
[2012-12-17 14:52:57,132][INFO ][river.mongodb ] [Miss America] [mongodb][sam_media] No known previous slurping time for this collection
[2012-12-17 14:52:57,145][INFO ][cluster.metadata ] [Miss America] [sam_syncs] creating index, cause [api], shards [5]/[1], mappings []
[2012-12-17 14:52:57,376][INFO ][cluster.metadata ] [Miss America] [_river] update_mapping sam_media
[2012-12-17 14:52:57,394][INFO ][river.mongodb ] [Miss America] [mongodb][sam_posts] No known previous slurping time for this collection
[2012-12-17 14:52:57,534][INFO ][cluster.metadata ] [Miss America] [sam_syncs] create_mapping [topic_syncs]
[2012-12-17 14:52:57,547][INFO ][cluster.metadata ] [Miss America] [_river] update_mapping sam_posts
[2012-12-17 14:52:57,563][INFO ][cluster.metadata ] [Miss America] [_river] update_mapping sam_syncs
[2012-12-17 14:52:57,570][INFO ][river.mongodb ] [Miss America] [mongodb][sam_syncs] starting mongodb stream: host [127.0.0.1], port [27017], gridfs [false], filter [sam_dev], db [sam_syncs], indexing to [topic_syncs]/[{}]

At this point the documents are indexed in ES.

With the current river, the same operations result in 0 documents.

Regards,

-xavier

@richardwilly98 (Owner) commented

Hi,

Could you please send me your river settings and an export of your oplog.rs collection (if the data is not confidential), or a small JS script to import some data into MongoDB?

Thanks,
Richard.
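
For illustration, the oplog export and a small import script could look like this (database and collection names below are placeholders):

# export the replica-set oplog (it lives in the "local" database)
mongoexport --db local --collection oplog.rs --out oplog.rs.json

# or insert a few sample documents for the river to pick up
mongo sam_posts --eval 'for (var i = 0; i < 10; i++) { db.posts.insert({ title: "post " + i }); }'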

@xma (Contributor, Author) commented Dec 17, 2012

Thanks a lot for your help. I've sent you an email.

Regards,

-xavier

@xma mentioned this pull request on Dec 18, 2012