Data Transfer from Hadoop to MongoDB

Hello All,

I have data sitting in HDFS in the form of Hive tables and I need to load that on a daily basis (delta load) to MongoDB.

What languages/setup/jobs/techniques can I use to achieve this reliably? Any help is highly appreciated.

Thanks,
Ravi
Ravinosql asked:

David (President) commented:
Well, MongoDB isn't SQL. It has no tables, nor really a schema either. So the good news is you can easily put anything you want into it, because MongoDB doesn't care about such things. Dump all the Hive tables as text and stick the date in as one of the values.
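A minimal sketch of that dump-and-load approach with pymongo, assuming the Hive table has already been exported as a tab-delimited text file (the file path, database, collection, and column names below are hypothetical):

# Load a tab-delimited Hive export into MongoDB, tagging each document
# with a load date so daily deltas can be told apart.
# Hypothetical names: adjust the export path, database, and column list.
import csv
from datetime import date

from pymongo import MongoClient

EXPORT_FILE = "/data/exports/orders.tsv"          # e.g. produced by INSERT OVERWRITE DIRECTORY
COLUMNS = ["order_id", "customer_id", "amount"]   # must match the Hive table's column order

client = MongoClient("mongodb://localhost:27017")
collection = client["warehouse"]["orders"]

batch = []
with open(EXPORT_FILE) as f:
    for row in csv.reader(f, delimiter="\t"):
        doc = dict(zip(COLUMNS, row))
        doc["load_date"] = date.today().isoformat()  # the date value suggested above
        batch.append(doc)
        if len(batch) == 1000:                       # insert in batches to limit round trips
            collection.insert_many(batch)
            batch = []
if batch:
    collection.insert_many(batch)

Note that everything lands as strings here; casting numeric columns (or exporting to a typed format) would preserve their types in Mongo.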

Or you could just use this MongoDB plugin and let Hadoop do this for you:

http://docs.mongodb.org/ecosystem/tools/hadoop/
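With that connector in place, the transfer can also be expressed as plain Hive statements: an external Hive table backed by the connector's MongoStorageHandler writes straight into a MongoDB collection. A sketch, driven here through PyHive against HiveServer2 (host, database, table, and column names are hypothetical, and the mongo-hadoop Hive jars are assumed to already be registered with Hive):

# Push one day's partition from a Hive table into MongoDB through the
# mongo-hadoop connector's Hive storage handler. Hypothetical host,
# database, table, and column names; the mongo-hadoop Hive jars must
# already be on Hive's classpath (ADD JAR ...).
from datetime import date, timedelta

from pyhive import hive

conn = hive.connect(host="hiveserver2.example.com", port=10000, database="warehouse")
cur = conn.cursor()

# External table whose storage handler writes rows into a MongoDB collection.
cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS orders_mongo (
        order_id STRING, customer_id STRING, amount DOUBLE, load_date STRING)
    STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
    TBLPROPERTIES ('mongo.uri' = 'mongodb://mongo.example.com:27017/warehouse.orders')
""")

# Daily delta: copy only yesterday's partition into the Mongo-backed table.
ds = (date.today() - timedelta(days=1)).isoformat()
cur.execute(
    "INSERT INTO TABLE orders_mongo "
    "SELECT order_id, customer_id, amount, ds FROM orders WHERE ds = %(ds)s",
    {"ds": ds},
)

The same two statements can also be run straight from the Hive CLI or Beeline if no Python driver is wanted.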
Ravinosql (Author) commented:
Thank you for the response! So the data can be dumped from the Hive tables to Mongo without staging in between, correct? Do you know if any jobs should be scheduled for this to happen on a daily basis? Thanks!
btan (Exec Consultant) commented:
There would typically be some form of staging, as stated here:
Map-Reduce jobs are used to extract, transform and load data from one store to another. Hadoop can act as a complex ETL mechanism to migrate data in various forms via one or more MapReduce jobs that pull the data from one store, apply multiple transformations (applying new data layouts or other aggregation) and load the data into another store. This approach can be used to move data from or to MongoDB, depending on the desired result.
http://docs.mongodb.org/ecosystem/use-cases/hadoop/

Some use cases via Hive queries or Spark:
I started with a simple example of taking 1 minute time series intervals of stock prices with the opening (first) price, high (max), low (min), and closing (last) price of each time interval and turning them into 5 minute intervals (called OHLC bars). The 1-minute data is stored in MongoDB and is then processed in Hive or Spark via the MongoDB Hadoop Connector, which allows MongoDB to be an input or output to/from Hadoop and Spark.
https://www.mongodb.com/blog/post/using-mongodb-hadoop-spark-part-2-hive-example?jmp=docs
http://www.mongodb.com/blog/post/using-mongodb-hadoop-spark-part-3-spark-example-key-takeaways
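A condensed sketch of that Hive-query-via-Spark pattern, using the pymongo_spark helper that ships with the mongo-hadoop connector (Spark 1.x HiveContext style; the table, columns, and URI are hypothetical, and the open/close columns from the post are left out for brevity):

# Roll 1-minute bars stored in Hive up to 5-minute high/low bars and write
# the result to MongoDB via pymongo_spark (from the mongo-hadoop connector).
# Submit with the mongo-hadoop Spark jar on --jars and pymongo_spark on
# --py-files; table, column, and URI names are hypothetical.
from pyspark import SparkContext
from pyspark.sql import HiveContext

import pymongo_spark

pymongo_spark.activate()  # adds saveToMongoDB() to RDDs

sc = SparkContext(appName="minute-bars-to-mongo")
hc = HiveContext(sc)

five_min = hc.sql("""
    SELECT symbol,
           floor(unix_timestamp(ts) / 300) * 300 AS bucket,
           min(low)  AS low,
           max(high) AS high
    FROM minute_bars
    GROUP BY symbol, floor(unix_timestamp(ts) / 300) * 300
""")

# Convert Rows to plain dicts and let the connector handle the inserts.
five_min.rdd.map(lambda r: r.asDict()).saveToMongoDB(
    "mongodb://mongo.example.com:27017/marketdata.five_minute_bars")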

Ravinosql (Author) commented:
Hello all... I proposed the Mongo connector option, but it was not well received, as there are concerns over the overhead the connector would put on Mongo's performance.

Is anyone familiar with a solution using the Hive metastore, Spark, or Spring Batch for this use case? Please help. Thanks!!
btan (Exec Consultant) commented:
The connector is what has mostly been used; some have shared attempts to fetch the data from Hive without running HiveServer (which exposes a Thrift service), which can probably save some overhead. MongoDB not being a standard relational DB does limit the alternative, tested means for such a transfer. The connector still fares better, though Spark can be tried; it is new and not as often recommended as a first option.

Do see this
I saw the appeal of Spark from my first introduction. It was pretty easy to use. It is also especially nice in that it has operations that run on all elements in a list or a matrix of data. .....
The downside is that it certainly is new and I seemed to run into a non-trivial bug (SPARK-5361, now fixed in 1.2.2+) that prevented me from writing from pyspark to a Hadoop file (writing to Hadoop & MongoDB in Java & Scala should work). Also I found it hard to visualize the data as I was manipulating it. It reminded me of my college days being frustrated debugging matrices.
Probably more important is that, once you analyze data in Hadoop, the work of reporting and operationalizing the results often needs to be done. The MongoDB Hadoop Connector makes it easy to process results and put them into MongoDB, for blazing fast reporting and querying with all the benefits of an operational database. .....
Overall, the benefit of the MongoDB Hadoop Connector is combining the benefits of highly parallel analysis in Hadoop with low latency, rich querying for operational purposes from MongoDB, and allowing technology teams to focus on data analysis rather than integration.
http://www.mongodb.com/blog/post/using-mongodb-hadoop-spark-part-3-spark-example-key-takeaways
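If the connector's load on mongod remains the sticking point, a connector-free variant is to keep Spark (with Hive support) for reading the daily partition and do the writes from each Spark partition with plain pymongo, which gives direct control over batch size and write concern. A sketch under those assumptions (Spark 2.x style, hypothetical names; pymongo must be installed on the executors), suitable for scheduling daily from cron or an Oozie coordinator:

# Daily delta load without the connector: read yesterday's Hive partition
# with Spark, then bulk-insert from each Spark partition using pymongo.
# Hypothetical table, column, and host names; pymongo is assumed to be
# available on every executor.
from datetime import date, timedelta

from pymongo import MongoClient
from pyspark.sql import SparkSession

MONGO_URI = "mongodb://mongo.example.com:27017"

def write_partition(rows):
    client = MongoClient(MONGO_URI)              # one client per Spark partition
    coll = client["warehouse"]["orders"]
    batch = []
    for row in rows:
        batch.append(row.asDict())
        if len(batch) == 1000:                   # keep insert batches small and unordered
            coll.insert_many(batch, ordered=False)
            batch = []
    if batch:
        coll.insert_many(batch, ordered=False)
    client.close()

spark = (SparkSession.builder
         .appName("hive-delta-to-mongo")
         .enableHiveSupport()
         .getOrCreate())

ds = (date.today() - timedelta(days=1)).isoformat()
delta = spark.sql("SELECT * FROM warehouse.orders WHERE ds = '{}'".format(ds))
delta.rdd.foreachPartition(write_partition)

Batch size and write concern can then be tuned on the pymongo side without touching the connector at all.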