Nowadays there is a lot of buzz around big data projects and large deployments of both analytical and operational datasets and applications. In such a diverse, variable, and voluminous environment, one can easily get lost in the number of options and choices for tackling a particular use case. More often than not, the solution lies in using the right set of tools rather than the traditional one-size-fits-all approach.
This talk is about how MongoDB and Hadoop can be put to work together on very challenging and demanding use cases such as lambda architectures, combined operational and analytical workloads, or even deployments that require both realtime immediate access and long-term raw archiving.
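As a rough illustration of the lambda-architecture pattern mentioned above (not code from the talk itself), a serving layer typically merges a precomputed batch view (e.g. built with Hadoop) with a realtime speed view (e.g. kept in MongoDB). The sketch below is a minimal, hypothetical version of that merge step; all names are illustrative.

```python
# Minimal sketch of the lambda-architecture serving step:
# batch_view holds precomputed totals (batch layer, e.g. Hadoop jobs),
# speed_view holds recent increments (speed layer, e.g. MongoDB writes).
# All names are hypothetical, for illustration only.

def merge_views(batch_view, speed_view):
    """Combine batch totals with realtime deltas, per key."""
    merged = dict(batch_view)
    for key, delta in speed_view.items():
        merged[key] = merged.get(key, 0) + delta
    return merged

batch_view = {"pageviews:home": 10_000, "pageviews:about": 1_200}
speed_view = {"pageviews:home": 42, "pageviews:contact": 3}

print(merge_views(batch_view, speed_view))
```

A query against the serving layer thus sees batch accuracy plus realtime freshness, which is exactly the kind of trade-off the talk's use cases revolve around.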
The talk consists of three main topics:
Attendees will take home a set of real-life experiences, a small demo they can practice themselves to better understand where the two technologies might be of interest to them, and some ideas for exploring extended usage of MongoDB with the full Hadoop stack (Spark, YARN, HDFS, Hive, and Pig).
This talk is primarily oriented toward development and ops teams, with a short detour into use cases that may interest business developers and architects. If you work with large datasets and operational databases, this talk is for you.
Norberto Leite is a Technical Evangelist at MongoDB. He has spent the last five years working on large, scalable, and distributed application environments, both as an advisor and as an engineer. Prior to MongoDB, Norberto served as a Big Data Engineer at Telefonica.