MySQL DBAs know that integrating MySQL and a big data solution can be challenging. That’s why I invite you to join me this Wednesday (Oct. 2) at 10 a.m. Pacific time for a free webinar in which I’ll walk you through how to implement a successful big data strategy with Apache Hadoop and MySQL. This webinar is tailored for MySQL DBAs and developers (or anyone with prior MySQL experience) who want to learn how to use Apache Hadoop together with MySQL for Big Data.
The webinar is titled, “Implementing MySQL and Hadoop for Big Data,” and you can register here.
Storing Big Data in MySQL alone can be challenging:
- A single MySQL instance may not scale enough to store hundreds of terabytes, or even a petabyte, of data.
- “Sharding” MySQL is a common approach; however, it can be hard to implement.
- Indexes for terabytes of data may be a problem (updating an index of that size can slow down inserts significantly).
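To make the sharding point concrete, here is a minimal sketch of hash-based shard routing. The shard names and the modulo scheme are my own illustration, not a specific product feature:

```python
# Minimal sketch of hash-based sharding: route each user's rows to one
# of N MySQL shards. Shard names and the modulo scheme are illustrative.
import hashlib

SHARDS = ["mysql-shard-0", "mysql-shard-1", "mysql-shard-2", "mysql-shard-3"]

def shard_for(user_id: int) -> str:
    """Pick a shard deterministically from the user id."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query for a given user must go to the same shard; the hard parts
# of sharding (cross-shard JOINs, re-sharding as data grows) start here.
print(shard_for(42))
```

The routing itself is a few lines; the difficulty the bullet above refers to is everything around it, such as cross-shard queries and rebalancing.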
Apache Hadoop together with MySQL can solve many big data challenges. In the webinar I will present:
- An introduction to Apache Hadoop and its components, including HDFS, MapReduce, Hive/Impala, Flume, and Sqoop
- Common applications for Apache Hadoop
- How to integrate Hadoop and MySQL using Sqoop and the MySQL Applier for Hadoop
- Statistical analysis of clickstream logs and other examples of big data implementations
- ETL and ELT processes with Hadoop and MySQL
- A Star Schema implementation example for Hadoop
- Star Schema Benchmark results with Cloudera Impala and columnar storage
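As a taste of the clickstream analysis topic, here is a toy map/reduce pass over log lines, counting hits per URL. The log format and field positions are assumed for illustration; a real job would run the same two phases distributed across HDFS blocks:

```python
# Toy map/reduce over clickstream log lines: count hits per URL.
# Log format (timestamp, time, url, user) is assumed for illustration.
from collections import defaultdict

log_lines = [
    "2013-09-30 10:00:01 /products/1 user_a",
    "2013-09-30 10:00:02 /products/1 user_b",
    "2013-09-30 10:00:03 /checkout user_a",
]

# Map phase: emit (url, 1) pairs.
pairs = [(line.split()[2], 1) for line in log_lines]

# Reduce phase: sum the counts per key.
counts = defaultdict(int)
for url, n in pairs:
    counts[url] += n

print(dict(counts))  # {'/products/1': 2, '/checkout': 1}
```

The same map-then-reduce shape is what Hadoop parallelizes across a cluster, which is what makes it a fit for logs too large for a single MySQL instance.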
I look forward to the webinar and hope to see you there! If you have questions in advance, please ask them in the comments below.