Senior Staff Engineer, Data (657692)
Ref No.: 18-11430
Location: Santa Clara, California
Start Date / End Date: 09/24/2018 to 03/22/2019
Description:
We are looking for an experienced senior data engineer with strong software engineering skills to help us build critical data pipelines at massive scale, architect and build our big data warehouse and data mart systems, and develop models that support real-time or near-real-time applications and faster operational and historical analytics.
Responsibilities
Participate in the full development life cycle of Data Warehouse and Data Mart systems
Design and implement ETL frameworks for the Data Warehouse system using technologies such as Hive, Spark, and Java
Design, build, and launch scalable, highly efficient, and reliable data pipelines to move data of all volumes from diverse sources into and out of Data Warehouse and Data Mart systems
Design and implement the data warehouse query engine, and build our proprietary, cutting-edge Hadoop-based data warehouse systems
Gather and document data mart reporting and other requirements to meet business needs
Define and promote best practices and design principles for data warehousing techniques and architecture, and improve data organization and processing accuracy through a data governance framework
Monitor and troubleshoot performance issues on data warehouse and data mart servers
Desired Skills & Experience
5+ years of working experience in data warehouse development and architecture; 4+ years of experience in Big Data, Data Warehouse, or large-scale cloud systems
5+ years of ETL development, operations, and reporting from multiple sources, using appropriate tools
5+ years of advanced SQL skills, including stored procedures, functions, indexes, and views
A track record of crafting, implementing and delivering scalable, performant data pipelines and data services
Expert level software development experience using Java and/or Python
Expertise with dimensional warehouse data models (star, snowflake schemas)
Deep knowledge of and hands-on experience with relational databases (e.g., Greenplum, Oracle, MySQL, PostgreSQL) and database schema design
Extensive experience with Hadoop, MapReduce/YARN, Hive, and HBase
Knowledge of automation and orchestration platforms such as Airflow
BS/MS Degree in Computer Science or related field
Excellent interpersonal and teamwork skills; experience working with overseas teams is a plus