Data Plus Math is a media measurement company that helps connect advertising exposures to real-world outcomes. Powered by cross-screen viewing data from millions of households, the company’s TV and video attribution platform is used by cable operators, national programming networks, agencies, and marketers to measure which components of their advertising campaigns are driving results. We work with some of the largest media and entertainment companies, agencies, and brands in the world to power the next generation of analytics and measurement for all of TV and video.
We are looking for a savvy Big Data Engineer to join our world-class team of analytics experts. The Big Data Engineer will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced engineer with a passion for data wrangling who wants to work on optimizing data systems and building them from the ground up. The Big Data Engineer will work with our database architects, data analysts, and data scientists to ensure that optimal data delivery architecture is consistent across ongoing projects. The right candidate will be excited by the prospect of building out, and potentially helping re-design, our company’s data architecture to support our next generation of products and data initiatives.
Responsibilities
- Create and maintain optimal data pipeline architecture.
- Build the infrastructure required for optimal ingestion, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Create data tools for the analytics and data science teams that assist them in building and optimizing our product into an innovative industry leader.
- Contribute to and attend sprint retrospectives to help the team refine its processes and approaches.
- Document solutions by producing well-documented, well-structured code and UML diagrams.
- Follow good object-oriented design practices and SOLID design principles.
Qualifications and Skills
- 2+ years of experience in a Data Engineering role
- Bachelor’s degree or higher, preferably with a concentration in a computational field such as Computer Science, Mathematics, Statistics, Physics, or Engineering.
- Advanced SQL knowledge, including experience authoring queries and working with a variety of relational databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and/or NoSQL databases, such as Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
- Strong analytic skills related to working with unstructured datasets.
- Hands-on experience with Git (version control) and UML diagramming.
- A desire to solve business problems with technology.
- Great communication skills, and the ability to influence stakeholders.
- Strong interpersonal skills and exceptional character
- Interest, willingness, and demonstrated ability to pick up new technologies quickly.
- A self-starter who brings energy, passion, and creativity to work every day.