Data Science

Data Engineer

Bengaluru, Karnataka   |   Full-time

Roles & Responsibilities

- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS/Azure regions.
- Create data tools that help our analytics and data science team members build and optimize our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.


Requirements

- 3+ years of experience in a similar role

- Computer Science / Statistics / Information Science degree from a top-tier institute

System Design & Architecture

- Strong ability to design and scale software systems

- Strong command of architectural choices and trade-offs

- Exposure to DevOps tools and security systems will be a huge bonus

Software Development & Debugging

- Strong experience with object-oriented and functional programming/scripting languages: Python, Java, C++, Scala, etc.

- Strong grasp of data structures and algorithms

Data Pipelining

- Strong experience building and optimizing data pipelines, architectures, and data sets

- Strong command of relational SQL and NoSQL databases, including Postgres

- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

Big Data

Strong experience with big data tools and applications, including:

- Tools: Hadoop, Spark, HDFS, etc.

- AWS cloud services: EC2, EMR, RDS, Redshift

- Stream-processing systems: Storm, Spark Streaming, Flink, etc.

- Message queuing systems: RabbitMQ, etc.

Business Acumen

- Extreme sense of ownership, the key to success in this role

- Strong analytical and problem-solving skills

- Experience supporting and working with cross-functional teams in a dynamic environment

Submit Your Application
