SnappPay, launched in 2020, is the first and leading BNPL provider in Iran. We leverage financial technology to reshape Iranians’ experience of consumer credit. Supporting Snapp Group’s mission of enhancing Iranian people’s quality of life through Internet services, SnappPay’s mission is to bring financial freedom to all Iranians by providing them with better, smarter, and more efficient solutions for payment and shopping.
About the Team
As a Data Engineer on the Data team, you will work with large-scale data pipelines and data sets that are critical and foundational to SnappPay’s decision-making and customer experience. The team’s mission is to ensure high quality across all critical data flows, work with large-scale analytics data from SnappPay’s multiple business lines, and build software systems, data models, and data pipelines optimized for faster and more accurate analysis.
About the Role
We are looking for a Data Engineer (Mid-Level or Senior) to join the Data team at SnappPay. The role contributes to a variety of exciting projects and involves collaborating with team members to develop and maintain data tools and solutions (e.g., pipelines, models) to acquire, process, and store data. It also involves designing and developing large-scale data systems (e.g., databases, data warehouses, big data systems), platforms, and infrastructure for various analytics and business applications.
At SnappPay, you will have many opportunities to work with large data sets and build essential tools that transform data into insightful information, empowering the company to make data-driven decisions.
Responsibilities
- Develop and automate large-scale, high-performance, scalable data pipelines (batch and streaming) to drive faster analytics
- Design new data architectures with excellent run-time characteristics such as low latency, fault tolerance, and high availability
- Maintain and monitor real-time analytics and big data systems to ensure their reliability and resolve issues
- Collaborate with the Business Intelligence team, Data Science team, Ventures, and other teams to build data insights and help them achieve their business goals
Requirements
- BS/MS or higher in computer engineering/science, or equivalent experience
- At least two years of programming experience in Python, Java, Scala, or Go
- SQL knowledge and experience with database systems (ClickHouse, MySQL, PostgreSQL, and other DBs)
- Expertise in the Hadoop ecosystem (HDFS, YARN, Hive, Spark)
- Hands-on experience with Kafka, ZooKeeper, and Logstash
- Experience with one or more of: Airflow, Debezium, Confluent Schema Registry
- Familiarity with monitoring systems such as Grafana, Prometheus, and exporters
- Experience with streaming technologies such as Spark Streaming, Apache Flink, or NiFi
- Hands-on experience with Linux, Docker, Kubernetes, and virtualization
- Experience with data exploration and visualization tools such as Hue and Superset
- Experience with Agile/Scrum and DevOps practices
- Good communication and teamwork skills