Yes!
We Are Hiring!
We are looking for leaders who are inspired by the opportunity to define and build tomorrow’s leading cloud management solution.
We believe in
transparency
We share our success stories, failures, processes, numbers, and everything in between. If you want to know about something that wasn't shared with you, all you have to do is ask.
We love
feedback
We embrace constructive feedback as a means for personal and business growth. Feedback can and should be given to anyone (e.g. manager, employee, colleague).
We are open minded
& flexible
When facing challenges, we always look for fresh ideas and ways to overcome them. We won't hesitate to challenge the status quo, and we use the collective genius of our team as a means for improvement.
We act as
one team
We genuinely trust each other, which enables us to act as one team working together toward the same mission. All team members are equally important.
We take
ownership
Everyone is "hands-on". If you have an idea, even if it's outside the scope of your position, you should not be afraid to pursue it or suggest it to others.
We check our ego
at the door
Ego obscures and disrupts everything: the planning process, the ability to take good advice, and the ability to accept constructive feedback. We operate with a high degree of humility
Ready for takeoff?
Join our rocketship!
We are Zesty.
Wouldn't you love to join us?
Jobs? We don't have jobs.
We have career opportunities!
R&D
Senior Data Engineer
About The Position
Position Overview:
We’re looking for a Senior Data Engineer to help scale our data platform and deliver reliable, high-quality data services to both internal teams and external customers. If you thrive on solving complex data challenges, collaborating with diverse stakeholders, and building scalable systems that last, we’d love to meet you.
What You’ll Own:
- Design and implement scalable ETL pipelines using Apache Spark and related technologies.
- Build robust data services to support multiple internal teams, including product and analytics.
- Architect end-to-end data solutions and translate them into actionable engineering plans.
- Maintain clean, reliable data interfaces for microservices and systems requiring accurate, timely data.
- Collaborate closely with product teams to understand data needs and co-create solutions.
- Ensure observability, data quality, and pipeline reliability through monitoring and automated validation.
- Participate in code reviews and architecture discussions, and mentor less experienced engineers.
Requirements
- 6+ years of experience building and maintaining production-grade ETL pipelines.
- Hands-on experience with orchestration tools such as Databricks, Airflow, dbt, or similar.
- Proven ability to design systems that support diverse data consumers with varying SLAs.
- Deep understanding of data modeling, distributed systems, and cloud infrastructure.
- Strong background in Apache Spark (PySpark or Scala).
- Familiarity with microservices architectures and clean API/data contracts.
- Excellent communication and collaboration skills — you’re proactive, approachable, and solution-oriented.
- Ability to think in systems: conceptualize high-level architecture and break it into components.
Nice to Have
- Knowledge of data governance, lineage, and observability best practices.
- Experience with real-time streaming technologies (e.g., Kafka, Flink).
- Exposure to DevOps practices for data systems, including CI/CD, monitoring, and infrastructure-as-code.
- Previous experience developing customer-facing data products or analytics tools.
Apply for this position
Not finding a position that fits your skills?
We're always looking for innovative, ambitious, creative people with great personalities. Send us your CV and tell us what you're good at. We'll see what we can do 🙂