Senior Data Engineer
About The Position
Zesty allows engineering teams to effortlessly maximize cloud savings and minimize waste by automating cloud cost optimization. The Zesty AI-driven platform automatically adjusts cloud resources in real time based on application needs, achieving optimal cloud usage and a dramatic reduction in cloud spend.
Zesty is growing, and we are looking for amazing data engineers to join our team. If you are interested in large-scale distributed systems and want to influence the future of the cloud, your place is with us! We have many hard data engineering challenges coming up, and we're looking for innovative, daring minds to join the team.
Do you have a zest for a better, cheaper cloud? Do you like designing large-scale data pipelines? Do you have what it takes, and do you want to become a multi-cloud expert while working in a fun and inspiring environment? Come work with us!
Responsibilities
- Solve challenging problems in a fast-paced and evolving environment while maintaining uncompromising quality.
- Design, build, and maintain optimal data pipeline architecture for the extraction, transformation, and loading of data from various sources, including external APIs, data streams, and data stores.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Design, create, and maintain the infrastructure to ingest data into our data lake and data warehouse, and provide frameworks and services for operating on that data.
- Design, create, and maintain the infrastructure for real-time streaming analytics, big data analytics, and machine learning analytics capabilities.
- Work with analysts and product managers to understand business priorities and translate requirements into data models.
Requirements
- 8+ years of proven experience driving highly scalable and robust large-scale distributed backend systems from idea to paying customers.
- Expertise in large-scale development in one or more of the following languages: Python, Go, C++, Java, C#.
- Track record of designing and building high-performance, fault-tolerant, highly available, and secure distributed systems.
- Experience in designing, developing, and deploying production-grade, large-scale, low-latency data processing systems.
- Proven experience with cloud-based data pipelines and backend services, using AWS, GCP, or Azure.
- Experience in developing and maintaining ETL pipelines from multiple data sources to fuel data applications and business intelligence.
- Experience with large-scale distributed databases.
- Experience building and deploying systems in the AWS cloud.
- Self-starter, driven to produce results and continually improve.
- Background in supporting a live service environment.
- BS or MS degree in computer science or a related field.
- Experience with Azure and GCP.
- 2-3+ years of experience with Python.
- Able to work with a distributed team.