| Company: | FedEx |
|---|---|
| Job Role: | Senior Data Engineer - Air Network Planning |
| Experience: | 2-3 years |
| Vacancy: | 10+ |
| Qualification: | Bachelor’s degree in Computer Science or equivalent |
| Salary: | S$6,000/month |
| Location: | Singapore |
| Apply Mode: | Online |
| Deadline: | Not mentioned |
- Design, build, and maintain robust ETL/ELT pipelines ingesting data from multiple internal and external sources
- Develop batch and, where applicable, streaming data pipelines using Azure and Databricks
- Implement data transformations, validations, and enrichment logic to produce analytics‑ready datasets
- Ensure data quality, lineage, observability, and reliability of production pipelines
- Optimize pipeline performance, scalability, and cost efficiency
- Contribute to the design and evolution of the Air Network data platform on Azure
- Work with Databricks (Spark, Delta Lake) to implement scalable data processing and storage patterns
- Collaborate with enterprise platform and security teams to ensure compliance with data governance, access control, and security standards
- Support migration of manual or semi‑automated processes into fully automated, production‑grade pipelines
- Partner with Data Analysts and Data Scientists to enable downstream analytics, dashboards, and advanced models
- Translate business and planning requirements into well‑structured data models and pipelines
- Support strategic Air Network initiatives by delivering reliable datasets for planning, performance evaluation, and optimization use cases
- Document data pipelines, schemas, and integration logic so solutions are reusable and maintainable
- Bachelor’s degree or equivalent in Computer Science, Data Engineering, Engineering, Information Systems, or a related discipline
- 4+ years of hands‑on data engineering experience
- Strong experience building production ETL pipelines and data workflows
- Azure data ecosystem (e.g., Azure Data Factory, Azure Data Lake, Azure Synapse or equivalent)
- Databricks / Apache Spark, including Delta Lake concepts
- Strong SQL skills and experience with data modeling (star/snowflake, analytics‑optimized schemas)
- Programming experience in Python (or equivalent) for data engineering workflows
- Experience with orchestration, scheduling, and monitoring of data pipelines
- Understanding of data quality checks, error handling, logging, and alerting
- Experience supporting BI tools (e.g., Power BI) through well‑designed data layers
- Familiarity with CI/CD, version control, and DevOps practices for data pipelines
- Exposure to operations research, optimization, or forecasting data use cases
- Experience working in Agile / SAFe environments
- Strong problem‑solving mindset and attention to data quality
- Ability to work across technical and business teams
- Clear communication and documentation skills
- Good planning, prioritization, and stakeholder management skills
How to Apply: Click the link below; it will redirect you to the source page, where you can submit your application.