| Company: | REPLICON |
|---|---|
| Job Role: | Data Engineer - ETL Data Warehouse |
| Experience: | Freshers |
| Vacancy: | 50+ |
| Qualification: | Any Degree |
| Salary: | ₹4.5 LPA |
| Location: | Bangalore, Karnataka, India |
| Apply Mode: | (Online) |
| Deadline: | Not Mentioned |
Tech stack:
- BigQuery
- PostgreSQL
- Tableau
- Data Studio
- Python
- TypeScript, JavaScript
- Angular
- Talend
- Apache Airflow
- Apache Beam
- GCP (Cloud Functions, Firebase, Pub/Sub, App Engine, Cloud Storage)
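To give a feel for how several of these tools fit together, here is a minimal Python sketch of an Apache Beam streaming pipeline that reads events from Pub/Sub and writes them to BigQuery. This is an illustration only, not Replicon's actual pipeline; the project, topic, table, and schema names are hypothetical placeholders.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    # Streaming mode, since Pub/Sub is an unbounded source.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            # Hypothetical topic name; substitute a real Pub/Sub topic.
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/events")
            # Pub/Sub delivers raw bytes; decode and parse each message.
            | "ParseJSON" >> beam.Map(lambda m: json.loads(m.decode("utf-8")))
            # Hypothetical table and schema; append rows as they arrive.
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",
                schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
        )

if __name__ == "__main__":
    run()
```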
What Replicon offers:
- An innovative, market-leading enterprise solution with a growing customer base
- An exciting, nurturing culture that rewards a determined attitude of getting things done and problems solved
- Agile development environment delivering functionality to production every day
- Global engineering teams transitioning from primarily in-office (pre-pandemic), to completely work-from-home (pandemic), to a hybrid in-office and work-from-home model (2022 onwards)
- Team lunches, team events, flexible work environment
- Relentless desire to continuously improve
- Clear goals with an opportunity to learn new things and explore a variety of avenues
- Smart, well-connected global teams that work like a family with the sole aim to delight customers
- An open-minded approach to continuous improvement of people, product and processes
Qualifications:
- Proficiency in SQL and NoSQL databases (Redshift, BigQuery, Firestore, Datastore, PostgreSQL; any two would be fine)
- Ability to develop in Python
- Previous experience with Apache Airflow or Apache Beam would be a plus (see the DAG sketch after this list)
- Experience with data visualization tools such as Tableau and/or Power BI
- Experience with GCP preferred, though experience with any cloud infrastructure platform is acceptable
- Basic understanding of web technologies
- Understanding of how to implement a robust CI/CD pipeline
- Knowledge of the existing technology stack for data management, ingestion, capture, processing, and curation: Kafka, Pub/Sub, MapReduce, Hadoop, Hive, HBase, Spark, Flume, Impala, etc.
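As referenced above, here is a minimal sketch of an Apache Airflow DAG: two Python tasks chained into a tiny ETL flow. The DAG id, schedule, and task bodies are hypothetical stand-ins, not code from Replicon.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder for a real extract step (e.g., pulling from an API).
    print("extracting source data")

def load():
    # Placeholder for a real load step (e.g., writing to the warehouse).
    print("loading into the warehouse")

with DAG(
    dag_id="example_etl",            # hypothetical DAG id
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",      # run once per day
    catchup=False,                   # skip backfilling past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    # Load runs only after extract succeeds.
    extract_task >> load_task
```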
Responsibilities:
- Create and maintain data warehouse and data pipeline architecture
- Work with stakeholders to understand business requirements and own delivery of the proposed solution
- Improve the reliability of our solutions
- Identify relevant improvements we should make to our solutions
- Willingness to learn and embrace emerging technologies
- Working on multiple concurrent projects belonging to different domains
- Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand so that you can
How to Apply: Click on the link below; it will redirect you to the source page, where you can apply.
