| Company: | HP |
|---|---|
| Job Role: | Big Data Software Engineer |
| Experience: | 2-5 years |
| Vacancy: | 4+ |
| Qualification: | Bachelor's or Master's degree in Computer Science or a related field |
| Salary: | Up to $88,500 USD/PA |
| Location: | Vancouver, WA, USA |
| Apply Mode: | Online |
| Start Date: | 16-12-21 |
- Design and develop large-scale, high-volume, high-performance, secure data processing pipelines over large sets of data from multiple sources.
- Integrate big data tools and frameworks to provide the capabilities required by business functions, while keeping cost optimization in mind.
- Work with the team in all stages of developing complex, secure, and performant data solutions, including analysis, design, coding, testing, integration, and monitoring.
- Review and evaluate product designs for architectural soundness, security and privacy compliance, and adherence to quality guidelines and standards; provide concrete feedback to improve product quality and mitigate risk.
- Write quality code, complete with testing, logging, and documentation, for the assigned piece of the data platform or component; identify and fix defects.
- Collaborate and communicate with the project team regarding project progress and issue resolution.
- Bachelor's or Master's degree in Computer Science, or equivalent
- 2-5 years of relevant software development experience
- 2+ years of experience in software development and mastery of software development fundamentals
- Strong analytical and problem-solving skills, with the ability to implement complex algorithms in software
- Proficient understanding of big data distributed computing principles, AWS architecture, and SQL query tools
- Hands-on experience developing highly available, scalable, medium- to large-scale ETL data processing pipelines with Java, Python, or Scala (see the illustrative sketch after these lists)
- Experience with GitHub, Apache Spark, AWS building blocks (S3, RDS, SQS, EKS, EMR, Lambda, etc.), microservices/REST APIs, Docker and Kubernetes, Redshift and Aurora MySQL, DBT, and logging systems such as Splunk or ELK
- Passion for quality and attention to detail
- Ability to communicate effectively with both technical and business audiences
- Ability to work independently in a fast-paced environment and deliver results under pressure
- DBT experience
- Databricks or Jupyter notebooks experience
- Airflow experience
- Terraform experience
- Familiarity with visualization tools such as Looker or Power BI
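To give a sense of the ETL pipeline work this role describes, here is a minimal illustrative sketch using PySpark. It is not taken from HP's codebase; the bucket paths, column names, and aggregation are hypothetical, chosen only to show the extract-transform-load shape of such a pipeline.

```python
# Minimal illustrative ETL sketch with PySpark. All paths and column
# names below are hypothetical, for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON events from S3 (hypothetical bucket/path).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: drop bad records, derive a date column, aggregate per day.
daily = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date")
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("*").alias("order_count"),
       )
)

# Load: write partitioned Parquet back to S3 for downstream querying
# (e.g., via Redshift Spectrum or further Spark jobs).
daily.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-bucket/curated/daily_orders/")

spark.stop()
```

In a production setting, a job like this would typically be scheduled by an orchestrator such as Airflow and deployed on EMR or EKS, tying together several of the tools listed above.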
How to Apply: Click the link below; it will redirect you to the source page, where you can apply.
Apply Now: Click Here
Note: If you face any issues while applying for jobs, please let us know by commenting below, and we will resolve them as soon as possible. We advise you not to share card details or pay money to anyone, and to beware of such scams.