Cloud Data Engineer - ClairvoyantSoft
- Pune, Maharashtra, India
- Apply by: Jan 01, 2026
- 5 Vacancies
- Local Candidates (India)
- Shift: First Shift (Morning)
- Career Level: Experienced Professional
- Degree: Graduate
- Experience: Year
- Full Time/Permanent
- Work from Office
Job Description
In this role you'll get …
- To work with an energetic team that strives to produce high-quality, scalable software and is highly motivated to up its game every quarter.
- To work on building a near-real-time data solution for large-scale problems.
- To interact with the client on a daily basis, explore the Payments domain, understand the problem directly from the client, and participate in brainstorming sessions.
- To work on various niche technologies in the Data Engineering space, such as Spark Streaming, Kafka, HBase, the Hadoop ecosystem, and Scala/Java/J2EE.
- To work on various services on Cloud Platforms (AWS/GCP).
- To gain experience in building hybrid (on-prem and cloud) data solutions.
Must Have:
- Ability to code in Java
- Should be a regular, hands-on coder.
- Should be able to write algorithms and convert them into code.
- Good hold on the Collections Framework and OOP concepts; ability to code to interfaces (see the brief sketch after this list).
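As a rough illustration (not part of the role description), "coding to interfaces" with the Collections Framework might look like the sketch below; the Payment record, class, and method names are made up for this example:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PaymentTotals {

    // Hypothetical payment record used only for this sketch.
    record Payment(String merchant, double amount) {}

    // The parameter and return types are interfaces (List, Map), not concrete
    // classes, so callers may pass any List implementation and the method can
    // switch its internal Map without breaking its contract.
    public static Map<String, Double> totalByMerchant(List<Payment> payments) {
        Map<String, Double> totals = new HashMap<>();
        for (Payment p : payments) {
            totals.merge(p.merchant(), p.amount(), Double::sum);
        }
        return totals;
    }

    public static void main(String[] args) {
        List<Payment> payments = new ArrayList<>();
        payments.add(new Payment("m1", 10.0));
        payments.add(new Payment("m1", 5.0));
        payments.add(new Payment("m2", 7.5));
        System.out.println(totalByMerchant(payments)); // e.g. {m1=15.0, m2=7.5}
    }
}
```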
Spark:
- Good understanding of Spark architecture, including Spark Streaming.
- Spark job life cycle.
- Ability to map Spark features to the problem at hand.
- Ability to explain the use cases for shared variables (broadcast variables and accumulators), along with their pros and cons (a short sketch follows this list).
- Monitoring of jobs for correctness and performance; ability to identify bottlenecks in a job and apply performance-tuning techniques.
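For illustration only, a minimal sketch of the two kinds of Spark shared variables the bullet refers to (broadcast variables and accumulators), using the Java API; the lookup data, names, and local master are invented for this example:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.util.LongAccumulator;

public class SharedVariablesSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("shared-variables-sketch")
                .master("local[*]")               // local run for illustration only
                .getOrCreate();
        JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());

        // Broadcast variable: a small, read-only lookup table shipped once to
        // each executor instead of being re-serialised with every task.
        Map<String, String> countryByCurrency = new HashMap<>();
        countryByCurrency.put("INR", "IN");
        countryByCurrency.put("USD", "US");
        Broadcast<Map<String, String>> lookup = jsc.broadcast(countryByCurrency);

        // Accumulator: executors only add to it; the driver reads it after an
        // action. Con: updates made inside a transformation (like map below)
        // may be applied more than once if a task is retried.
        LongAccumulator unknownCurrency = spark.sparkContext().longAccumulator("unknownCurrency");

        JavaRDD<String> currencies = jsc.parallelize(Arrays.asList("INR", "USD", "EUR"));
        JavaRDD<String> countries = currencies.map(c -> {
            String country = lookup.value().get(c);
            if (country == null) {
                unknownCurrency.add(1);
                return "UNKNOWN";
            }
            return country;
        });

        System.out.println(countries.collect());     // e.g. [IN, US, UNKNOWN]
        System.out.println(unknownCurrency.value()); // 1
        spark.stop();
    }
}
```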
Cloud Engineering:
- Cloud exposure to rehosting, replatforming, and modernising legacy applications.
- Strong foundation in cloud capabilities and the ability to apply them to real-life data workloads.
- At least 2 years of experience on any of the public clouds (AWS/Azure/GCP).
- Cloud data pipeline experience with Glue/DMS/S3, processing with EMR/Lambda/Step Functions, visualisation with QuickSight, and SQS/SNS/DynamoDB/CloudFormation (a minimal wiring sketch follows this list).
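As a rough sketch, assuming the AWS SDK for Java v2, of how two of the listed services might be wired together (landing a raw batch in S3 and notifying an SQS queue for a downstream step); the bucket, key, and queue URL are placeholders, not real resources:

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class PipelineHandoffSketch {
    public static void main(String[] args) {
        // Placeholder resource names for illustration only.
        String bucket = "example-raw-zone";
        String key = "payments/2024-06-17/batch-001.json";
        String queueUrl = "https://sqs.ap-south-1.amazonaws.com/123456789012/example-ingest-queue";

        try (S3Client s3 = S3Client.builder().region(Region.AP_SOUTH_1).build();
             SqsClient sqs = SqsClient.builder().region(Region.AP_SOUTH_1).build()) {

            // Land the raw batch in S3.
            s3.putObject(PutObjectRequest.builder().bucket(bucket).key(key).build(),
                         RequestBody.fromString("{\"txnId\": 1, \"amount\": 10.0}"));

            // Publish a lightweight notification so a downstream consumer
            // (e.g. a Lambda or an EMR step) knows a new batch is available.
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .messageBody(key)
                    .build());
        }
    }
}
```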
Hadoop Ecosystem:
- Understanding of HDFS architecture. Should be able to explain the read and write process in HDFS along with the internals (which component handles which task).
- Good understanding of Hive: ability to decide on what kind of tables (managed/external/partitioned/bucketed/ACID) are to be used (see the sketch after this list).
- YARN and its role in Spark and MapReduce jobs.
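A hedged sketch, assuming Spark with Hive support and the Java API, of the external vs. managed and partitioned table choices the Hive bullet mentions; table names and locations are illustrative only:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class HiveTableSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("hive-table-sketch")
                .enableHiveSupport()   // assumes a Hive metastore is configured
                .getOrCreate();

        // External, partitioned table: the data lives at an explicit HDFS/S3
        // location and survives DROP TABLE; partitioning by dt lets queries
        // prune whole directories.
        spark.sql("CREATE EXTERNAL TABLE IF NOT EXISTS payments_ext (txn_id BIGINT, amount DOUBLE) "
                + "PARTITIONED BY (dt STRING) STORED AS PARQUET "
                + "LOCATION '/warehouse/external/payments'");

        // Managed table written from a DataFrame: Hive owns both metadata and
        // data, so DROP TABLE removes the files as well.
        Dataset<Row> payments = spark.table("payments_ext");
        payments.write()
                .mode(SaveMode.Overwrite)
                .partitionBy("dt")
                .saveAsTable("payments_managed");

        spark.stop();
    }
}
```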
Good To Have
- Spark and Hadoop ecosystem
- Design Patterns and Clean Code Principles
- Spring Boot and REST APIs experience
- NoSQL databases like HBase/MongoDB
- Cloud platform experience (GCP preferably)
- Python
Role & Responsibilities:
- Work closely with the team lead and contribute to the smooth delivery of the project.
- Understand/define the architecture and discuss its pros and cons with the team.
- Take part in brainstorming sessions and suggest improvements to the architecture/design.
- Work with other team leads to get the architecture/design reviewed.
- Work with the clients and counterparts (in the US) of the project.
- Keep all stakeholders updated about project/task status, risks, and issues, if any.
- Take complete responsibility for the execution of sprint stories.
- Be accountable for delivering tasks within the defined timelines and with good quality.
- Follow the processes for project execution and delivery.
- Follow agile methodology.