QA Test Lead - PySpark, Databricks, SQL - 4 to 10 years - Gurgaon

Location: Gurgaon
Discipline: Technology
Job type: On-site (WFO) Jobs
Contact name: Randhawa Harmeen

Contact email: randhawa.harmeen@crescendogroup.in
Job ref: 40599
Published: 14 days ago

QA Test Lead – PySpark, Databricks, SQL – 4 to 10 Years – Multiple Locations (Hybrid)

 

Summary – An excellent opportunity for a professional with at least 4 years of experience and expertise in Python, Pandas, SQL, and structured data. Strong experience with on-premises technologies is a must.


Location- PAN India

 

Your Future Employer- A global analytics and digital company serving industries including insurance, banking and others.

 

Responsibilities-

  • To develop and implement a comprehensive quality assurance strategy for data engineering projects, ensuring that data quality standards are met throughout the data pipeline.

  • To collaborate with cross-functional teams to define test plans and strategies for validating data pipelines, ETL processes, and transformations using PySpark, Databricks, and SQL.

  • To design and execute tests using SQL queries and data profiling techniques to validate the accuracy, completeness, and integrity of data stored in various data repositories, including data lakes and databases.

  • To adhere to formal QA processes, ensuring that the Systems Implementation (SI) team is using industry-accepted best practices.

  • To ensure data accuracy in both SQL and the Data Lakehouse platform.

  • To act as a key point of contact for all QA aspects of releases, providing test services and coordinating QA resources internally and externally in SQL, PySpark/Python, Databricks Delta, and Delta Live Tables.
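To illustrate the kinds of checks described above (row-count reconciliation, completeness, key integrity), here is a minimal pure-Python sketch. The column names, thresholds, and sample data are hypothetical; in practice, a QA test lead would run equivalent checks as PySpark or SQL queries against Databricks Delta tables.

```python
# Hypothetical data-quality checks of the kind a QA test lead might automate.
# In a real pipeline these would be expressed as PySpark/SQL assertions
# against source and target tables in the lakehouse.

def check_row_count(source_rows, target_rows):
    """Source-to-target reconciliation: row counts must match after the load."""
    return len(source_rows) == len(target_rows)

def check_completeness(rows, column, max_null_rate=0.0):
    """Completeness: the fraction of missing values must not exceed a threshold."""
    if not rows:
        return True
    nulls = sum(1 for row in rows if row.get(column) is None)
    return nulls / len(rows) <= max_null_rate

def check_uniqueness(rows, key):
    """Integrity: primary-key values must be unique in the target."""
    keys = [row[key] for row in rows]
    return len(keys) == len(set(keys))

# Hypothetical sample data standing in for source and target extracts.
source = [{"id": 1, "amount": 100.0}, {"id": 2, "amount": 250.5}]
target = [{"id": 1, "amount": 100.0}, {"id": 2, "amount": 250.5}]

results = {
    "row_count": check_row_count(source, target),
    "completeness_amount": check_completeness(target, "amount"),
    "unique_id": check_uniqueness(target, "id"),
}
print(results)
```

In a Databricks context, the same assertions are typically written as Delta Live Tables expectations or PySpark DataFrame checks rather than plain Python loops.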

Requirements-

  • In-depth understanding of PySpark, Python, SQL, and Databricks Delta and Delta Live Tables with Unity Catalog (preferred)

  • Command of QA best practices and methodologies to design, implement, and automate testing processes.

  • Knowledge of SQL and PySpark (mandatory).

  • Experience extracting and manipulating data from relational databases using advanced SQL, Hive, and Python/PySpark.

  • Proficiency in designing and creating test beds that support varied test cases and account for the variety and volume of test data.

  • 5–10 years of experience in data engineering, preferably on data lake/data lakehouse platforms.

 

What is in it for you-

  • A stimulating working environment with equal employment opportunities.

  • Opportunities to grow your skills while working with industry leaders and top brands.

  • A meritocratic culture with great career progression.

 

Reach us- If you feel that you are the right fit for the role, please share your updated CV at randhawa.harmeen@crescendogroup.in

 

Disclaimer- Crescendo Global specializes in Senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with an engaging, memorable job search and leadership hiring experience. Crescendo Global does not discriminate on the basis of race, religion, color, origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

 

Note: We receive a large number of applications daily, so it is difficult for us to get back to each candidate. If you do not hear back from us within one week, please assume that your profile has not been shortlisted. Your patience is highly appreciated.

 

Profile Keywords: ETL, data pipelines, testing, PySpark, SQL, Databricks, Crescendo Global.