
ETL Developer with PySpark Experience: MN at Minneapolis, Minnesota, USA
Email: [email protected]
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=2099872&uid=

From: Chandra N, Siri Info
[email protected]

Reply to: [email protected]

Role name: Developer |
Role Description: ETL Developer (5+ years) to create data pipeline ETL jobs using AWS Glue and PySpark within the financial services industry. (An illustrative job sketch appears after the listing details below.)
Responsibilities:
- Work with scrum team(s) to deliver product stories according to priorities set by the business and the Product Owners.
- Interact with stakeholders.
- Provide knowledge transfer to other team members.
- Create and test pipeline jobs locally using AWS Glue interactive sessions.
- Performance-tune PySpark jobs.
- Use AWS Athena to perform data analysis on lake data populated into the AWS Glue Data Catalog through AWS Glue crawlers.
Must Haves:
- Design, develop, and maintain ETL processes to support data integration and business intelligence initiatives.
- Work closely with stakeholders to understand data requirements and ensure efficient data flow and transformation using ETL tools and PySpark.
- Develop and implement ETL processes using an ETL tool and PySpark to extract, transform, and load data.
- 4+ years of experience in ETL development with knowledge of PySpark.
- 5+ years as an ETL Developer.
- SQL expert.
- AWS Glue with Python (PySpark).
- PySpark DataFrame API.
- Spark SQL.
- Knowledge of AWS services (e.g., DMS, S3, RDS, Redshift, Step Functions).
Nice to Haves:
- ETL development experience with tools such as SAP BODS or Informatica.
- Good understanding of version control tools like Git, GitHub, TortoiseHg.
- Financial services experience.
- Agile. |
Competencies: Digital : PySpark |
Experience (Years): 6-8 |
Essential Skills: Same as Role Description above. |
Desirable Skills: Same as Role Description above. |
Country: United States |
Branch | City | Location: TCS - Minneapolis Downtown, MN | Minneapolis | Minneapolis, MN |
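
For illustration, here is a minimal sketch of the kind of AWS Glue PySpark job the role describes: reading a table that a Glue crawler has registered in the Glue Data Catalog, transforming it with the DataFrame API and Spark SQL, and writing curated Parquet back to S3. The database, table, column, and bucket names are hypothetical placeholders, not details from this listing.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job setup: resolve the job name and create contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read lake data registered in the Glue Data Catalog by a crawler.
# "finance_lake" and "raw_transactions" are hypothetical names.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="finance_lake",
    table_name="raw_transactions",
)
df = dyf.toDF()

# DataFrame API transform: drop bad rows, derive a partition column.
clean = (
    df.filter(F.col("amount") > 0)
      .withColumn("trade_date", F.to_date(F.col("trade_ts")))
)

# Spark SQL can work on the same data via a temporary view; Athena could
# run a similar query directly against the catalog table.
clean.createOrReplaceTempView("transactions")
daily = spark.sql(
    """
    SELECT trade_date, COUNT(*) AS trades, SUM(amount) AS total_amount
    FROM transactions
    GROUP BY trade_date
    """
)

# Write curated output back to S3 as partitioned Parquet. Repartitioning
# on the partition column keeps the shuffle aligned with the output layout,
# a modest example of the performance tuning the role mentions.
(
    daily.repartition("trade_date")
         .write.mode("overwrite")
         .partitionBy("trade_date")
         .parquet("s3://example-curated-bucket/daily_transactions/")
)

job.commit()
```

A job like this can be developed and tested locally through a Glue interactive session before deployment, which matches the local-development responsibility listed above.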

Posted: 01:24 AM, 22-Jan-25









Location: Minneapolis, Minnesota