AWS Cloud Engineer with ETL at Malvern, Pennsylvania, USA
Email: [email protected]
https://jobs.nvoids.com/job_details.jsp?id=2276208&uid=
From: Pavani, KK Software ([email protected])
Reply to: [email protected]

We are hiring an #AWS_Cloud_Engineer with expertise in #ETL processes for one of my clients. Interested candidates, please send resumes to [email protected].

Visa status: H1B/GC/USC/GC-EAD only
Position: AWS Cloud Engineer with expertise in ETL processes
Location: Malvern, PA / Iselin, NJ (South)

Overview:
We are seeking an experienced AWS Cloud Engineer with expertise in ETL processes, AWS Lambda, Redshift, and proficiency in Python and PySpark. This role requires hands-on experience with cloud-based architecture, data engineering, and large-scale data processing systems. The ideal candidate will play a critical role in building and maintaining scalable, efficient, and secure ETL pipelines within the AWS ecosystem.

Key Responsibilities:
- Design, develop, and maintain robust ETL workflows to extract, transform, and load data from various sources into AWS Redshift and other cloud-based systems.
- Leverage AWS Lambda to build serverless computing solutions for real-time data processing and automation (an illustrative handler appears at the end of this posting).
- Work with AWS data storage and processing services, including S3, Redshift, Athena, and Glue, to support data processing pipelines.
- Utilize Python and PySpark to create and optimize data transformation logic and analytical queries (see the sketch after the requirements below).
- Collaborate with cross-functional teams to understand data requirements and provide efficient data solutions.
- Monitor and optimize cloud-based infrastructure for performance, reliability, and cost efficiency.
- Troubleshoot and resolve data pipeline issues, ensuring minimal downtime and high data accuracy.
- Implement automation and monitoring tools to enhance reliability.

Competencies:
Digital: Python | Digital: Amazon Web Services (AWS) Cloud Computing | Digital: PySpark

Required Skills and Qualifications:
- Proven experience working with AWS services, including Lambda, Redshift, S3, Athena, Glue, and CloudWatch.
- Strong knowledge and hands-on experience with Python and PySpark for data transformation and processing.
- Expertise in building and optimizing ETL pipelines in a cloud environment.
- In-depth understanding of relational databases, data warehousing, and performance tuning in Redshift.
- Strong problem-solving and troubleshooting skills.
- Familiarity with infrastructure-as-code tools such as AWS CloudFormation.
- Experience with version control (e.g., Git) and CI/CD pipelines.
- Solid understanding of data security, data privacy, and compliance frameworks.
- Ability to work in an agile, fast-paced environment.
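As a rough illustration of the PySpark/S3 work the responsibilities describe, here is a minimal sketch of a batch transformation job: it reads raw CSV data from S3, cleans and de-duplicates it, and writes partitioned Parquet back to S3 as a staging area for a Redshift COPY. All bucket names, paths, and column names are hypothetical placeholders, not details taken from this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative PySpark ETL job (assumed names throughout).
spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

# Read raw CSV events from a hypothetical landing bucket.
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/events/")
)

# Basic cleaning: typed columns, de-duplication, and a partition column.
cleaned = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["event_id"])
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write partitioned Parquet to a curated bucket, ready for a Redshift COPY.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/")
)

spark.stop()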
Preferred Qualifications:
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified Developer).
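For the event-driven automation side of the role (Lambda plus Glue, as listed in the required skills), the following is a minimal sketch of a Python Lambda handler that reacts to an S3 "object created" event and starts a Glue ETL job for the newly landed file. The job name, bucket, and argument names are hypothetical, intended only to show the shape of such a handler.

import json
import urllib.parse

import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Triggered by S3 object-created notifications; starts a Glue job per object."""
    started = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Start a hypothetical Glue job, passing the new object as a job argument.
        response = glue.start_job_run(
            JobName="example-events-etl-job",
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        started.append(response["JobRunId"])

    return {"statusCode": 200, "body": json.dumps({"job_runs": started})}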
Posted: 07:42 PM, 21-Mar-25