Urgent Role | Data Engineer (10 years) | Contract | Hybrid | Somerset, NJ, USA
Email: [email protected]
http://bit.ly/4ey8w48 https://jobs.nvoids.com/job_details.jsp?id=131243&uid=

From: Bhuvaneswaran, VDart [email protected]
Reply to: [email protected]

Greetings from VDart Inc,

Hope you are doing great. This is Bhuvanesh from VDart Inc. If the job description below suits your experience, please reply to my email with your updated resume and your availability for a call.

Role: Data Engineer (Pipeline/ETL)
Type: Contract
Location: Somerset, NJ (Hybrid)
Skills required: AWS, ETL, Python, SQL, and scripting.

Job Summary:
We are looking for a Data Engineer to join our growing team and participate in the design and build of AWS data ingestion and transformation pipelines based on the specific needs driven by the Product Owners and user stories. The candidate should possess strong knowledge of, and interest in, big data technologies and have a background in data engineering. The candidate will also work directly with senior data engineers, product owners, and customers to deliver data products in a collaborative and agile environment, and will continuously integrate and ship code into our cloud production environments.

Job Description:
As a key contributor to the data engineering team, the candidate is expected to:
- Participate in the design and development of ODH's data platform, which serves several lines of business within the Otsuka enterprise
- Design and build data ingestion and transformation subsystems, managing technical debt and risk, promoting best practices, and supporting others on the team
- Collaborate with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies
- Work closely with product teams to deliver data products in a collaborative and agile environment
- Perform data analysis and onboarding activities as new data sources are added to the platform
- Evaluate innovative technologies and tools while establishing standard design patterns and best practices for the team

Qualifications:

Required:
- Experience in data processing and building a data lake using AWS technologies such as S3, EKS, ECS, AWS Glue, AWS Firehose, AWS EMR, AWS Lambda, AWS Glue DataBrew, Relational Database Service (RDS), and AWS Transfer Family
- Experience in designing and developing data pipelines for data ingestion or transformation using AWS technologies
- Experience with Python, SQL, and scripting
- Experience in Spark programming (PySpark or Scala)
- Experience in data modeling and building data pipeline architectures
- Experience and knowledge of Master Data Management, Data Quality, Metadata Management, data profiling, micro-batches, and streaming data loads
- Applied knowledge of working in agile, scrum, or DevOps environments and teams
- Applied knowledge of modern software delivery methods such as TDD, BDD, and CI/CD
- Experience with the full development lifecycle (development, testing, documentation, and versioning)

Preferred:
- AWS Certified Developer - Associate
- AWS Certified Big Data - Specialty
- GitLab CI/CD

Best Regards,
Bhuvaneswaran R
Senior Technical Recruiter
VDart Inc
Ph: 678-720-4168
E-mail: [email protected]
Website: www.vdart.com
https://www.linkedin.com/in/bhuvaneswaran-r-134894190/
Join VDart's Preferred Partner Program: https://www.vdart.com/suppliers/

Confidentiality Notice: The information contained in this message may be privileged and confidential and protected from disclosure. If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this communication in error, please notify us immediately by replying to the message and deleting it from your computer.
07:34 PM, 10-Nov-22