AWS Data Engineer | Location: Mountain View, California (locals only: California, Maryland, USA)
Email: [email protected]
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=1557981&uid=

From: Bhumika, Adifice Technologies ([email protected])
Reply to: [email protected]

Position: AWS Data Engineer
Location: Mountain View, California (Hybrid)
Type: 12+ Months
Visa: USC/GC/GC EAD/H4
IP Client: Persistent
End Client: Confidential

Job Description

Must have: AWS Lambda, S3, IAM, Spark, BigQuery, Python or Java, SQL, Databricks or Privacera
Secondary skills: Terraform, GCP, Hive, Glue, and Unity Catalog

Qualifications

- 8+ years of experience designing and developing web, software, or mobile applications.
- 3+ years of experience building and operating cloud infrastructure solutions.
- BS/MS in computer science or equivalent work experience.
- Expertise with any of the following object-oriented languages: Java/J2EE, C#, VB.NET, Python, or occasionally C++; Java and Python preferred.
- Expertise with AWS (IAM, VPC), Spark, and Terraform is preferred.
- Expertise with Databricks is a strong bonus.
- Expertise with the entire Software Development Life Cycle (SDLC), including system design, code review, unit/integration/performance testing, and build and deploy automation.
- Operational excellence: minimizes costs and maximizes uptime.
- Excellent communication skills: a demonstrated ability to explain complex technical topics in an engaging way to both technical and non-technical audiences, both in writing and verbally.

Scope of Work

The team will be working in one of the following areas:

1. Multi-cloud data exploration
- Terraform infrastructure-as-code for managing AWS infrastructure and deep integration between enterprise tools (Starburst, Privacera, and Databricks) and Intuit services (LDAP, data decryption)
- Testing user flows for data analysis, processing, and visualization with Python Spark notebooks and SQL running on distributed compute to join data between AWS S3 and GCP BigQuery
- Developing data pipelines in Python Spark or SQL to push structured enterprise tool telemetry to our data lake
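The telemetry pipelines above run as Python Spark jobs; as a rough illustration of the kind of logic a candidate would write, here is a minimal stdlib-only Python sketch that maps telemetry events into a Hive-style, date-partitioned data-lake key layout (the event schema and key layout are assumptions for illustration, not the actual pipeline):

```python
import json
from datetime import datetime, timezone

def telemetry_to_partition_key(event: dict, prefix: str = "telemetry") -> str:
    """Build a date-partitioned, Hive-style data-lake key for one telemetry
    event, e.g. telemetry/dt=2024-07-12/tool=databricks/e-001.json.
    The event fields used here are hypothetical."""
    ts = datetime.fromtimestamp(event["timestamp"], tz=timezone.utc)
    return f"{prefix}/dt={ts:%Y-%m-%d}/tool={event['tool']}/{event['event_id']}.json"

# Example telemetry record (schema is an assumption, not a real tool export)
event = {"event_id": "e-001", "tool": "databricks",
         "timestamp": 1720821240, "action": "query"}
key = telemetry_to_partition_key(event)
body = json.dumps(event)  # payload that would be written under that key in S3
```

In a real Spark job this partitioning is handled by `partitionBy` on write; the sketch only shows the layout the downstream SQL engines (Starburst, BigQuery external tables) would rely on for partition pruning.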
2. Fine-grained access control for data exploration
- Terraform infrastructure-as-code for managing AWS infrastructure and deep integration between enterprise tools (Databricks and Privacera)
- Evaluating Databricks capabilities to sync the Hive, Glue, and Unity catalogs
- Evaluating Privacera capabilities, or building new capabilities (AWS Lambda with Python), to sync Intuit access policies with Unity Catalog
- Testing user flows for data analysis, processing, and visualization with Python Spark notebooks on distributed compute or the Databricks serverless SQL runtime

Responsibilities

- Develop and implement operational capabilities, tools, and processes that enable highly available, scalable, and reliable customer experiences
- Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
- Work cross-functionally with various Intuit teams, including product management, analysts, data scientists, and data infrastructure
- Work with external enterprise support engineers from Databricks, Starburst, and Privacera to resolve integration questions and issues
- Experience with Agile Development, SCRUM, or Extreme Programming methodologies
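The Unity Catalog policy-sync work described above would typically live in an AWS Lambda function written in Python. A minimal sketch, assuming a simplified hypothetical policy shape (not Privacera's real export format), might translate each access policy into Unity Catalog GRANT statements:

```python
def policy_to_grants(policy: dict) -> list:
    """Translate one access policy into Unity Catalog GRANT statements.

    The policy shape (users/privileges/three-level table name) is a
    hypothetical simplification for illustration only.
    """
    table = f"{policy['catalog']}.{policy['schema']}.{policy['table']}"
    return [
        f"GRANT {priv} ON TABLE {table} TO `{user}`"
        for user in policy["users"]
        for priv in policy["privileges"]
    ]

def lambda_handler(event, context):
    """AWS Lambda entry point: 'event' carries the policies to sync."""
    statements = [g for p in event.get("policies", [])
                  for g in policy_to_grants(p)]
    # A real implementation would execute these against Databricks;
    # here we just return them for inspection.
    return {"statementCount": len(statements), "statements": statements}

example_event = {
    "policies": [{
        "catalog": "main", "schema": "finance", "table": "ledger",
        "users": ["analyst@example.com"], "privileges": ["SELECT"],
    }]
}
result = lambda_handler(example_event, None)
```

The design choice worth noting: emitting declarative GRANT statements keeps the Lambda stateless, so each sync run can be idempotent and replayed safely.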
Posted: 10:54 PM, 12-Jul-24