Lead Data Engineer, Hybrid in Manhattan, NY (Must be Local), USC and GC, Manhattan, New York, USA
Email: [email protected]
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=2298140&uid=

Lead Data Engineer
Hybrid in Manhattan, NY (Must be Local)
Duration: 1+ Year
Visa: USC and GC

INTERVIEW PROCESS
They will be writing Python in the interview; you will be doing SQL in the interviews. They are doing 3 interviews by Zoom.

TOOLS AND PLATFORMS
They are on DB2 and Postgres and don't currently have a plan to move to the cloud, but some cloud experience is useful for the future. Any RDBMS. For analysis of the data: Excel, SQL, Python, and query capabilities. <-- CANDIDATE USES THESE. Someone who has Power BI, Tableau, or Cognos is good. Alteryx AI data modeling tool, and Data IQU. <-- USERS USE THESE TOOLS. The person doesn't have to use all of these tools, but it is nice to have them in order to work with the users and help them use these tools.

TOP 5 SKILLS
- Data analysis: understand the data and figure it out.
- Understand the governance of data.
- Understand the user requirement and translate it.
- SQL: write a query and put the result someplace, then analyze it. Query and retrieve the data, format it, get the users' acceptance, and then do this in a production manner.
- The team is small: work with users, understand the data and queries, respond to users with a technical response, load the data, analyze it, and translate it. <-- The data sets are mid-level in size.

Having knowledge of ETL, and being able to explain it and do it in Python or Informatica, is great. ETL is a strong plus.

SKILL SETS
- SQL.
- Python: you will be re-using a framework built in Python and doing analysis in Python. A significant amount of scripting is used for loads, etc. You will not be using Informatica.
- If you have 7-10 years of data experience, then it doesn't matter what the financial skills are; they are not needed.

As Lead Data Engineer, you will be a member of the Data Governance and Technology development team within the Non-Financial Risk Technology team, with a specific focus on developing data solutions.
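The posting calls out explaining and doing ETL in Python as a strong plus. A minimal sketch of the extract-transform-load pattern, using the standard library's sqlite3 as a stand-in for the DB2/Postgres environment described above (the table and column names are illustrative assumptions, not from the job description):

```python
import csv
import io
import sqlite3

# Extract: read rows from a delimited source (here, an in-memory CSV).
raw = io.StringIO("id,amount\n1,10.5\n2,bad\n3,7.25\n")
rows = list(csv.DictReader(raw))

# Transform: clean and type the data, dropping rows that fail validation.
clean = []
for r in rows:
    try:
        clean.append((int(r["id"]), float(r["amount"])))
    except ValueError:
        continue  # skip malformed records

# Load: insert the cleaned rows into the target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE risk_events (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO risk_events VALUES (?, ?)", clean)
total = conn.execute("SELECT SUM(amount) FROM risk_events").fetchone()[0]
print(total)  # 17.75
```

In an interview setting, being able to walk through each of the three stages (and where validation and error handling belong) matters more than the specific library used.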
In this role you will be primarily responsible for the development of data workflows, pipelines, views, and stored procedures, working with the user community, in addition to performing data analysis and firming up requirements from the users. You will also work closely with data providers, data scientists, other data developers, and data analytics teams to facilitate implementation.

KEY RESPONSIBILITIES
- Design, implement, and maintain databases and datastores, ensuring optimal performance, scalability, and data integrity.
- Write complex SQL queries to analyse data and create views for reporting purposes for downstream consumers.
- Interact with business customers/users, understand their requirements, and translate them into technical requirements.
- Implement and enforce data security and access controls, ensuring compliance with industry standards and regulations.
- Conduct database capacity planning, forecasting future requirements and implementing necessary scaling strategies.
- Troubleshoot and resolve complex database-related issues, collaborating closely with development and operations teams.
- Use Python scripting to automate database tasks, such as data extraction, transformation, loading, monitoring, and maintenance.
- Monitor and optimize database performance, identifying and resolving bottlenecks and implementing performance-tuning techniques.
- Stay up to date with the latest advancements in RDBMS systems, Postgres, scripting languages (Python), and other data engineering tools, and evaluate their potential application within our organization.

REQUIRED
- Bachelor's degree in computer science, software engineering, information technology, or a related field.
- At least 10 years of experience in data development and solutions in highly complex data environments with large data volumes. Part of the analysis is modeling and cleaning the data; this may be a data-architect type of role.
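One of the core responsibilities above is writing SQL views that give downstream reporting consumers a stable shape. A small sketch, again using sqlite3 in place of the DB2/Postgres databases the team actually runs, with a hypothetical trades schema (none of these table or column names come from the posting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trades (trade_id INTEGER, desk TEXT, notional REAL);
INSERT INTO trades VALUES (1, 'rates', 100.0), (2, 'rates', 50.0),
                          (3, 'fx', 75.0);
-- A view pre-aggregates the data so reporting consumers do not depend
-- on the underlying table layout.
CREATE VIEW desk_exposure AS
    SELECT desk, COUNT(*) AS trade_count, SUM(notional) AS total_notional
    FROM trades
    GROUP BY desk;
""")
result = {
    desk: total
    for desk, _count, total in
    conn.execute("SELECT * FROM desk_exposure ORDER BY desk")
}
print(result)  # {'fx': 75.0, 'rates': 150.0}
```

The design point is the one the posting implies: consumers query the view, so the underlying tables can be refactored without breaking downstream reports.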
- At least 7 years of SQL experience with the ability to write ad-hoc and complex SQL queries.
- At least 5 years of Python experience with the ability to write good Python code.
- Strong analytical skills, including a thorough understanding of how to interpret customer business requirements and translate them into technical designs and solutions.
- Strong communication skills, both verbal and written. Capable of collaborating effectively across a variety of IT and business groups, across regions and roles, and able to interact effectively with all levels.
- Self-starter. Proven ability to manage multiple concurrent projects with minimal supervision. Can manage a complex, ever-changing priority list and resolve conflicts between competing priorities.
- Strong problem-solving skills. Ability to identify where focus is needed and bring clarity to business objectives, requirements, and priorities.

GOOD TO HAVE
- Experience with a reporting tool like Tableau or Power BI would be a plus.
- Experience with any ETL tool is an added advantage.
- Knowledge of using Git and Jenkins/CI-CD pipelines for automated code deployment.
- Knowledge and experience of a graph DB (Stardog DB) will be an added advantage, including ontology modeling, data integration, semantic graph querying, and reasoning capabilities. <-- THIS HELPS; if someone has this it is a STRONG PLUS. You can do the analysis with a graph.
- Experience with Agile development methodologies.

Thanks and regards,
Sanju Singh
Senior Technical Recruiter
Ph.: 972-290-1157
Email: [email protected]
Sbase Technologies Inc.
Office Address: 2511 Texas Drive, Irving, TX 75062
11:27 PM 28-Mar-25