Sri Vidhya - Data Engineer
[email protected]
Location: Alpharetta, Georgia, USA
Relocation: Open to relocating anywhere in the USA
Visa: H1B
Resume file: Sri_Vidhya Azure Power BI_1744204982711.docx
Sri Vidhya
732-769-8913 | [email protected]

Summary:
9+ years of professional experience as a Data Engineer building data pipelines and ETL processes and developing Power BI reports, with expertise in data mapping and data validation.

Objective:
A challenging position that allows me to utilize my technical expertise in Azure, Power BI, ETL, SQL, and data warehousing to build effective technology solutions for businesses and organizations.

Professional Expertise:
Proficient in Azure big data technologies such as Azure Data Lake Analytics, Azure Data Lake Store, Azure Data Factory, Blob Storage, Data Lake Storage, Azure Key Vault, and Azure SQL, with experience in data migration and processing.
Extensive experience extracting and loading data from relational databases such as Oracle into Azure Data Lake Storage via Azure Data Factory.
Hands-on experience with the Parquet file format, dynamic partitions, and bucketing for best practices and performance improvement.
Experience with source control systems such as GitHub and Bitbucket, and with Jenkins for CI/CD deployments.
Developed and maintained data visualizations, collaborative dashboards, and paginated reports using Power BI.
Proficient in DAX expressions, SQL, and Power Query in Power BI.
In-depth knowledge of normalization/de-normalization techniques, logical and physical data modeling for relational databases, and dimensional modeling for data warehouses.
Expert in creating database objects such as tables, views, stored procedures, triggers, joins, indexes, user-defined data types, and functions.
Good knowledge of the database architecture of OLTP and OLAP applications and of data analysis.
Good experience in designing and implementing data warehousing and business intelligence solutions using ETL tools such as IBM DataStage (Designer, Director).
Created DataStage jobs to load data from various sources using stages such as Transformer, Lookup, Filter, Expression, Aggregator, Join, Funnel, Merge, Update Strategy, SQL, and Stored Procedure.
Excellent knowledge of Slowly Changing Dimensions (SCD Type 1, Type 2, Type 3), Change Data Capture, dimensional data modeling, data marts, OLAP, fact and dimension tables, and physical and logical data modeling (an illustrative SCD Type 2 sketch follows the Education section below).
Strong expertise in developing UNIX shell scripts for data load automation.
Worked with a wide variety of sources, including relational databases, flat files, and XML files, and with scheduling tools such as Control-M, Tivoli Workload Scheduler (TWS), and cron.
Expertise in unit testing, integration testing, system testing, and data validation for developed ETL jobs.
Strong skills and a clear understanding of requirements, and of solutions to issues arising during implementation, throughout the software development life cycle (SDLC).
Good analytical, problem-solving, interpersonal, communication, and presentation skills.

Technical Skills:
Cloud Services: Azure Cloud Services, Azure Data Factory, Azure Synapse, Blob Storage, Azure Data Lake Storage, Azure Cosmos DB
Reporting Tools: Power BI Desktop
ETL Tools: DataStage V7.1/7.5/8.7/11.3
Programming: SQL, Korn Shell, Bash
Databases: Oracle 11g/10g, SQL Server 2008/2005
Scheduling Tools: Control-M v8.0/v9.0, Tivoli Workload Scheduler (TWS), cron
OS: Unix/Linux, Windows 7/8/10
Software Applications/Other Tools: JIRA, Rational ClearQuest, HP Service Center, Universal Self Help (USH) Portal Domain, ServiceNow, SVN, PuTTY, WinSCP, MS Office, SQL Developer, GIT

Education:
Bachelor of Engineering (B.E.) in Electronics and Communication Engineering, Anna University, Chennai, India, 2015.
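The SCD Type 2 pattern referenced under Professional Expertise can be summarized with a minimal, Oracle-style SQL sketch. The dim_customer and stg_customer tables, their columns, and the tracked attribute are hypothetical placeholders for illustration only, not objects from any of the projects below.

-- Step 1: close the current dimension row when a tracked attribute has changed in staging.
UPDATE dim_customer
SET    current_flag = 'N',
       effective_end_date = CURRENT_DATE
WHERE  current_flag = 'Y'
AND    EXISTS (SELECT 1
               FROM   stg_customer s
               WHERE  s.customer_id = dim_customer.customer_id
               AND    s.customer_address <> dim_customer.customer_address);

-- Step 2: insert a new current row for new customers and for customers whose attribute changed.
INSERT INTO dim_customer (customer_id, customer_address,
                          effective_start_date, effective_end_date, current_flag)
SELECT s.customer_id, s.customer_address, CURRENT_DATE, DATE '9999-12-31', 'Y'
FROM   stg_customer s
WHERE  NOT EXISTS (SELECT 1
                   FROM   dim_customer d
                   WHERE  d.customer_id = s.customer_id
                   AND    d.current_flag = 'Y'
                   AND    d.customer_address = s.customer_address);

The same effect can be achieved with a single MERGE statement; the two-step form is shown only because it reads more directly.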
Professional Experience:

Truist Bank, Remote | Jun 2022 to Present
Data Engineer Associate
Responsibilities:
Understood business requirements, analyzed them, and translated them into application and operational requirements.
Designed a one-time load strategy for moving large databases to Azure SQL Data Warehouse.
Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure SQL DW).
Implemented Copy activities and custom Azure Data Factory pipeline activities.
Built and architected multiple data pipelines and end-to-end ETL and ELT processes for data ingestion and transformation in Azure.
Extracted, transformed, and loaded data from source systems into Azure data storage services using Azure Data Factory.
Created and maintained optimal data pipeline architecture in Microsoft Azure using Data Factory.
Prepared the ETL design document covering the database structure, change data capture, error handling, and restart and refresh strategies.
Configured and implemented Azure Data Factory triggers and scheduled the pipelines; monitored the scheduled ADF pipelines and configured alerts to be notified of failed pipelines.
Maintained a NoSQL database to handle unstructured data; cleaned the data by removing invalid records, unifying formats, and rearranging the structure before loading.
Resolved data type inconsistencies between the source systems and the target system using the mapping documents.
Developed interactive visual reports, dashboards, and KPI scorecards using Microsoft Power BI Desktop.
Created high-quality, interactive dashboards and reports using Power BI, incorporating best practices for data visualization and user experience.
Integrated data from diverse sources, transforming and cleaning data as needed to ensure consistency and reliability for reporting.
Utilized Data Analysis Expressions (DAX) to implement sophisticated calculations and measures for in-depth analysis within Power BI.
Optimized the performance of Power BI reports and dashboards, addressing issues related to data refresh, query performance, and overall responsiveness.
Worked closely with data engineering teams to ensure the availability and quality of data needed for Power BI reporting.
Created slicers and filters to accurately depict information in visualizations according to user needs.
Designed complex, data-intensive reports and visualizations in Power BI, such as tables, tree maps, gauges, funnels, and line charts, for better business analysis.
Tools: Azure Cloud Services, Azure Data Factory, Azure Synapse, Blob Storage, Azure Data Lake Storage, Power BI Desktop, GIT.

Standard Chartered Bank, India | Dec 2020 to Nov 2021
Senior Developer
Responsibilities:
Worked with business analysts to understand the business requirements and the functional specification document.
Responsible for developing UNIX shell scripts and setting up the environments.
Prepared unit test cases for the developed ETL jobs.
Prepared the technical specification and impact analysis documents.
Developed the Control-M jobs and dependencies for the developed ETL, shell, and PL/SQL jobs.
Developed UNIX shell scripts for the automation of different processes.
Participated in the daily scrum meeting with the team members on project status.
Generated the regulatory reports every month using the Risk Foundation tool.
Provided L3 support to the production support team to fix production issues.
Responsible for explaining the ETL job design flow to the end customer and the production support services team.
Responsible for supporting SIT, UAT, PT, and production implementation.
Managed version control in different environments using GIT and Stash.
Involved in AIG preparation and the implementation plan for production releases.
Performed all tasks related to the day-to-day operations of several large implementations, including managing users, creating and managing custom objects and fields, handling bulk data migration, maintaining page layouts, and installing and supporting AppExchange applications.
Tools: DataStage V7.1/7.5/8.7/11.3, SQL, Oracle 11g, Control-M, GIT, SVN, PuTTY, WinSCP, MS Office, Unix/Linux, Windows 7/8/10, JIRA, Confluence, and SharePoint.

Capgemini, India (Client: Standard Chartered) | May 2018 to Dec 2020
Associate Consultant
Responsibilities:
Worked with business analysts to understand the business requirements and the functional specification document.
Responsible for developing UNIX shell scripts and setting up the environments.
Prepared unit test cases for the developed ETL jobs.
Prepared the technical specification and impact analysis documents.
Developed the Control-M jobs and dependencies for the developed ETL, shell, and PL/SQL jobs.
Developed UNIX shell scripts for the automation of different processes.
Participated in the daily scrum meeting with the team members on project status.
Generated regulatory reports every month using the Risk Foundation tool.
Provided L3 support to the production support team to fix production issues.
Responsible for explaining the ETL job design flow to the end customer and the production support services team.
Responsible for supporting SIT, UAT, PT, and production implementation.
Managed version control in different environments using GIT and Stash.
Involved in AIG preparation and the implementation plan for production releases.
Tools: DataStage V7.1/7.5/8.7/11.3, SQL, Oracle 11g, Control-M, GIT, SVN, PuTTY, WinSCP, MS Office, Unix/Linux, Windows 7/8/10, JIRA, Confluence, and SharePoint.

IBM India Pvt Limited, India (Client: AT&T Inc) | Jun 2015 to May 2018
DataStage Developer
Responsibilities:
Participated in all phases of the project life cycle, including understanding requirements from Tier-3 system engineers and end users, requirement analysis, design, coding, and unit testing.
Deployed the code in the System Testing (ST) and User Acceptance Testing (UAT) environments.
Provided support and documentation.
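The data validation carried out during the testing phases described above typically reduces to simple source-to-target reconciliation queries. The sketch below is hypothetical: src_orders, tgt_orders, and order_amount are placeholder names for illustration, not objects from the projects listed.

-- Compare row counts and a summed measure between the source extract and the loaded target;
-- in practice the comparison would be restricted to a single load date or batch ID.
SELECT 'SOURCE' AS system_name, COUNT(*) AS row_count, SUM(order_amount) AS total_amount
FROM   src_orders
UNION ALL
SELECT 'TARGET', COUNT(*), SUM(order_amount)
FROM   tgt_orders;

Any mismatch between the two halves of the union flags a load that needs further investigation before sign-off.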