
Hari Krishna - Data Analyst | BI Developer
[email protected]
Location: Alto, Texas, USA
Relocation: YES
Visa: H1B
+1 945 348 4477, Ext. 417
linkedin.com/in/satish-savoju-08a192315


Summary:

Data Analyst with 8 years of experience in data analysis, modeling, and warehousing. Proficient in T-SQL and the Microsoft BI stack (SSIS, SSRS, SSAS) for database management, ETL development, and performance optimization.
Skilled in Python (Pandas, NumPy) and R for advanced data analysis and predictive modeling (see the brief sketch after this summary).
Strong expertise in designing conceptual, logical, and physical data models for OLTP & OLAP systems.
Experienced with Snowflake Data Warehouse including schema design, performance tuning, data sharing, and utilizing features like Virtual Warehouses, Time Travel, and Streams.
Proficient with cloud platforms such as Azure (Synapse, Data Factory, Databricks, AAS) for scalable data processing and management.
Developed and implemented an operational data store (ODS) with continuous and scheduled data ingestion.
Implemented data marts, data warehouses, dashboards, reports, and data mining solutions, applying current technologies to translate business needs into user-friendly, actionable insights.
Adept at data visualization using Power BI, Tableau, and Looker, creating interactive dashboards to support business decisions. Proficient in DAX for complex calculations, with hands-on experience in Alteryx and Informatica for workflow automation. Created Azure Data Factory pipelines to load data from on-premises SQL Server to Azure Data Lake Store.
Skilled in Agile and Waterfall methodologies, requirement gathering, and troubleshooting SQL queries and ETL processes.
Experienced in MS Excel (VLOOKUP, Pivot Tables, Macros), SAS, and SAS SQL for data manipulation and reporting. Work closely with business teams to gather requirements and deliver presentations that support decision-making.
Carried several implementations from initial conception with business owners through design, development, user training, and adoption.
Developed linked reports for live data and migrated SSRS ad hoc reports to Power BI using Report Builder.
Created dynamic parameterized MDX queries for SSRS reporting from cube data.
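
Illustrative example: a minimal sketch, in the spirit of the Pandas/NumPy analysis described above. The file, column, and metric names are hypothetical placeholders, not taken from any client engagement.

```python
import pandas as pd
import numpy as np

# Hypothetical input: a CSV export of order data; the file and
# column names are illustrative only.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Basic cleansing: de-duplicate and fill missing amounts.
orders = orders.drop_duplicates(subset="order_id")
orders["amount"] = orders["amount"].fillna(0.0)

# Monthly revenue summary, the kind of aggregate that feeds a
# Power BI or Tableau dashboard.
monthly = (
    orders
    .assign(month=orders["order_date"].dt.to_period("M"))
    .groupby("month")
    .agg(revenue=("amount", "sum"), orders=("order_id", "count"))
)

# Simple z-score flag for anomalous months.
monthly["z"] = (monthly["revenue"] - monthly["revenue"].mean()) / monthly["revenue"].std()
print(monthly[np.abs(monthly["z"]) > 2])
```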
Skills:
Programming Languages: Python, SQL, T-SQL, NoSQL, R, JavaScript, HTML.
Data Processing & Streaming: Pandas, NumPy, Airflow, Snowflake, ETL, Teradata, T-SQL, VB scripts.
Databases: SQL, MySQL, PostgreSQL, Oracle.
Frameworks: Flask, Django, SSIS, SSAS, DTS.
Operating Systems: Windows, Linux
Reporting Tools: Power BI, Tableau, Looker, SSRS
Cloud Technologies: AWS, Azure
File Formats: CSV, JSON, Parquet, XML
Data Integration: ETL/ELT processes
Data Warehousing: SQL Server Analysis Services, Azure Synapse Analytics, Azure Analysis Services.


Professional Experience:
Client: Capital One 05/2023 - Present
Role: Senior BI/Data Analyst
Developed interactive Power BI dashboards and reports, optimizing data models for better performance and usability.
Designed and implemented DAX queries for advanced calculations, time intelligence functions, and dynamic measures in Power BI.
Implemented Row-Level Security (RLS) in Power BI, ensuring restricted data access based on user roles and permissions.
Performed data analysis using Azure Databricks and Snowflake, leveraging PySpark, SQL, and Python for big data processing and insights generation.
Created complex Power BI data models, defining relationships, calculated tables, and aggregations for enhanced report performance.
Designed and implemented ETL pipelines using Azure Data Factory, SSIS, and Snowflake, ensuring efficient data integration from various sources.
Created notebooks in Azure Databricks to transform data using PySpark (a minimal sketch follows this section).
Built an ETL pipeline in Azure Data Factory to integrate data from multiple sources into Azure Synapse Analytics.
Automated data ingestion from Dynamics 365 to a data lake, ensuring real-time data availability.
Worked extensively with Oracle and SQL Server, writing advanced SQL queries, stored procedures, views, and functions for data extraction and transformation.
Created Python scripts for data preprocessing, automation, and statistical analysis in Azure Databricks and Power BI environments.
Optimized SQL query performance using indexing, partitioning, and query tuning techniques in Snowflake, Oracle, and SQL Server.
Performed DBA duties: server installation and configuration, database backups, space monitoring, and software updates; performance monitoring and tuning using Query Store and Management Studio, columnstore indexes, and in-memory tables.
Integrated Power BI with SQL Server, Snowflake, and Azure Data Lake, enabling real-time and historical trend analysis.
Conducted data profiling, cleansing, and transformation in Azure Databricks and Snowflake, improving data quality for reporting.
Automated Power BI report refresh schedules using Azure Data Factory and Power BI Service, ensuring timely data updates.
Developed paginated reports in Power BI Report Builder, enhancing formatted reporting for business users.
Implemented performance tuning in Power BI, reducing DAX query execution time and optimizing visuals for faster load times.
Built Power BI and SSRS reports and dashboards with drill-down and drill-through from high-level visualizations and summary data to detail reports and KPIs. Maximized report visualization performance using the tabular model, summary tables, indexed views, and preloading with cached data.
Used Master Data Management to maintain data that had previously been managed manually.
Environment: Power BI, DAX, SQL Server (SSMS, SSIS, SSRS), Oracle, Snowflake, Azure Data Factory, Azure Databricks, PySpark, T-SQL, PL/SQL, Python (Pandas, NumPy), Azure Synapse.
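
A minimal PySpark sketch of the kind of Databricks notebook transformation described above; the mount paths, column names, and Delta locations are hypothetical, and `spark` is the session provided by the Databricks runtime.

```python
from pyspark.sql import functions as F

# Hypothetical source: raw transactions landed in the data lake.
raw = spark.read.format("delta").load("/mnt/datalake/raw/transactions")

# Cleansing: de-duplicate, normalize types, drop null amounts.
cleaned = (
    raw
    .dropDuplicates(["transaction_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("txn_date", F.to_date("txn_timestamp"))
    .filter(F.col("amount").isNotNull())
)

# Daily aggregate written back for downstream Power BI models.
daily = cleaned.groupBy("txn_date").agg(
    F.sum("amount").alias("total_amount"),
    F.countDistinct("transaction_id").alias("txn_count"),
)
daily.write.format("delta").mode("overwrite").save("/mnt/datalake/curated/daily_transactions")
```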

Client: Healthy MD 05/2022 - 05/2023
Role: Data Analyst / BI Developer

Designed and developed Tableau dashboards and reports, implementing advanced visualizations, calculated fields, and optimized extracts for interactive data exploration.
Conducted data analysis in Azure Databricks and Snowflake, building data profiles to identify key market opportunities.
Developed and optimized SQL queries in Azure SQL Database and Snowflake, improving query efficiency for reporting and analysis.
Performed data analysis in Azure Databricks and Snowflake, utilizing PySpark and SQL to clean, transform, and profile large datasets for business insights.
Built and optimized data models in Snowflake, leveraging Streams, Tasks, and Time Travel to track data changes and enhance analytical reporting.
Developed and maintained ETL workflows in SSIS and Informatica, extracting, transforming, and loading data from multiple sources into Snowflake for analysis.
Performed ETL developer duties: data loading and process logging using SSIS and stored procedures, and automation of loading and tabular model processing using daily and hourly cascading SQL Agent jobs.
Implemented incremental data load in ADF to handle large datasets efficiently (the watermark pattern is sketched after this section).
Used data-driven SSRS email subscriptions as alerts for data anomalies; reporting supported 8 different selectable currencies.
Developed stored procedures in SQL to optimize data transformations and improve query performance.
Created data mapping documentation to streamline onboarding for new team members.
Monitored and scheduled ADF pipelines using triggers to automate data refresh processes.
Designed, developed, and maintained ETL processes using SQL, Power BI, and cloud services.
Implemented data transformations for integrating datasets from TDR, Halo, and external sources.
Developed and optimized stored procedures, scripts, and queries for data migration and validation.
Led troubleshooting and debugging efforts for data pipeline issues, working closely with engineering teams.
Collaborated with cross-functional teams on data governance, security, and business reporting.
Built an Excel tool that retrieves product data from SQL Server after filter options are selected from dropdown lists, similar to an SSRS report, and saves user-entered inventory plan data from a handful of editable columns back to SQL Server.
Integrated Python scripts within Tableau to automate data transformations, enhance analytics, and implement predictive modeling for business insights.
Developed reusable notebooks in Azure Databricks with parameterized variables for use in ETL pipelines.
Conducted Tableau analysis, implementing LOD expressions, table calculations, and dynamic parameterized reports for enhanced decision-making.
Collaborated with business teams and stakeholders to gather requirements and develop scalable Tableau dashboards with actionable insights.

Environment: Tableau, SQL Server (SSMS, SSIS, SSRS), Python (Pandas, NumPy, Plotly), Azure Databricks, Snowflake, PySpark, Azure Synapse, ETL.
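
The incremental load above follows the usual watermark pattern (in ADF this is typically a Lookup activity that reads the last watermark plus a Copy activity that moves the delta). A minimal Python sketch of the same pattern; the connection string, schema, and table names are hypothetical.

```python
import pyodbc

# Hypothetical source connection; not an actual client system.
SRC = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=src;DATABASE=sales;Trusted_Connection=yes"
conn = pyodbc.connect(SRC)
cur = conn.cursor()

# 1. Read the high-water mark recorded by the previous run.
cur.execute("SELECT last_modified FROM etl.watermark WHERE table_name = ?", "orders")
last_mark = cur.fetchone()[0]

# 2. Pull only the rows changed since that mark.
cur.execute(
    "SELECT order_id, amount, modified_at FROM dbo.orders WHERE modified_at > ?",
    last_mark,
)
rows = cur.fetchall()

# 3. Load the delta into the target (elided; in ADF this is the
#    Copy activity's sink), then advance the watermark.
if rows:
    new_mark = max(r.modified_at for r in rows)
    cur.execute(
        "UPDATE etl.watermark SET last_modified = ? WHERE table_name = ?",
        new_mark, "orders",
    )
    conn.commit()
conn.close()
```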
Client: Kreayotoo Solutions 04/2017 - 12/2021
Role: ETL Developer
Location: Hyderabad, India

Worked closely with business teams to understand requirements and deliver actionable insights, ensuring that data solutions effectively supported business objectives.
Designed and implemented Data Mart using Star Schema for efficient storage and retrieval of HR data, enhancing the reporting process.
Developed and optimized complex SQL queries (CTEs, Stored Procedures, Views, Triggers, Indexes) in SSMS to improve database performance and support advanced reporting needs.
Built and optimized ETL workflows using Informatica and SSIS, transforming and loading data from multiple sources into SQL Server, ensuring accurate and timely reporting.
Leveraged advanced data transformations in Informatica (Lookup, Derived Columns, Conditional Split, Data Conversion) to streamline data integration and improve workflow efficiency.
Created interactive Power BI dashboards and reports to deliver real-time HR metrics, enabling leadership to provide data-driven solutions.
Parsed JSON files into datasets using PySpark (see the sketch after this section).
Designed and maintained OLAP cubes in SSAS for fast, multidimensional data analysis, facilitating efficient querying and in-depth reporting.
Developed and deployed SQL Server Reports (SSRS), including tabular and drill-down reports, to deliver insights to key stakeholders such as clients, vendors, and management.
Highly proficient in the development, implementation, administration, and support of ETL processes for large-scale data warehouses using Informatica PowerCenter.
Developed and optimized ETL workflows using AWS services such as AWS Glue and AWS Lambda, integrating data from various sources and automating data transformations for efficient processing and storage in Amazon S3 and Redshift.
Leveraged AWS tools like AWS Data Pipeline and Amazon RDS to design scalable, serverless ETL solutions, streamlining data extraction, transformation, and loading processes while ensuring high-performance and low-latency data handling.
Environment: MS SQL Server 2016/2017, SQL Server Integration Services (SSIS), SQL Server Reporting Services (SSRS), Power BI, SharePoint, MS Excel, Informatica, AWS Glue, AWS Lambda, Amazon S3, Amazon Redshift.
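
A minimal PySpark sketch of the JSON parsing described above; the paths and field names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("parse-json").getOrCreate()

# Hypothetical nested JSON events landed in S3.
raw = spark.read.json("s3://example-bucket/raw/events/*.json")

# Explode the nested array of items and flatten struct fields so
# the result loads cleanly into SQL Server or Redshift.
flat = (
    raw
    .withColumn("item", F.explode("items"))
    .select(
        F.col("event_id"),
        F.col("event_time").cast("timestamp"),
        F.col("item.sku").alias("sku"),
        F.col("item.qty").cast("int").alias("qty"),
    )
)

flat.write.mode("overwrite").parquet("s3://example-bucket/curated/events/")
```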


Education:
Master of Science, University of Central Missouri 01/2022 - 05/2023 | Missouri, USA
Bachelor of Engineering, JNTU Kakinada 06/2014 - 05/2018 | Guntur, India

Certifications:
AWS Cloud Practitioner (focus on EC2, S3, Lambda, Redshift)
Microsoft AI Fundamentals (prompt engineering, Azure AI Studio)