Satya Deep - DevOps Engineer/SRE |
[email protected] |
Location: Austin, Texas, USA |
Relocation: No |
Visa: US Citizen |
Resume file: Satya-CV_1749222154049.docx |
Name: Satya Deep PH: 737-420-8677
Mail: [email protected]
- Over 10 years of experience in Information Technology, including systems administration and DevOps management methodologies and production deployment: packaging, deploying, and application configuration.
- Experience with Continuous Integration (CI) and Continuous Deployment (CD) methodologies using Jenkins, and with AWS in a DevOps culture through CI/CD.
- Exhaustive hands-on experience in Windows and UNIX environments, and with ANT, Maven, and shell programming.
- Strong hands-on experience with scripting languages such as Python, Ruby, PowerShell, and JavaScript.
- Expert in server builds, installs, upgrades, patches, configuration, and performance tuning on Red Hat Linux and Windows Server 2003/2008 in VMware virtualized environments.
- Expert in developing PowerShell scripts and Azure Resource Manager (ARM) templates to automate the provisioning and deployment process.
- Proficient in creating Azure resource groups and assigning resources to various groups.
- Worked extensively with the Microsoft Office suite.
- Expertise with VMware Converter 4.3/5.0 for virtual-to-virtual and physical-to-virtual server migrations.
- Implemented automated deployment pipelines based on PostgreSQL, integrating database updates seamlessly into CI/CD workflows and increasing development and operational efficiency.
- Defined and designed robust observability strategies to monitor the health and performance of applications, infrastructure, and services.
- Developed and maintained CI/CD workflows for .NET applications using Azure DevOps, integrating with Jenkins for build automation and Octopus Deploy for release management.
- Developed and managed automation scripts using Puppet to configure, deploy, and maintain infrastructure across environments.
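The build/test/deploy staging described above can be sketched as a minimal, hypothetical pipeline runner; the stage names and steps are illustrative placeholders, not the actual Jenkins jobs.

```python
# Minimal sketch of a fail-fast CI/CD stage chain (hypothetical stages).

def run_pipeline(stages):
    """Run stages in order; stop at the first failure, as a CI server would."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # fail fast: later stages never run
    return results

# Illustrative stages standing in for build, test, and deploy steps.
stages = [
    ("build",  lambda: True),
    ("test",   lambda: True),
    ("deploy", lambda: True),
]
```

In a real Jenkins setup each step would shell out to the build tool; the fail-fast loop is the part the sketch illustrates.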
- Expertise in AWS database services: RDS, DynamoDB, and DocumentDB.
- Collaborated with DevOps and security teams to prioritize and remediate high-risk vulnerabilities discovered by Tenable in cloud-hosted services.
- Strong understanding of Kafka architecture, including topics, partitions, producers, consumers, and brokers, working as a DevOps engineer in a Kafka environment.
- Demonstrated mastery in implementing CI/CD pipelines, automating the build, testing, and deployment of PCF applications.
- Automated Google Cloud Platform infrastructure using GCP Cloud Deployment Manager and secured it using private subnets, security groups, and network ACLs (VPC); configured and deployed instances in GCP environments and data centers; familiar with Compute Engine, Kubernetes Engine, Stackdriver Monitoring, Elasticsearch, and managing security groups.
- Experience building, designing, and implementing scalable cloud-based web applications for PaaS, IaaS, or SaaS on AWS and Azure.
- Expertise in component repository management tools such as Nexus and JFrog Artifactory.
- Experience with AWS infrastructure using IAM, API Gateway, CloudTrail, CloudWatch, Amazon Simple Queue Service (SQS), AWS Kinesis, Lambda, NACLs, Elastic Beanstalk, Redshift, and CloudFormation.
- Managed Kubernetes cluster rollout and rollback deployment strategies; integrated Istio and Helm packages with Kubernetes clusters for the service mesh.
- Experience with Google Kubernetes Engine (GKE) with Spinnaker for continuous delivery of applications to the engine in different stages, Google App services for deploying microservices in different environments, GCE instances, assigning IAM for projects, and the Cloud Shell CLI.
- Skilled in monitoring servers and infrastructure using Splunk, Datadog, Prometheus, Grafana, and CloudWatch.
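The Kafka fundamentals mentioned above — keyed messages mapping deterministically to partitions — can be illustrated with a small dependency-free sketch. Real Kafka clients hash the key with murmur2; `zlib.crc32` stands in here only to keep the example self-contained.

```python
# Sketch: how a Kafka producer maps a keyed message to a topic partition.
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Deterministically map a message key to one of the topic's partitions."""
    return zlib.crc32(key) % num_partitions
```

Because the mapping is deterministic, all messages with the same key land on the same partition, which is what preserves per-key ordering for consumers.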
- Deployed microservices-based applications on Azure Kubernetes Service (AKS) with an Ingress API gateway, MySQL and SQL databases, and Cosmos DB for stateless storage of external data; set up Nginx reverse proxy servers with SSL/TLS encryption.
- Good experience implementing security and compliance policies in a production environment; strong analytical and critical-thinking skills.
- Implemented Docker for packaging final code and setting up development and testing environments using Docker Hub, Docker Swarm, and Docker container networking.
- Experience installing and configuring Dynatrace monitoring tools, creating email alerts and threshold values in Dynatrace; created Splunk/Dynatrace dashboards for application performance monitoring.
- Configured and maintained Shell/Perl deployment scripts for WebLogic and UNIX servers.
- Produced a comprehensive architecture strategy for environment mapping in AWS involving Active Directory, LDAP, and AWS Identity and Access Management (IAM) roles for the AWS API Gateway platform.
- Implemented microservices with AWS EC2 and Docker, covering code build, packaging, pipelines, provisioning, deployment, commit processes, and change management.
- Extensive knowledge of Linux/UNIX file systems; experienced with installation, configuration, and volume/file system management using Veritas Volume Manager (VxVM) and Logical Volume Manager (LVM) on AIX and Linux.
- Expertise in file system concepts such as LVM and SVM: creating, growing, and shrinking file systems, mounting and unmounting file systems, and troubleshooting disk space issues.
- Created and maintained user accounts, profiles, security, rights, disk space, and process monitoring.
- Good knowledge and hands-on experience writing Perl and Bash scripts.
- Experience installing, administering, and configuring VIOS and VIO client LPARs through the HMC.
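The Nginx reverse proxy with TLS termination mentioned above can be sketched as a minimal configuration fragment; the server name, upstream address, and certificate paths are placeholders, not the actual deployment values.

```nginx
# Hypothetical sketch: TLS-terminating reverse proxy in front of an app backend.
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/certs/app.crt;
    ssl_certificate_key /etc/nginx/certs/app.key;
    ssl_protocols       TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:8080;   # backend service / ingress
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```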
- Experience migrating from traditional MPPs (such as Teradata and Netezza) to cloud platforms such as Azure, AWS, Snowflake, and Databricks.

Technical Skills:
Cloud Environment: Amazon Web Services, Microsoft Azure, Google Cloud Platform (GCP), OpenStack
Containerization Tools: Docker, Docker Swarm, Kubernetes
Configuration Management: Chef, Ansible, Puppet
Continuous Integration: Jenkins, Bamboo, Hudson
Build Tools: Maven, MSBuild
Version Control: Git, Bitbucket, TFS, Subversion, IBM Rational ClearCase
Monitoring Tools: Datadog, Dynatrace, Nagios, Splunk, ELK Stack, Grafana, Prometheus
Ticketing Tools: Jira, Scrum
Web Servers: Apache Tomcat, Nginx, WebSphere, JBoss, WebLogic
Virtualization Tools: VMware, VirtualBox, Hyper-V, Vagrant
Operating Systems: RedHat, CentOS, Ubuntu, Debian, Windows, macOS & iOS
Repository Management Tools: Nexus, JFrog Artifactory
Testing Tools: Selenium, JUnit
Programming & Scripting Languages: Python, C, Java, XML, Shell, Perl, Ruby
Databases: SQL Server, Oracle, MySQL, PostgreSQL, MS Access
Networking: HTTP, HTTPS, TCP/IP, UDP, DNS, FTP, SSH, SNMP, SFTP
Development Environments: Packer
Cloud Storage: AWS EC2, VPC, EBS, SNS, RDS, CloudWatch, CloudFormation configuration, S3, Auto Scaling, CloudTrail; VMware, EMC Clariion/Symmetrix, Veritas, SAN, NAS
Virtual Infrastructure Technologies: Virtual machines, hypervisors, ESX hosts, VMware

Education: Bachelor's in Computer Science from JNTU, 2012

WORK EXPERIENCE

Resilient Solutions 21, New Mexico — Sep 2024 to Present
Embedded Software Engineer (DevSecOps)
Responsibilities:
- Hands-on experience with SDK-based bundle creation, handling everything from setting up new applications to deploying them securely with tools like Docker, Kubernetes, and Jenkins.
- Build and deploy applications focused on scalability and security, using CI/CD pipelines and version control tools like GitLab and GitHub.
- Implemented Agile DevOps practices by automating CI/CD pipelines using Jenkins, GitLab CI/CD, or CircleCI.
- Implemented CI/CD pipelines using Jenkins, GitLab CI, and Terraform to automate deployment of infrastructure and software updates in the data lake environment.
- Monitored sprint progress using Agile tools like Jira, Rally, or Azure DevOps.
- Skilled at creating Helm charts to automate deployments in Kubernetes, making application rollouts faster and more reliable across different environments.
- Implemented and managed infrastructure to support Databricks clusters on cloud platforms like AWS, Azure, or GCP using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
- Automated provisioning of compute, storage, and networking resources for ML workloads on AWS.
- Managed containerized environments using Amazon ECS, EKS, and Docker.
- Created and maintained reusable Puppet modules for consistent and scalable deployments.
- Used scripting knowledge in Bash, Python, and PowerShell to automate the app bundle creation process, saving time and reducing manual steps.
- Embedded Checkmarx One scans into Jenkins, GitHub Actions, and Azure DevOps pipelines, enabling real-time feedback on security risks during code commits and pull requests.
- Managed ETL workflows with tools like Apache Airflow and AWS Data Pipeline to ensure smooth data flow between various systems and the data lake.
- Deployed and maintained OpenShift Virtualization (KubeVirt) environments to run hybrid workloads, managing both VMs and containers within a unified platform.
- Automated provisioning and lifecycle management of virtual machines using GitOps (ArgoCD) and Infrastructure as Code (Ansible/Terraform).
- Integrated VM workloads into existing CI/CD pipelines using Tekton and Jenkins, enabling consistent deployment strategies across containers and virtualized apps.
- Implemented VM backup, snapshot, and disaster recovery strategies using native OpenShift Virtualization features and persistent storage backends.
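The Helm-chart deployments mentioned above typically center on a templated Deployment manifest like the fragment below. This is a hypothetical sketch assuming the standard chart layout with `_helpers.tpl` defining `app.name`/`app.fullname`; the image, port, and replica values are placeholders.

```yaml
# Hypothetical templates/deployment.yaml fragment of a Helm chart.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "app.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ include "app.name" . }}
  template:
    metadata:
      labels:
        app: {{ include "app.name" . }}
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 8080
```

Promoting the same chart across environments then reduces to supplying a different values file per environment.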
- Used DynamoDB Streams to trigger real-time data processing via AWS Lambda and enabled encryption using DynamoDB's built-in encryption mechanisms.
- Built and maintained CI/CD pipelines for ADF and infrastructure deployments using Azure DevOps, GitHub Actions, and YAML pipelines.
- Automated Data Factory pipeline deployments using ARM templates and ADF Utilities, enabling version-controlled, environment-specific deployments.
- Anticipated potential system failures or degradation by analyzing trends, patterns, and anomalies in observability data before they escalated into incidents.
- Led the postmortem process after critical incidents, identified lessons learned, and implemented improvements in observability practices to avoid similar incidents in the future.
- Implemented a Continuous Delivery framework using Jenkins, Puppet, Maven, and Nexus in the Linux environment.
- Built dashboards in Datadog to monitor the infrastructure; configured and installed Splunk agents on Linux servers and containers to forward logs to Splunk.
- Used tools like Prometheus, Grafana, New Relic, or Datadog to monitor API response times, error rates, and system uptime.
- Managed the secure upload of app bundles to Image Dropbox, handling version control, compliance, and secure storage in cloud platforms like AWS and Google Cloud.
- Configured continuous backups with Amazon DocumentDB's automated backup feature and tested restore operations to validate disaster recovery plans.
- Set up monitoring tools to observe the health and performance of the APIs between Flintfox, CMT, and other integrated systems.
- Worked closely with system admins and various teams, using Jenkins and Ansible to keep infrastructure running smoothly and troubleshoot pipeline issues as they came up.
- Implemented monitoring solutions (e.g., Prometheus, Grafana) for Node.js applications, providing real-time insights into performance, resource utilization, and potential issues.
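The DynamoDB Streams → Lambda pattern mentioned above can be sketched as a handler that inspects stream records. The event shape follows the documented stream record format; the table keys and the counting logic are purely illustrative.

```python
# Hedged sketch of a Lambda handler consuming a DynamoDB Streams batch.

def handler(event, context=None):
    """Count new items arriving on the stream and return a small summary."""
    records = event.get("Records", [])
    inserts = sum(1 for r in records if r.get("eventName") == "INSERT")
    return {"processed": len(records), "inserts": inserts}

# Example stream batch with one INSERT and one MODIFY record.
sample_event = {
    "Records": [
        {"eventName": "INSERT", "dynamodb": {"Keys": {"id": {"S": "1"}}}},
        {"eventName": "MODIFY", "dynamodb": {"Keys": {"id": {"S": "1"}}}},
    ]
}
```

In production the event arrives from the stream trigger; here the sample event just makes the handler's behavior concrete.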
- Wrote Terraform scripts and parser files of varying complexity as required.
- Worked on converting existing AWS infrastructure to serverless architecture, deploying via Terraform templates.
- Automated Spark job deployments using CI/CD pipelines (e.g., Jenkins, GitLab CI/CD) and integrated Spark with other big data tools (e.g., Hadoop HDFS, Kafka, Hive).
- Integrated Vertica with big data ecosystems (Hadoop, Spark) for analytics workflows.
- Integrated Cassandra with data processing tools like Apache Spark and Kafka for real-time analytics.
- Designed and implemented a framework using Python to generate Terraform templates that create security groups in GCP.
- Deployed and configured UNIX-based systems using automation tools like Ansible, Chef, and Puppet; implemented CI/CD pipelines on UNIX platforms for efficient software deployment and updates.
- Expertise in Azure Infrastructure as Code (IaC) tools such as Azure Resource Manager (ARM) and Terraform for creating and managing cloud resources.
- Automated IBM MQ administration tasks using Python scripts, Ansible, and the JMS API, reducing operational overhead.
- Set up monitoring and alerting for MQ queues and channels using IBM MQ Monitoring Agent, Prometheus, and Grafana.
- Integrated Redshift with other AWS services such as S3, DynamoDB, and Kinesis for seamless data ingestion; developed and managed data pipelines for Redshift using AWS Glue, Apache Spark, or custom ETL scripts.
- Took a security-first approach throughout the application lifecycle to enhance resilience and agility across all deployments within the DevSecOps framework.

Fidelity Investments, Westlake, Texas — Feb 2023 to Aug 2024
Sr. DevOps Engineer/Site Reliability Engineer
Responsibilities:
- Administered and engineered Jenkins for managing the weekly build, test, and deploy chain as a CI/CD process, with SVN/Git and a Dev/Test/Prod branching model for weekly releases.
- Migrated VMware VMs to AWS and managed services like EC2, S3 buckets, Route 53, ELB, and EBS using Ansible automation.
- Wrote installation scripts as Ansible playbooks to promote software (WebLogic, Tomcat, Oracle HTTP Server, Apache HTTP Server, Oracle Data Integrator, RabbitMQ, Redis Cache) from development environments through test environments and into production, ensuring each environment is correct and consistent so that any DEV/QE/STAGE environment can be rebuilt faster.
- Aligned IaC practices with Agile methodologies to deliver scalable and reliable infrastructure iteratively.
- Proficient in designing and implementing scalable, high-performance infrastructure solutions on GCP and OpenShift using load balancing, caching, and auto-scaling techniques.
- Migrated legacy applications onto the GCP platform and managed GCP services.
- Utilized and modified Jenkins Pipeline builds to automate the creation of Kubernetes clusters for application deployments.
- Designed, deployed, and maintained secure FTP/SFTP servers for automated file transfers, ensuring high availability and data integrity.
- Implemented version-controlled Puppet code using Git or other version control systems.
- Wrote and optimized Scala code, especially for distributed systems, to ensure performance and reliability in production.
- Designed and managed Amazon Redshift clusters to support scalable data warehousing and analytics solutions; automated Redshift cluster provisioning, resizing, and termination using AWS CloudFormation or Terraform.
- Set up automated reporting to provide regular updates on service performance, using tools such as Prometheus, Grafana, Datadog, and New Relic.
- Implemented security best practices for data systems (BigQuery, Cassandra), containerized applications (Docker), orchestration platforms (Kubernetes), and messaging systems (Kafka), ensuring secure communication, encryption, and access control across the stack.
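A promotion playbook like those described above can be sketched as a short Ansible fragment; the host group, package name, and modules shown are illustrative placeholders for the actual middleware installs.

```yaml
# Hypothetical Ansible playbook fragment: install software identically
# across DEV/QE/STAGE so any environment can be rebuilt the same way.
- name: Install and start Tomcat consistently across environments
  hosts: app_servers
  become: true
  tasks:
    - name: Install Tomcat
      ansible.builtin.yum:
        name: tomcat
        state: present

    - name: Ensure Tomcat is running and enabled at boot
      ansible.builtin.service:
        name: tomcat
        state: started
        enabled: true
```

Running the same playbook against each inventory group is what keeps the environments consistent.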
- Leveraged Kubernetes and Docker to containerize microservices, ensuring fast deployment and consistent environments for data lake applications.
- Set up real-time monitoring of Cassandra clusters using Prometheus, Grafana, or Datadog.
- Provisioned and managed infrastructure using Bicep and Terraform to deploy scalable, event-driven workloads with Azure Container Apps.
- Integrated Azure Monitor and Log Analytics with Azure Container Apps to enable proactive monitoring, performance tuning, and alerting.
- Implemented automated security scanning pipelines using Tenable integrated with AWS CodePipeline and AWS Lambda for continuous vulnerability assessment.
- Implemented VM backup, snapshot, and disaster recovery strategies using native OpenShift Virtualization features and persistent storage backends.
- Integrated monitoring tools and platforms (e.g., Prometheus, Grafana, ELK Stack, Datadog) to create a cohesive observability ecosystem.
- Managed and monitored VM performance and availability using Prometheus, Grafana, and the OpenShift monitoring stack; reduced downtime by 30% through proactive alerting.
- Enforced security best practices for VMs by configuring SELinux, TPM, and secure boot policies; audited access using OpenShift RBAC and compliance tooling.
- Developed automation scripts in Bash and Python for VM cloning, image management, and cluster maintenance tasks, reducing manual effort by 50%.
- Led the migration of legacy applications from VMware to OpenShift Virtualization, reducing infrastructure costs and improving deployment agility.
- Collaborated with platform and application teams to design scalable, fault-tolerant hybrid cloud architectures using Red Hat OpenShift and KubeVirt.
- Integrated Teradata with reporting tools like Tableau, Power BI, and Looker to enable advanced analytics; used IaC tools like Terraform to manage Teradata infrastructure and configurations.
- Monitored UNIX systems using tools like Nagios, Zabbix, or custom scripts to ensure optimal performance.
- Tuned kernel parameters, managed file systems, and optimized network configurations for enhanced system reliability.
- Proficient in Pivotal Cloud Foundry (PCF), with a background in installation, administration, and scaling of applications while ensuring high availability and reliability.
- Integrated Databricks with CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or Azure DevOps to automate deployment and testing processes for notebooks and jobs.
- Monitored and optimized BigQuery usage to manage costs effectively by tracking slots, query pricing, and storage usage.
- Automated data ingestion from S3, RDS, DynamoDB, and external sources into Redshift using AWS Data Pipeline and Lambda.
- Implemented and managed infrastructure to support Databricks clusters on AWS, Azure, or GCP using IaC tools such as Terraform or CloudFormation.
- Designed and implemented CI/CD pipelines for sea shark using Jenkins and GitLab.
- Utilized Kubernetes and Docker as the runtime environment for the Continuous Integration/Continuous Deployment system to build, test, and deploy; created Jenkins jobs to deploy applications to the Kubernetes cluster.
- Worked closely with the Development and QA teams on architecture and technologies; wrote scripts in JSON, Terraform, and Groovy to manage and monitor different levels of the server stack: process monitoring, memory monitoring, disk usage, and uptime monitoring.
- Configured VM availability sets using the Azure portal to provide resiliency for IaaS-based solutions and scale sets using Azure Resource Manager to manage network traffic.
- Deployed a managed Kubernetes cluster in Azure using Azure Kubernetes Service (AKS) and configured AKS clusters through various methods, including the Azure portal, Azure CLI, and template-driven deployment options such as Resource Manager templates and Terraform; managed Kubernetes charts using Helm.
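The disk-usage monitoring scripts mentioned above reduce to simple threshold logic. This sketch keeps the filesystem numbers injected as parameters so it stays testable; the 90% threshold is an illustrative default.

```python
# Hedged sketch of disk-usage alerting: pure threshold logic.

def disk_alert(used_bytes, total_bytes, threshold=0.9):
    """Return an alert string when usage crosses the threshold, else None."""
    usage = used_bytes / total_bytes
    if usage >= threshold:
        return f"disk usage {usage:.0%} exceeds {threshold:.0%}"
    return None
```

In a real monitor the numbers would come from `shutil.disk_usage("/")` on a schedule, with the alert string routed to email or a paging system.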
- Created reproducible builds of Kubernetes applications, managed Kubernetes manifest files, and managed releases of Helm packages.
- Selected the appropriate Azure service based on compute, data, or security requirements and leveraged Azure SDKs to interact with Azure services from applications.
- Worked on Azure IoT product development, including designing new LoRaWAN-based embedded Linux hardware and software.
- Experienced with PowerShell scripts to automate Azure cloud system creation, including end-to-end infrastructure, VMs, storage, and firewall rules.
- Spearheaded the adoption and implementation of IBM OpenShift for container orchestration, significantly improving application scalability and deployment efficiency.
- Configured and managed automated backup solutions, ensuring compliance with Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO).
- Converted existing Terraform modules that had version conflicts to CloudFormation templates; worked with Terraform to create stacks in AWS and updated Terraform scripts as required.
- Developed custom tools and scripts to enhance observability and monitoring capabilities.
- Leveraged Docker Engine to run multiple Tomcat instances as containerized app servers.
- Proficient in designing and implementing secure, scalable, and highly available serverless applications on AWS using frameworks such as the Serverless Framework, AWS SAM, and the AWS CDK; the CDK integrates seamlessly with other AWS tools, such as the AWS CLI and SDKs, for deployment.
- Automated weekly releases with Jenkins for compiling Java code.
- Worked on AWS Lambda for reverse DNS in a private cloud.
- Provisioned multiple AWS EKS clusters using a Terraform shared module and used Helm to do deployments, with ECR to store Docker images and Nginx as a reverse proxy.
- Experience with the Azure Machine Learning service for model training and deployment.
- Managed AWS EKS cluster node pools with multiple CPU/GPU instance types used for machine learning jobs.
- Pipelined application logs from app servers to Elasticsearch (ELK Stack) through Logstash.
- Supported 1000+ cloud instances; skilled with cloud command-line tools on both AWS and OpenStack.
- Automated deployment, scaling, and monitoring of AWS Glue resources using infrastructure-as-code (IaC) tools like AWS CloudFormation or the AWS CDK.
- Introduced automated deployment pipelines centered on PostgreSQL, enabling smooth integration of database updates within CI/CD workflows and notable improvements in both development and operational efficiency.
- Conducted regular disaster recovery drills and failover tests to validate recovery procedures and minimize downtime.
- Created an AWS RDS MySQL DB cluster and connected to the database through the RDS MySQL DB instance using the console.
- Managed infrastructure on Google Cloud Platform using various GCP services; configured and deployed instances in GCP environments and data centers, and familiar with Compute Engine.
- Deployed and optimized a two-tier .NET web application in Azure DevOps (ADO), using Repos to commit code, Test Plans to trigger application and unit tests, App Service for artifact deployment, and Application Insights to collect health, performance, and usage data.
- Created CI/CD pipelines for .NET and Python apps in Azure DevOps by integrating source code repositories such as GitHub, VSTS, and artifacts; created deployment areas such as testing, pre-production, and production environments in the Kubernetes cluster.
- Developed and maintained detailed DR runbooks and documentation to ensure smooth recovery during incidents.
- Set up Datadog monitoring across different servers and AWS services; built Datadog dashboards to monitor the infrastructure, and configured and installed Splunk agents on Linux servers and containers to forward logs to Splunk.
- Integrated AWS Glue with other AWS services and third-party tools to build end-to-end data pipelines, ensuring security, compliance, and reliability of Glue environments through best practices and automation.
- Implemented deployment automation using YAML scripts for large builds and releases.
- Implemented and wrote DB configurations to minimize AWS template usage and optimize the final product.
- Monitored cloud resource usage and costs across services and environments, using tools like AWS Cost Explorer, Azure Cost Management, and third-party FinOps platforms to analyze cost trends, identify cost drivers, and implement strategies to optimize spending without compromising performance or reliability.
- Automated CI/CD pipelines with HashiCorp tools, enabling rapid deployment and minimizing downtime during infrastructure updates.
- Implemented a Continuous Delivery framework using Jenkins, Puppet, Maven, and Nexus in the Linux environment.
- Handled integration of Maven/Nexus, Jenkins, Git, Confluence, and Jira; worked with Jira to create projects, assign permissions to users and groups, and create mail handlers and notification schemes.
- Planned and performed upgrades on Linux operating systems and hardware maintenance on HP and POWER servers, such as increasing memory and disk and replacing failed hardware.
- Worked with development teams and business areas to plan future capacity requirements, holding regular meetings to review usage and to create, revise, and report any new measurements required to manage mainframe or distributed environments.
- Installed and configured Dynatrace monitoring tools, created email alerts and threshold values in Dynatrace, and built Splunk/Dynatrace dashboards for application performance monitoring (APM).
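The cost-driver analysis described above can be sketched as a small aggregation over exported line items; the service names and spend figures are illustrative, standing in for a Cost Explorer export.

```python
# Hedged sketch: sum per-service spend and surface the top cost drivers.
from collections import defaultdict

def top_cost_drivers(line_items, n=2):
    """Sum spend per service and return the n most expensive services."""
    totals = defaultdict(float)
    for service, cost in line_items:
        totals[service] += cost
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Illustrative daily line items, as a cost tool might export them.
items = [("EC2", 120.0), ("S3", 30.0), ("EC2", 80.0), ("RDS", 90.0)]
```

Ranking totals like this is the first step before deciding where rightsizing or reserved capacity would pay off.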
- Worked on migrating from traditional MPPs (like Teradata and Netezza) to cloud platforms like AWS, Snowflake, and Databricks.

Environment: AWS, Azure, GCP, PCF, Kubernetes, Ansible, Groovy, Terraform, Splunk, Dynatrace, Suricata, Git, GitLab, Jira, JFrog Artifactory, Confluence, Python, Apache, Redis, API, TCP/IP, Grafana, Postman, Datadog, Helm charts, Puppet, and DNS.

Southwest Airlines, TX — Jun 2021 to Jan 2023
Senior Cloud Engineer/SRE
Responsibilities:
- Experience with Windows Azure IaaS: virtual networks, virtual machines, cloud services, resource groups, ExpressRoute, VPN, load balancing, application gateways, auto-scaling, and Traffic Manager.
- Experience configuring Azure Web Apps, Azure App Services, Application Insights, Application Gateway, Azure DNS, Traffic Manager, and Network Watcher; implemented Azure Site Recovery, Azure Backup, and Azure Automation.
- Deployed virtual machines with the Microsoft Monitoring Agent / Operations Management Suite (OMS) extension using PowerShell scripts.
- Created job chains with Jenkins Job Builder, parameterized triggers, and target host deployments; utilized many Jenkins plugins and the Jenkins API.
- Integrated testing automation into CI/CD pipelines to ensure quality and compliance with Agile principles.
- Built end-to-end CI/CD pipelines in Jenkins to retrieve code, compile applications, perform tests, and push build artifacts to Nexus Artifactory.
- Worked on batch and streaming ETL using Spark, Python, and Scala on Databricks/Azure Synapse for data engineering and machine learning workloads.
- Designed and orchestrated complex ETL/ELT pipelines in Azure Data Factory (ADF) to move and transform data across on-prem and cloud systems.
- Integrated ADF with Azure Blob Storage, Azure SQL Database, and Synapse Analytics to support enterprise-scale data ingestion and transformation workflows.
- Created CI/CD pipelines for .NET Core and Python apps in Azure DevOps by integrating source code repositories such as GitHub, VSTS, and artifacts; created deployment areas such as testing, pre-production, and production environments in the Kubernetes cluster.
- Used Azure SDKs and libraries for .NET Core to facilitate seamless integration with Azure services.
- Automated cluster provisioning, scaling, and maintenance using Terraform, Ansible, or Kubernetes.
- Implemented monitoring solutions (e.g., Prometheus, Grafana) for Node.js applications, providing real-time insights into performance, resource utilization, and potential issues.
- Expertise in Azure Infrastructure as Code (IaC) tools such as Azure Resource Manager (ARM) and Terraform for creating and managing cloud resources.
- Implemented and managed CI/CD pipelines for iOS applications using tools like Fastlane and Xcode Server, ensuring apps are built, tested, and deployed efficiently.
- Configured Amazon CloudWatch for real-time Redshift cluster monitoring, including disk usage, query performance, and I/O throughput; automated data ingestion from S3, RDS, DynamoDB, and external sources into Redshift using AWS Data Pipeline and Lambda.
- Implemented cross-region replication and snapshot management in AWS for cloud-based disaster recovery.
- Successfully implemented and configured Harness for various projects, optimizing deployment workflows.
- Designed and implemented a framework using Python to generate Terraform templates that create security groups in GCP.
- Collaborated with development and QA teams to support UNIX environments during application deployment; documented best practices and troubleshooting guides for UNIX-based systems.
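The Python framework for generating Terraform templates mentioned above can be sketched as a function that emits Terraform JSON syntax. The resource type shown is GCP's `google_compute_firewall` (GCP's analogue of security groups); the rule name, network, and ports are hypothetical.

```python
# Hedged sketch: render a firewall rule as Terraform JSON from Python.
import json

def firewall_tf(name, network, ports):
    """Return Terraform-JSON text for a google_compute_firewall resource."""
    resource = {
        "resource": {
            "google_compute_firewall": {
                name: {
                    "name": name,
                    "network": network,
                    "allow": [{"protocol": "tcp",
                               "ports": [str(p) for p in ports]}],
                }
            }
        }
    }
    return json.dumps(resource, indent=2)
```

Writing the output to a `.tf.json` file lets `terraform plan`/`apply` consume it directly, which is what makes Python-side generation composable with a normal Terraform workflow.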
- Implemented blue-green deployment strategies in Harness to minimize downtime and ensure seamless releases.
- Engineered and implemented scalable analytics solutions on Azure, focusing on Databricks; used Azure Data Factory and Databricks scheduling features to automate Databricks jobs.
- Implemented secure container lifecycle practices, including image scanning, private container registry access, and managed identity authentication for Azure services.
- Worked on CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI, integrating UCD into the workflow.
- Created Jenkins workflows for advanced deployment processes (DB execution, environment configuration changes, etc.) in both QA and pre-production environments.
- Deployed monitoring solutions for Teradata using Teradata Viewpoint or third-party tools like Prometheus and Grafana.
- Worked on converting existing AWS infrastructure to serverless architecture, deploying via Terraform templates.
- Worked in cloud computing models including IaaS, PaaS, and SaaS.
- Experience with the AWS CDK, which provides a library of reusable constructs for AWS services, making it easier to define cloud resources using high-level abstractions.
- Implemented centralized logging using the ELK Stack or Amazon OpenSearch Service for ML workflows and infrastructure.
- Worked with Compute Engine, Cloud Storage, BigQuery, VPC, Stackdriver, Load Balancing, and IAM.
- Built dashboards in Datadog to monitor the infrastructure; configured and installed Splunk.
- Worked on several Docker components, including Docker Engine, Hub, Machine, Compose, and Docker Registry; created Docker images, analyzed various Jenkins metrics, and provisioned containers in the Mesos orchestration platform.
- Integrated real-time monitoring and alerting systems to detect incidents and initiate recovery processes promptly.
- Installed and configured the Dynatrace monitoring tool and created email alerts and threshold values in Dynatrace for our environment.
- Automated Vertica deployment and schema updates using Jenkins, GitLab CI/CD, or custom scripts; used IaC tools like Terraform to manage Vertica infrastructure across environments.
- Skilled in implementing CI/CD pipelines, automating the build, testing, and deployment of Pivotal Cloud Foundry (PCF) applications, and seamlessly integrating PCF deployments with source control systems to enhance delivery efficiency.
- Provisioned multiple EKS clusters using a Terraform shared module and used Helm to do deployments, with ECR to store Docker images and Nginx as a reverse proxy.
- Collaborated with engineering teams to embed DR best practices into CI/CD pipelines and operational workflows.
- Worked with Docker and Kubernetes on multiple cloud providers, from helping developers build and containerize their applications (CI/CD) to deploying on public or private cloud.
- Created, managed, and performed container-based deployments using Docker images containing middleware and applications together; evaluated Kubernetes for Docker container orchestration.
- Utilized Kubernetes and Docker as the runtime environment for the Continuous Integration/Continuous Deployment system to build, test, and deploy; created Jenkins jobs to deploy applications to the Kubernetes cluster.
- Expert in using the AWS CLI to automate backups to S3 and EBS; created nightly AMIs for critical production servers.
- Deployed a managed Kubernetes cluster in Azure using Azure Kubernetes Service (AKS) and configured AKS clusters through various methods, including the Azure portal, Azure CLI, and template-driven deployment options such as Resource Manager templates and Terraform.
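The nightly AMI/backup automation mentioned above usually pairs snapshot creation with retention pruning. This sketch shows only the retention decision, with the snapshot list and 7-day window as illustrative assumptions.

```python
# Hedged sketch: decide which date-stamped backups fall outside retention.
from datetime import date, timedelta

def expired_snapshots(snapshots, today, keep_days=7):
    """Return names of snapshots older than the retention window."""
    cutoff = today - timedelta(days=keep_days)
    return [name for name, taken in snapshots if taken < cutoff]

# Illustrative snapshot inventory (name, date taken).
snaps = [("ami-2024-01-01", date(2024, 1, 1)),
         ("ami-2024-01-09", date(2024, 1, 9))]
```

In practice the inventory would come from the AWS CLI or API and the returned names would be fed to a deregister/delete step.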
- Proficient in designing and implementing scalable, high-performance infrastructure solutions on GCP and OpenShift, utilizing load balancing, caching, and auto-scaling techniques.
- Strong documentation skills, including documenting Kafka cluster configurations, best practices, and operational procedures, enabling smooth knowledge transfer and onboarding of new team members.
- Collaborated with cross-functional teams to design and implement CI/CD pipelines utilizing IBM OpenShift, reducing release cycles.
- Automated CI/CD pipelines and build infrastructure using Terraform, CloudFormation, Groovy, YAML, Bash, and Python scripting for AWS Lambda, providing one-click automation for non-CI/CD users.
- Created Lambda functions to automate snapshot backups on AWS and set up scheduled backups.
- Managed local software repositories such as GitLab, Stash, Artifactory, and Nexus to oversee version control and artifact management.
- Ensured seamless integration of DR capabilities into application lifecycle processes, supporting continuous improvement and scalability.
- Integrated Jenkins with Jira, Atlassian tools, and GitHub for streamlined software development processes.
- Skilled in developing custom Splunk apps and dashboards, leveraging Splunk's REST API, Splunk SDKs, and web frameworks to extend Splunk's functionality and meet specific business requirements.
- Automated deployment of all microservices to pull images from the private Docker registry and deploy to the Docker Swarm cluster.
- Integrated Backstage.io with other tools and services to create a seamless developer experience.
- Wrote Ansible playbooks from scratch in YAML; installed, set up, and troubleshot Ansible; created and automated platform environment setup.
- Built and maintained Docker container clusters managed by Kubernetes on GCP (Google Cloud Platform), using Linux, Bash, Git, GitLab, and Docker.
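One of the bullets above mentions Lambda functions automating snapshot backups; a minimal sketch, assuming boto3 (preinstalled in the AWS Lambda Python runtime), an EventBridge schedule, and an illustrative `Backup=true` tag convention:

```python
import datetime


def snapshot_description(volume_id, when):
    """Build a deterministic description for a nightly snapshot (illustrative format)."""
    return f"auto-backup-{volume_id}-{when:%Y-%m-%d}"


def lambda_handler(event, context):
    """Snapshot every EBS volume tagged Backup=true; invoked on a schedule.

    boto3 is imported lazily so the pure helper above is testable without
    AWS credentials.
    """
    import boto3

    ec2 = boto3.client("ec2")
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Volumes"]
    today = datetime.date.today()
    for vol in volumes:
        ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=snapshot_description(vol["VolumeId"], today),
        )
    return {"snapshots_created": len(volumes)}
```

Snapshot retention (deleting old snapshots) would typically be a second scheduled function or an AWS Data Lifecycle Manager policy.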
- Utilized Kubernetes and Docker as the runtime environment for the CI/CD system to build, test, and deploy.
- Wrote Chef cookbooks and recipes in Ruby to provision pre-prod environments consisting of Cassandra DB installations, WebLogic domain creations, and several proprietary middleware installations.
Environment: Azure, Jenkins, Chef, Nagios, ADF, PCF, Terraform, Splunk, Java/J2EE, .NET, Git, GitHub, GitLab, Bamboo, WebLogic, Docker, Nexus, GCP, Python, Bash, Chef Server, Tomcat, Grafana, Nginx, CentOS, Unix, JIRA, Sonar.

Centene Corporation, St. Louis, Missouri (Nov 2019 to May 2021)
AWS DevOps Engineer/SRE
Responsibilities:
- Technical design team member involved in Build and Release module development of new products.
- Collaborated with the DevOps team; responsible for specialization in Chef for cloud automation.
- Developed automated processes for builds and deployments using Jenkins.
- Created release plans; handled the definition, collection, analysis, and presentation of release project metrics on a weekly basis.
- Deployed systems on Amazon Web Services infrastructure services: EC2, S3, RDS, SQS, and CloudFormation.
- Applied AWS DMS tools to migrate on-premises databases to the cloud.
- Created Python scripts to manage resources deployed on AWS using AWS API calls.
- Maintained 5-6 testing environments and stood up production environments in AWS.
- Automated Redshift infrastructure provisioning and scaling using Terraform or AWS CDK; implemented CI/CD pipelines for Redshift schema and data deployment.
- Provisioned AWS servers using Chef recipes; implemented automated builds on QA and development servers in node server environments using cookbook modules.
- Developed, reviewed, and maintained comprehensive application design and architecture documentation to ensure systems adhered to disaster recovery (DR) best practices.
- Created Terraform configurations to manage resources deployed on AWS using AWS API calls.
- Provided automated solutions; installed and configured Jenkins/Hudson for automated deployments.
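The bullets above mention Python scripts managing AWS resources via API calls; a hedged sketch of one such script, stopping EC2 instances by tag (the function names and tag scheme are illustrative, and boto3 is imported lazily so the filter-building helper is testable offline):

```python
def tag_filters(**tags):
    """Translate keyword tags into the Filters structure the EC2 API expects."""
    return [{"Name": f"tag:{key}", "Values": [value]} for key, value in tags.items()]


def stop_instances_by_tag(**tags):
    """Stop all running EC2 instances carrying the given tags; requires AWS credentials."""
    import boto3

    ec2 = boto3.client("ec2")
    filters = tag_filters(**tags) + [
        {"Name": "instance-state-name", "Values": ["running"]}
    ]
    reservations = ec2.describe_instances(Filters=filters)["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids
```

For example, `stop_instances_by_tag(Environment="dev")` would stop every running dev-tagged instance, the kind of cost-control chore such scripts typically automate.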
- Established a build process using Jenkins for continuous integration and used Jenkins to automate deployments and enterprise-scale infrastructure configuration.
- Used Datadog's dashboarding capabilities to visualize RUM metrics such as load times, user actions, and errors; set up custom dashboards that correlate RUM data with backend and infrastructure metrics.
- Managed VMware virtual hardware, memory, NICs, and VDISKs.
- Migrated VMware VMs to AWS and managed services like EC2, S3 buckets, Route 53, ELB, EBS, etc. with Ansible.
- Wrote Python scripts to automate log rotation of multiple logs from web servers.
- Participated in disaster recovery testing exercises, analyzed outcomes, and implemented improvements for enhanced recovery efficiency.
- Created NAT and proxy instances in AWS and managed route tables, EIPs, and NACLs.
- Created a private cloud using Kubernetes that supports DEV, TEST, and PROD environments.
- Managed packages and patches on Linux and Solaris servers, including firmware upgrades and debugging.
- Handled build and release management using Jenkins.
- Supported physical and virtualized servers; collaborated with other engineers to support SANs (NetApp, Pure Storage) and VMware hosts.
Environment: AWS, Jenkins, Chef, Ansible, GCP, Python, Kubernetes, Helm, ELK Stack, Git, Datadog, Grafana, VMware, NetApp, Pure Storage.

Edward Jones, St. Louis, MO (Apr 2018 to Oct 2019)
AWS/DevOps Engineer
Responsibilities:
- Worked on AWS Lambda for reverse DNS for a private cloud.
- Migrated VMware VMs to AWS and managed services like EC2, S3 buckets, Route 53, ELB, and EBS using Ansible automation.
- Created and managed AWS services like IAM, EC2, VPC, S3, EBS, ELB, ECS, and ECR.
- Utilized and modified Jenkins pipeline builds to automate the creation of application deployments.
- Wrote and implemented automation scripts for deploying and configuring new servers, reducing the time to provision a new server.
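Bullets above mention Python scripts automating log rotation on web servers; a self-contained, standard-library-only sketch of that idea (the function name, timestamp suffix, and retention policy are illustrative assumptions):

```python
import gzip
import shutil
import time
from pathlib import Path


def rotate_logs(log_dir, pattern="*.log", keep=7):
    """Compress matching logs in log_dir and prune old archives beyond `keep`.

    Each active log is gzipped with a timestamp suffix, then truncated in
    place (rather than deleted) so the web server keeps its file handle.
    Returns the number of archives remaining after pruning.
    """
    log_dir = Path(log_dir)
    stamp = time.strftime("%Y%m%d%H%M%S")
    for log in sorted(log_dir.glob(pattern)):
        archive = log.with_name(f"{log.name}.{stamp}.gz")
        with open(log, "rb") as src, gzip.open(archive, "wb") as dst:
            shutil.copyfileobj(src, dst)
        log.write_text("")  # truncate instead of unlink
    # Remove the oldest archives once more than `keep` exist.
    archives = sorted(log_dir.glob("*.gz"), key=lambda p: p.stat().st_mtime)
    for old in (archives[:-keep] if len(archives) > keep else []):
        old.unlink()
        archives.remove(old)
    return len(archives)
```

In practice a cron entry such as `0 0 * * * python rotate.py /var/log/nginx` would drive it nightly; production setups often use logrotate instead, which this mimics.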
- Designed, implemented, and managed a CI/CD pipeline for a new product, resulting in a 2x decrease in the time to release a new product version.
- Collaborated on designing Terraform configurations and deploying them via Cloud Deployment Manager to spin up resources on Google Cloud Platform (GCP) services like Compute Engine, Cloud SQL, Cloud Load Balancing, Storage, networking services, disks, VPC, GKE, Pub/Sub, and Cloud NAT.
- Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web applications and security product templates.
- Technical design team member involved in Build and Release module development of new products.
- Collaborated with the DevOps team; responsible for specialization in Jenkins automation.
- Developed automated processes for builds and deployments using Jenkins.
- Deployed systems on Amazon Web Services infrastructure services: EC2, S3, RDS, and CloudFormation.
- Applied AWS DMS tools to migrate on-premises databases to the cloud.
- Used the AWS CLI to automate backups to S3 and EBS; created nightly AMIs for critical production servers.
- Created Terraform configurations to manage resources deployed on AWS using AWS API calls.
- Established a build process using Jenkins for continuous integration and used Jenkins to automate deployments and enterprise-scale infrastructure configuration.
- Managed VMware virtual hardware and memory; migrated VMware VMs to AWS and managed services like EC2, S3 buckets, Route 53, ELB, EBS, etc. with Ansible.
- Handled build and release management using Jenkins.
Environment: AWS, Jenkins, Ansible, Terraform, AWS CLI, AWS Lambda, VMware, GCP, EC2, S3, Route 53, ELB, EBS, CloudFormation, DMS, CI/CD, Python, Git.

Mindtree, Bellevue, WA (Feb 2016 to Mar 2018)
Linux Administrator
Responsibilities:
- Installed, configured, and upgraded Apache, JBoss, WebSphere, MQSeries, and Oracle and IBM databases on RHEL and OEL Linux systems, both manually and through Puppet.
- Performed remote system administration using tools like SSH, Telnet, and rlogin.
- Planned and implemented system upgrades including hardware, operating system, and periodic patches; applied appropriate support packages/patches to maintain system integrity.
- Performed capacity analysis; monitored and controlled disk space usage on systems.
- Monitored system activities and fine-tuned system parameters and configurations to optimize performance and ensure security of systems.
- Pushed files and updates using CFEngine.
- Created profiles and manifests for new servers and pushed them to servers using Puppet.
- Responsible for maintenance of development tools and utilities; maintained shell and Perl automation scripts.
- Created virtual machines using Xen, deployed operating systems, and managed hardware.
- Created and installed WebLogic application servers and deployed WAR and JAR files on them, on both Linux and Solaris servers.
- Installed and configured JBoss 4.3 on Linux and UNIX servers, integrating it with the Apache server.
- Monitored various services using Service Management Facility and service administration tools.
- Installed and configured Veritas NetBackup 6.5 on Linux and Solaris servers; created backup policies.
Environment: Linux, Apache, JBoss, WebSphere, MQSeries, Oracle, IBM Databases, RHEL, OEL, Puppet, SSH, Telnet, rlogin, system upgrades, patch management, capacity analysis, disk space monitoring, CFEngine, Puppet manifests, shell scripting, Perl automation, Xen, WebLogic, WAR, JAR, Solaris, Veritas NetBackup 6.5, Service Management Facility, service administration.

CSX, Florida (Jan 2014 to Jan 2016)
Linux Administrator
Responsibilities:
- Handled installation, configuration, backup, recovery, maintenance, and support of Red Hat Linux and Solaris.
- Troubleshot day-to-day issues with various servers on different platforms.
- Performed file system administration, set up disk quotas, and provided solutions on SAN/NAS storage.
- Configured and administered clustered servers in a SUSE Linux environment.
- Configured backups on newly built servers and monitored failed backups.
- Installed and configured Ubuntu and CentOS on remote servers and desktop servers.
- Installed patches and packages using RPM and YUM on Red Hat Linux.
- Installed and configured Apache/Tomcat web servers.
- Provided 24x7 on-call support in debugging and fixing issues related to Linux and Solaris.
- Handled HP-UX installation and maintenance of hardware/software in production and development.
- Responsible for maintenance of source control systems Subversion and Git; created repositories according to the required structure with branches, tags, and trunks; created hooks and managed permissions on branches for Git.
- Responsible for designing and deploying best SCM processes and procedures.
- Installed and configured SSH servers on Red Hat/CentOS Linux environments.
- Created and maintained several build definitions and published profiles to manage automated builds in an Agile/SCRUM environment; attended sprint planning sessions and daily sprint stand-up meetings.
- Configured application servers (JBoss) to deploy code.
Environment: Red Hat Linux, Solaris, SUSE Linux, SAN/NAS, clustered servers, Ubuntu, CentOS, RPM, YUM, Apache, Tomcat, HP-UX, Subversion, Git, SSH, Agile/SCRUM, JBoss.