DevOps Engineer with AWS & CI/CD Expertise

Sree Charan Reddy is a seasoned DevOps Automation Engineer with over 10 years of experience in IT, specializing in cloud management, CI/CD, and configuration management using tools like Ansible, Jenkins, and Terraform. He has extensive expertise in AWS and Azure cloud services, along with a strong background in software development life cycle and automation of deployment processes. His professional experience includes roles at Accenture and TikTok, where he managed cloud infrastructure, optimized CI/CD pipelines, and implemented scalable architectures.


Sree Charan Reddy

Sr. DevOps Automation Engineer ● Build & Release Engineer ● AWS Engineer ● Site Reliability Engineer

Professional Summary:

10+ years of experience in the IT industry as a Build/Release/Deployment/Operations (DevOps) engineer on AWS, with a good understanding of SCM principles and best practices in Agile, Scrum, and Waterfall methodologies; specialist in cloud management.

 Expertise in configuration management tools like Ansible, CHEF, and PUPPET, and in CI/CD with Jenkins.
 Extensively worked with version control systems GIT, SVN (Subversion), GitLab, and Bitbucket.
 Extensively used build utilities like MAVEN and ANT for building jar, war, and ear files.
 Experience with build automation tools like Jenkins, Ant, Maven, Gradle, Xcode, SonarQube, and Nexus (artifact repository).
 Expert in CHEF/PUPPET/Ansible as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and proactively manage changes.
 Experience in designing, architecting, and implementing scalable cloud-based web applications using AWS and GCP.
 Experience in automating the build, deployment, and software release processes for various Java/J2EE applications.
 Experience using Docker containers along with Kubernetes as an orchestration platform.
 Exposed to all aspects of the software development life cycle (SDLC), such as analysis, planning, development, testing, implementation, and post-production analysis of projects.
 Proficiency in multiple databases like MarkLogic, Cassandra, MySQL, Oracle, and MS SQL Server.
 Ability to deploy developed code on Apache Tomcat/JBoss, IIS 7, WebSphere, and WebLogic.
 Experienced with log monitoring tools like Splunk, Dynatrace, Nagios, PRTG, and ELK (Elasticsearch, Logstash, Kibana) to view log information, monitor nodes, and receive health and security notifications.
 Expertise in querying RDBMSs such as Oracle and SQL Server using SQL and PL/SQL for data integrity.
 Experience working with Azure cloud services (PaaS & IaaS), Azure monitoring, Azure portal, Azure IoT, Azure SQL, AKS, Visual Studio, etc.
 Cloud adoption and containerization security: cryptography, network and OSI-layer security and identity, message channel security, data encryption, API gateway security, SQL injection prevention, web services security, SSL, IAM, PKI, antivirus, IDS/IPS, and firewalls.
 Expertise in writing Ansible playbooks to automate installation of middleware infrastructure like Apache Tomcat and the JDK, and configuration tasks for new environments.
 Used Jenkins pipelines to drive all microservice builds out to the Docker registry and then deploy to Kubernetes; created and managed pods using Kubernetes.
 Good knowledge of Mesos, but worked mostly with Docker and Kubernetes.
 Migrated applications from bare metal to AWS using AWS Server Migration Service, AWS Database Migration Service, Direct Connect, and Snowball.
 Experienced in maintaining Hadoop clusters on AWS EMR.
 Experienced in automating Git tasks using Python and Golang.
 Experience in developing microservices and deploying them on Docker containers using Kubernetes.
 Implemented CI/CD tool upgrades, backup and restore, API calls, and DNS, LDAP, and SSL setup.
 Experience in using bug tracking systems like JIRA, Remedy and HP Quality Center.
 Good Experience in working with Hashicorp tools (Terraform, Consul, Nomad, Vault).
 Experience in developing and deploying IaaS, PaaS, SaaS applications within popular public cloud platforms
(AWS, Azure, Oracle cloud, GCP etc.) and using tools such as Git, OpenShift, Kubernetes, and Docker.
 Proficient in deploying applications that use NoSQL or relational databases.
 Configured VM availability sets using the Azure portal to provide resiliency for IaaS-based solutions, and scale sets using Azure Resource Manager to manage network traffic.
 Hands-on experience with AWS and its infrastructure, including EC2, IAM, ECS, ElastiCache, Elasticsearch, Relational Database Service (RDS), Redshift, VPC implementation, KMS, WAF, CloudTrail, CloudWatch, S3, CloudFront, AWS CLI scripting, ELB, Data Lake, Glacier, Route 53, Lambda, AWS IoT, DynamoDB, Elastic Beanstalk, SQS, SNS, and security group management.
 Set up scalability for application servers using the command line interface; set up and administered DNS in AWS using Route 53; managed users and groups using AWS Identity and Access Management (IAM).
 Defined AWS security groups, which acted as virtual firewalls controlling the traffic allowed to reach one or more AWS EC2 or Lambda instances.
 Strong proficiency in supporting Production Cloud environments (AWS, Azure, and VMWare) as well as
traditional managed hosted environments.
 Deployed AZURE IaaS virtual machines (VMs) and Cloud services (PaaS role instances) into secure VNets
and Subnets.
 Developed POCs for the use of Hashicorp Products Consul and Nomad using AWS/Terraform.
 Provided support on Production & Non-Production environment on JBoss, IBM WebSphere.
 Ability to write scripts in Bash, shell, Ruby, and Python.
 Developed microservice onboarding tools leveraging Python and Jenkins, allowing easy creation and maintenance of build jobs and Kubernetes deployments and services.
 Built and maintained Docker container clusters managed by Kubernetes; utilized Kubernetes and Docker as the runtime environment for the CI/CD (continuous integration and continuous deployment) system to build, test, and deploy.
 Excellent communication, interpersonal, intuitive, analytical, and leadership skills, with the ability to work efficiently both independently and in a team environment.
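The Ansible-driven middleware installs mentioned above can be sketched as a minimal playbook; the host group, package names, and module choices here are illustrative assumptions, not taken from any actual environment:

```yaml
# Hypothetical playbook: install a JDK and Apache Tomcat on new app hosts.
- hosts: appservers
  become: true
  tasks:
    - name: Install OpenJDK
      yum:
        name: java-1.8.0-openjdk
        state: present

    - name: Install Tomcat
      yum:
        name: tomcat
        state: present

    - name: Ensure Tomcat is running and enabled at boot
      service:
        name: tomcat
        state: started
        enabled: true
```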

Technical Skills:

Operating Systems Windows, UNIX, Linux, Ubuntu, RHEL, CentOS
Source control tools Subversion, GIT, GitHub, GitLab, Bitbucket
Build Tools ANT, MAVEN, Gradle, Packer
CI/CD Tools Hudson/Jenkins, Bamboo, GitLab Actions, VSTS, Build Forge
Repositories Nexus, Artifactory
Automation Tools PUPPET, CHEF, Ansible, Terraform
Cloud Computing AWS, OpenShift, Microsoft Azure, GCP, Oracle Cloud
Languages Shell, Groovy, Python, Java/J2EE, Golang
Tracking tools Atlassian JIRA, BMC Remedy, Bugzilla
Web/App servers WebLogic, WebSphere, Apache Tomcat, JBoss
Databases Oracle, SQL Server, MySQL, GraphQL, MarkLogic
Containers Docker, Kubernetes, GKE, Docker Swarm, OpenShift
Monitoring Platforms Nagios, Splunk, CloudWatch, New Relic, Dynatrace, ELK

Professional Experience:
Accenture Federal Services, VA Oct 2024 – Present
Sr. Cloud Infrastructure/DevOps Engineer

Responsibilities:
 Managed and optimized AWS networking services including VPC, Transit Gateway, Route 53, Network
Firewall, and Load Balancers to ensure secure and scalable connectivity across environments.
 Designed and maintained modular Terraform codebases with remote state (S3, DynamoDB), workspaces,
and dynamic variables to provision consistent and reusable infrastructure.
 Automated Terraform deployments through CI/CD pipelines, improving provisioning speed and reducing
manual input errors.
 Administered and upgraded Red Hat Enterprise Linux systems (RHEL 7 to 8.10), ensuring system
availability and compliance with latest standards.
 Created and maintained Ansible playbooks to automate system configuration, patching, and deployment
tasks across environments.
 Utilized AWS services including EC2, ELB, IAM, RDS, Lambda, S3, CloudFormation, CloudTrail,
CloudFront, EFS, EMR, SQS, and SNS for scalable infrastructure and automation workflows.
 Integrated Symantec Protection Engine (Symantec PE) into an in-house file scanning application to enable
real-time virus/malware detection.
 Designed and implemented scalable architecture using Amazon SQS to queue file scan requests and
ensure asynchronous, fault-tolerant processing.
 Leveraged Amazon SNS for event-driven notifications to downstream services and stakeholders on scan
completion, failures, or quarantine actions.
 Developed microservice-based components for file ingestion, scanning, and reporting, improving
modularity and maintainability.
 Collaborated with cross-functional teams to enhance features without altering core functionalities.
 Automated infrastructure activities, including deployment, application server setup, and stack monitoring, by
integrating Ansible with Jenkins.
 Served as a technical lead, mentoring junior engineers and driving an automation-first culture to reduce
manual processes.
 Translated legacy CloudFormation templates to Terraform, standardizing infrastructure provisioning.
 Responsible for driving, supporting, and troubleshooting deployments of applications to the TEST (QA), Pre-Production (IMPL), and Production environments.
 Led a POC on OpenShift, deploying containerized applications to validate scalability, resiliency, and
microservices architecture.
 Configured projects, routes, persistent storage, and RBAC policies to ensure secure and high-availability
deployments.
 Performed performance benchmarking and resource optimization, providing recommendations for
production readiness.
 Documented POC findings, best practices, and deployment templates to guide future OpenShift initiatives.
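The queue-driven scan architecture described above (SQS for scan requests, SNS for notifications, quarantine on detection) can be illustrated with a small pure-Python sketch of the routing decision. The class and function names, verdict strings, and message formats are all hypothetical stand-ins; the in-memory `Notifier` stands in for an SNS topic:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Notifier:
    """Stand-in for an SNS topic: records published messages in memory."""
    published: List[str] = field(default_factory=list)

    def publish(self, message: str) -> None:
        self.published.append(message)

def handle_scan_result(filename: str, verdict: str, notifier: Notifier) -> str:
    """Route a scan verdict: notify downstream on completion,
    quarantine on infection, retry on scan failure."""
    if verdict == "CLEAN":
        notifier.publish(f"scan-complete: {filename}")
        return "released"
    if verdict == "INFECTED":
        notifier.publish(f"quarantined: {filename}")
        return "quarantined"
    notifier.publish(f"scan-failed: {filename}")
    return "retry"
```

In the real pipeline, the equivalent of `handle_scan_result` would consume SQS messages asynchronously, so a failed scan simply returns to the queue for another attempt.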

Environment: AWS, Ansible, Jenkins, MarkLogic, Terraform, Splunk, JBoss, Apache, HAProxy, RHEL 8, AL2, Confluence, Jira, SharePoint, New Relic, GitHub, CloudWatch, Maven, Gradle, Nexus, Red Hat OpenShift

TikTok Inc, CA Feb 2022 – Sep 2024


Site Reliability Engineer
Responsibilities:
 Handled on-call responsibilities for multiple services like Bytecycle, Goofy Deploy, Release Manager
delivering prompt and effective solutions to user issues and ensuring high availability and reliability.
 Developed and maintained Terraform scripts to provision alarms for the Luban cluster in OCI, automating monitoring and enhancing infrastructure reliability.
 Coordinated with the CN teams to synchronize the ROW (Rest of World) version after successful user testing, ensuring smooth and consistent deployment of updates in the TTP region while maintaining system reliability.
 Collaborated with multiple teams on a software assurance project, upgrading the JDK to the latest version to address JDK 8 security vulnerabilities.
 Conducted regular sync-ups with CN counterparts to propose new features and bug fixes based on user requests and performance issues.
 Well experienced in troubleshooting any part of the lifecycle services within Bytecycle, including RabbitMQ, Bytedoc, MySQL RDS, TCE, TLBs, etc.
 Worked on the BITS TTP review center with the R&D team to allow TTP tickets from the ROW region by relying on USTS team members.
 Troubleshot various issues for multiple atoms in Bytecycle and provided solutions to end users for seamless deployment of their microservices.
 Worked on syncing Libra layers from the ROW region to the TTP region so that users can run A/B tests using Release Manager before upgrading to the latest versions.
 Set up RM pipelines with the right configurations for the TCE cluster to allow traffic from the TLB to the experimental group (latest version) and the control group (base version).
 Worked with multiple atom developers to support their needs and publish atoms to the Bytecycle market, where they can be used in template configurations.
 Developed and customized Grafana dashboards to monitor key metrics for various system components within Goofy Deploy and Release Manager, enabling real-time insights.
 Managed database tasks, including creating and altering tables, and provided essential information to support efficient troubleshooting and issue resolution.
 Implemented Dynatrace in Bytecycle as a POC to provide better metrics to the NOC team.
 Worked with the CN team on troubleshooting routing issues for Goofy Deploy and Bytecycle with the implementation of the operational gateway in the TTP environment.
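A Terraform-provisioned OCI alarm of the kind described above might look roughly like this; the resource attributes follow the OCI provider's `oci_monitoring_alarm` schema, but the names, metric query, and destination are placeholders, not the actual Luban configuration:

```hcl
# Hypothetical alarm on cluster CPU utilization via the OCI Terraform provider.
resource "oci_monitoring_alarm" "cluster_cpu_high" {
  compartment_id        = var.compartment_ocid   # placeholder variable
  metric_compartment_id = var.compartment_ocid
  display_name          = "cluster-cpu-high"
  namespace             = "oci_computeagent"
  query                 = "CpuUtilization[5m].mean() > 85"
  severity              = "CRITICAL"
  destinations          = [oci_ons_notification_topic.oncall.id]
  is_enabled            = true
}
```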

Environment: Oracle Cloud, Jenkins, SCM, TCE, Release Manager, TLB, Kubernetes, Goofy Deploy, Bytecycle, Bytedoc, MySQL, Grafana, RabbitMQ, Codebase/GitLab, Dynatrace.

Accenture Federal Services, VA June 2019 – Jan 2022


Sr. Systems/DevOps Engineer

Responsibilities:
 Worked on various middleware roles for installing and configuring Red Hat Apache, JBoss servers, and HAProxy.
 Created a Jenkinsfile for a multibranch pipeline to run jobs on various components in parallel rather than waiting in a queue, which decreased total deployment time.
 Worked with the testing team to automate regression testing, using Gradle builds and Jenkins to run the regression job via API calls.
 Worked with the database team on automating the MarkLogic New Relic plugin for monitoring the MarkLogic servers' CPU %, payload, and number of calls made from the JBoss batch servers.
 Worked on updating the playbooks and roles from Ansible 2.3 to Ansible 2.9.5.
 Worked on upgrading the Linux OS on CentOS 6 and RHEL 7 servers and created custom unit scripts for services like Apache, HAProxy, and the New Relic plugins for HAProxy and httpd.
 Worked with the testing team to implement Selenium testing for the first time and created a Jenkins job for one-click build/deployment.
 Built AMIs using Packer code with preinstalled components and integrated them with Terraform for provisioning servers.
 Worked on CA API Gateway installation and configuration using Ansible roles.
 Worked on automating the plan-data load process using Ansible playbooks and CoRB jobs (XQuery), which decreased the total process time from 2 hours (manual process) to 20 minutes (new automated Jenkins job).
 Leveraged AWS S3 buckets for uploading objects from different AWS accounts and extracting objects to run validation queries for the plan-data load process.
 Worked on encryption of connection strings using Ansible Vault and the Jasypt jar for the conf files, with Jasypt properties files to decrypt while making the XQuery calls.
 Maintained all logs, worked on issues, and ensured resolutions according to quality assurance tests for all production procedures.
 Performed start and stop of app instances, components, and queue managers as part of troubleshooting, and as pre-tasks to avoid exceptions, errors, and timeouts observed while processing data.
 Continuously tested applied changes in lower test environments to check for any bugs/failures that might occur in future deployments.
 Performed code cleanup/review in the GitHub repository for features that had been deprecated or removed.
 Collaborated in cross-functional team meetings on enhancements of new features without changing application functionality.
 Worked on automation of various infrastructure activities like continuous deployment, application server setup, and stack monitoring using Ansible playbooks, and integrated Ansible with Jenkins.
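The multibranch parallel deployment described above can be sketched as a declarative Jenkinsfile stage; the component names and the deploy script are illustrative assumptions:

```groovy
// Sketch of a parallel deploy stage: each component deploys concurrently
// instead of queueing behind the others.
pipeline {
    agent any
    stages {
        stage('Deploy components in parallel') {
            parallel {
                stage('apache')  { steps { sh './deploy.sh apache' } }
                stage('jboss')   { steps { sh './deploy.sh jboss' } }
                stage('haproxy') { steps { sh './deploy.sh haproxy' } }
            }
        }
    }
}
```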
Environment: AWS, Ansible, Jenkins, MarkLogic, Terraform, Splunk, JBoss, Apache, HAProxy, RHEL 7, Active Directory, Confluence, Jira, SharePoint, New Relic, GitHub, CloudWatch, Maven, Gradle, Nexus

Thales Avionics, Melbourne, FL Nov 2018 - May 2019


AWS DevOps Engineer

Responsibilities:
 Experience running LAMP (Linux, Apache, MySQL, and PHP) systems in an agile, quickly scaling cloud environment.
 Expertise in developing Ansible playbooks for the AWS environment.
 Implemented Ansible ARA on our production environment.
 Actively worked with various development teams on multiple tasks.
 Created a POC for Molecule based on Ansible, moving to a TDD (Test-Driven Development) environment.
 Created HDP and HDF clusters on AWS using Ansible scripts.
 Enabled Molecule on Bamboo, where it checks for errors when code is pushed to Bitbucket and merged to the develop branch.
 Checked disk space utilization of UNIX and Linux servers and cleared space to keep applications running smoothly.
 Created and configured Ansible playbooks to automatically install packages from a repository, change the configuration of remote machines, and deploy new builds, and configured them on the Ansible server so that other users can run them with the push of a button.
 Performed cost analysis for the HDF & HDP clusters and compared costs between AWS and Azure.
 Designed an Ansible playbook for MS SQL silent installation and configuration with zero downtime on Windows 2012/2016.
 Wrote playbooks to manage IAM roles and security groups in AWS.
 Installed a registry for local upload and download of Docker images, including from Docker Hub.
 Developed Ansible playbooks to test connectivity, install RPMs, and serve various other purposes across Red Hat Linux machines.
 Created and managed S3 buckets via the CLI and stored DB logs and backups.
 Set up CloudWatch for monitoring various AWS-related services.
 Worked on different OSes like RHEL, CentOS, Ubuntu, Linux AMI, Windows 2012 R2, etc.
 Used AWS cloud services to launch Linux and Windows machines, created security groups, and wrote basic PowerShell scripts to take backups and mount network shared drives.
 Performed deployment of Amazon EC2 instances in the AWS environment; provisioned EC2 instances, implemented security groups, and administered VPCs.
 Automated installation and hosting with AWS CloudFormation and PowerShell scripts.
 Created and handled multiple Docker images, primarily for middleware installations and domain configurations.
 Worked with the automation testing team; created a Jenkins server and installed Selenium Grid and Docker for test purposes.
 Used Kafka to collect website activity and for stream processing.
 Implemented Kafka security features using SSL and a Kafka cluster with Kerberos.
 Worked with PRTG Network Monitor and created scripts for adding devices to the PRTG server.
 Created custom sensors for the ELK stack and AMQ brokers in the PRTG server.
 Built Hadoop servers and configured them as Kafka/Storm nodes per the Big Data team's requirements.
 Installed various big data tools such as Ambari, ZooKeeper, Apache NiFi, Hadoop, and Kafka.
 Performed S3 bucket creation, set bucket and IAM role-based policies, and customized the JSON templates.
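S3 bucket policies of the kind mentioned above are JSON documents; a minimal generator might look like this sketch (the function, bucket, and role names are placeholders, and a real policy would usually carry more statements):

```python
import json

def read_only_bucket_policy(bucket: str, role_arn: str) -> str:
    """Build a minimal S3 bucket policy granting one IAM role read access
    to every object in the bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowRoleRead",
            "Effect": "Allow",
            "Principal": {"AWS": role_arn},
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    return json.dumps(policy)
```

The emitted JSON could then be attached with `aws s3api put-bucket-policy` or the equivalent SDK call.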
Environment: AWS, Azure, Ansible, Python, PowerShell, OIDC, WAF, Bamboo, Jenkins, Jira, Apigee, Terraform, Confluence, Windows, Linux, Active Directory, Docker, PRTG monitoring tool, Dynatrace, Barracuda, ELK stack, Ambari, Apache Tomcat, JBoss, IAM.

Nationwide Insurance, Columbus, OH Dec 2016 - Oct 2018


Sr. DevOps / AWS Automation Engineer

Responsibilities:
 Created and maintained continuous integration (CI) using Jenkins and Maven across different environments to facilitate an agile development process that is automated and repeatable, enabling teams to safely deploy code many times a day while ensuring operational best practices are supported.
 Used MAVEN as a build tool on Java projects for the development of build artifacts from the source code.
 Designed and documented CI/CD tools configuration management.
 Responsible for orchestrating CI/CD processes by responding to Git triggers, human input, dependency chains, and environment setup.
 Orchestrated and migrated CI/CD processes using CloudFormation and Terraform templates, and containerized the infrastructure using Docker, which was set up in AWS VPCs.
 Worked with Terraform key features such as infrastructure as code, execution plans, resource graphs, and change automation.
 Deployed web applications using Amazon CloudFront and configured AWS WAF to protect the deployed web applications from SQL injection attacks.
 Involved in setting up a microservices architecture for application development.
 Integrated microservices with Docker containers based on Jenkins pipelines for all IoT services.
 Managed user access control, triggers, workflows, hooks, security, and repository control in Bitbucket.
 Created, configured, and administered Jenkins servers with master-slave configurations as needed. Automated AWS EC2/VPC/S3/Route 53/IAM/CloudFormation/ELB-based infrastructure through CHEF, Vagrant, and Bash scripts; implemented CHEF to deploy the builds for Dev, QA, and production.
 Used Terraform in AWS Virtual Private Cloud (VPC) to automatically set up and modify settings by interfacing with the control layer.
 Experience supporting data analysis projects using EMR on the AWS cloud and exporting and importing data into S3.
 Planned, implemented, and managed Splunk monitoring and reporting infrastructure.
 Worked on managing the private cloud environment using CHEF.
 Enhanced the existing framework by fixing open issues using ReadyAPI with the Groovy scripting tool.
 Developed CHEF cookbooks to install and configure Apache Tomcat, Jenkins, and Rundeck, and for deployment automation.
 Slimmed and fine-tuned the enterprise JBoss application server image and deployed applications on JBoss clusters.
 Used the Jenkins AWS CodeDeploy plugin to deploy to AWS.
 Created continuous integration and continuous delivery pipelines for build and deployment automation.
 Built pipelines using VSTS to deploy applications into the AWS cloud and managed release definitions on various environments.
 Involved in configuring continuous integration (CI) from source control, setting up build definitions within Visual Studio Team Services (VSTS), and configuring continuous delivery (CD) to Azure Web Apps.
 Created Terraform scripts for EC2 instances, Elastic Load Balancers, and S3 buckets.
 Used Node.js and Spring Boot applications for microservices and DevOps processes.
 Configured VSTS to migrate applications from on-premises to the AWS cloud.
 Strong experience implementing data warehouse solutions in AWS Redshift; worked on various projects to migrate data from on-premise databases to AWS Redshift, RDS, and S3.
 Designed AWS architecture, cloud migration, AWS EMR, DynamoDB, Redshift, and event processing using Lambda functions.
 Implemented Golang to generate AWS CloudFormation templates.
 Wrote build automation scripts for SQL database maintenance using PowerShell and Groovy.
 Identified and troubleshot system outages, installation issues, and application code for JBoss applications running on Linux.
 Set up WebLogic and JBoss domains and configured various resources like JMS resources, keystores and truststores, SSL certs, heap settings, and integration with Microsoft AD for authentication.
 Deployed and configured WAR and JAR files to AWS instances using Ansible playbooks.
 Implemented containerized workflows into customer environments through use of Docker tools and supporting technologies such as Jenkins, Consul, and other open-source codebases.
 Worked with CHEF Enterprise, hosted as well as on-premise; installed the workstation, bootstrapped nodes, wrote recipes and cookbooks and uploaded them to the CHEF server; managed on-site OS/applications/services/packages using CHEF, as well as AWS EC2/S3/Route 53 and ELB with CHEF cookbooks.
 Enabled disk encryption using certificates for IaaS virtual machines, for both OS and data volumes.
 Performed deployment of Amazon EC2 instances in the AWS environment; provisioned EC2 instances, implemented security groups, and administered VPCs.
 Experience using deployment tools like Build Forge and IBM UrbanCode/uDeploy.
 Added monitoring checks for critical failure points with Datadog and AWS Lambda.
 Maintained and managed Nomad and Consul key-value storage for a microservice architecture using Docker for services.
 Installed and configured JBoss on CentOS and Red Hat servers and deployed multiple Java applications on the JBoss server.
 Used AWS cloud services to launch Linux and Windows machines, created security groups, and wrote basic PowerShell scripts to take backups and mount network shared drives.
 Implemented Docker to provision slaves dynamically as needed; created and maintained Dockerfiles in the source code repository, built images, and ran containers for applications and testing purposes.
 Designed a shell script for Redshift cluster shutdown/startup automation based on the snapshots.
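Generating CloudFormation templates programmatically, as mentioned above, amounts to emitting the template JSON from code. The resume names Golang for this; the analogous sketch below uses Python, and the resource name and AMI ID are placeholders:

```python
import json

def ec2_template(instance_type: str = "t3.micro") -> str:
    """Emit a minimal CloudFormation template for a single EC2 instance.
    (The AMI ID is a placeholder; a real template would parameterize it.)"""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": instance_type,
                    "ImageId": "ami-00000000",  # placeholder AMI
                },
            }
        },
    }
    return json.dumps(template, indent=2)
```

The output could be fed straight to `aws cloudformation create-stack --template-body`.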

Environment: AWS, ANT, Jenkins, Git, Web Sphere, CHEF, JBoss Application Servers, VSTS, NodeJS,
PowerShell, Groovy, Lambda, Terraform, Splunk, Nagios, Apache Tomcat, Agile/Scrum, SDLC, Docker, Windows,
Linux.

ConocoPhillips, Houston, TX Nov 2015 - Dec 2016


Sr. DevOps Engineer

Responsibilities:
 Set up the automation environment for application teams when necessary and helped them through the process of build and release automation.
 Used MAVEN as a build tool on Java projects for the development of build artifacts from the source code.
 Developed scripts for AWS orchestration.
 Created and maintained Ant [Link] and Maven [Link] for performing the builds.
 Automated weekly releases with ANT/Maven scripting for compiling Java code, debugging, and placing builds into the Maven repository.
 Created branches and performed merges in version control systems GIT, GitHub, SVN, and Stash.
 Automated setting up server infrastructure for the DevOps services using PUPPET/Ansible, shell, and Python scripts.
 Integrated Splunk with a wide variety of legacy data resources and commercial security tools that use various protocols.
 Experienced in monitoring servers using Nagios, Splunk, and CloudWatch.
 Configured and ensured connections to RDS databases running on MySQL engines.
 Leveraged Azure Automation, PowerShell, and Ansible to automate processes in the Azure cloud; integrated with Splunk for API traffic monitoring and health checks.
 Configured Azure Active Directory to manage users and groups, deployed DSC from Azure Automation to on-premises and cloud environments, and enabled directory synchronization between Windows Active Directory and public cloud directories like Azure Active Directory.
 Provided support on server administration, troubleshooting, and administration of JBoss servers.
 Conceived, designed, installed, and implemented the PUPPET configuration management system. Automated the creation of subscriptions, storage accounts, and tables in Azure using Windows PowerShell.
 Experience dealing with Windows Azure IaaS: virtual networks, virtual machines, cloud services, resource groups, ExpressRoute, Traffic Manager, VPN, load balancing, application gateways, and auto-scaling.
 Built pipelines using VSTS to deploy applications into the Azure cloud and managed release definitions on various environments.
 Completely responsible for automated infrastructure provisioning (Windows and Linux) using PUPPET scripts.
 Responsible for automated installation of PUPPET Enterprise 2.7 and configuring the PUPPET master and PUPPET agents (both Windows and Linux environments) in an AWS VPC environment.
 Responsible for automated installation of software such as Java, Tomcat, and Certify on the PUPPET master and PUPPET agents using PUPPET scripts.
 Utilized CloudFormation and PUPPET by creating DevOps processes for a consistent and reliable deployment methodology.
 Deployed and hosted web applications in Azure and created Application Insights for monitoring the applications.
 Responsible for automated deployment of Java applications on the Tomcat server using PUPPET scripts.
 Responsible for automated identification of application servers and database servers using PUPPET scripts.
 Automated Nagios services for database servers, web servers, application servers, networks, file sizes, RAM utilization, and disk performance using Python scripts in Puppet.
 Deployed and configured the Tomcat application server for deploying Java and web applications.
 Worked with Docker for convenient environment setup for development and testing.
 Developed Python, Perl, shell, and PowerShell scripts on Windows systems for automation of the build and release process, and automated the deployment and release distribution process with shell and Perl.
 Worked on deployment procedures using middleware like Apache Tomcat, creating deploy scripts and settings for the production release.
 Maintained CentOS servers against data overload and updated user processes.
 Maintained the deployment properties for the ELK stack.
 Worked on connectivity and firewall issues for the installation and connectivity of the tools.
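Nagios checks scripted in Python, as described above, follow the plugin convention of exit codes 0/1/2 for OK/WARNING/CRITICAL. This is a minimal sketch with arbitrary thresholds; a real plugin would measure disk usage itself and call `sys.exit` with the code:

```python
def disk_check(percent_used: float, warn: float = 80.0, crit: float = 90.0) -> tuple:
    """Classify disk utilization the way a Nagios check script would:
    returns (exit_code, status_line) per the plugin convention."""
    if percent_used >= crit:
        return (2, f"CRITICAL - disk {percent_used:.0f}% used")
    if percent_used >= warn:
        return (1, f"WARNING - disk {percent_used:.0f}% used")
    return (0, f"OK - disk {percent_used:.0f}% used")
```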

Environment: JAVA, PUPPET, GITHUB, Kinesis, Lambda, Stash, Apache Maven, Bamboo, Apache Tomcat,
JBoss, Splunk, Shell Script, SOAP, REST API, CHEF, VSTS, Terraform, Ansible, Linux, Nagios, MySQL,
Selenium, Windows, Active Directory, Atlassian JIRA, Cloud Foundry, Python, Perl, PowerShell, AWS, AZURE
(IAAS, PAAS, SAAS), DNS, Docker, Subversion.

Target Corporation – Minneapolis, MN Jan 2015 - Oct 2015


Build Engineer

Responsibilities:
 Designed and documented configuration management for the CI/CD toolchain.
 Orchestrated CI/CD processes in response to Git triggers, human input, dependency chains, and environment setup.
 Used Zabbix to monitor the CI/CD tools.
 Developed and implemented Software Release Management strategies for various applications following the agile process.
 Installed, configured and maintained Red Hat Enterprise Linux (5.x, 6.x) on SPARC, x86 and Blade Center hardware.
 Developed build and deployment scripts using Ant and Maven as build tools in Jenkins to promote builds from one environment to the next.
 Worked with application development and operations teams, using a variety of automated build, test, and deployment tools (Maven, Ant, Nexus, Jenkins, SVN, Selenium, JUnit) to resolve issues during the transition to the new DevOps solution.
 Migrated a SQL Server database to Windows Azure SQL Database and updated connection strings.
 Performed WebLogic Server administration tasks such as installing, configuring, monitoring and performance tuning in a Linux environment.
 Deployed and managed many servers via scripts and Chef, utilizing cloud providers.
 Created scripts for system administration and AWS using languages such as Bash and Python.
 Built and deployed Java/J2EE applications to a web application server in an Agile continuous integration environment and automated the whole process.
 Created and maintained Shell/Perl deployment scripts for WebLogic web application servers.
 Managed builds and deployments using the Maven, Jenkins, and Chef SCM tools.
 Managed Maven project dependencies by creating parent-child relationships between projects.
 Used JIRA for issue tracking and change management.
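A minimal sketch of the kind of Maven build-and-deploy step these bullets describe, as run from a Jenkins job. The project name, target host, and paths are illustrative, not taken from the résumé, and DRY_RUN=1 echoes the commands instead of executing them:

```shell
#!/usr/bin/env bash
# Hypothetical Jenkins build step: package with Maven, then push the
# artifact to a target application server. All names are illustrative.
set -u

APP=myapp                 # hypothetical project name
DRY_RUN=${DRY_RUN:-1}     # 1 = echo commands instead of running them

# Echo the command in dry-run mode, otherwise execute it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Build the WAR, copy it to the environment's app server, restart Tomcat.
deploy() {
  local env=$1
  run mvn -B clean package
  run scp "target/${APP}.war" "deploy@${env}-app01:/opt/tomcat/webapps/"
  run ssh "deploy@${env}-app01" sudo service tomcat restart
}

deploy "${1:-qa}"
```

In a real pipeline the environment name would come from a Jenkins job parameter, so the same script promotes the artifact through QA, staging, and production.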

Environment: Perl scripting, shell scripting, Java, AWS, Azure, Jenkins, Nagios, JIRA, Maven, ETL, Chef,
Vagrant, Linux, SVN, Git, Gradle, Puppet, Tomcat, Scrum, Python, Ant, Nexus.

Knoah Solutions, India July 2014 – Jan 2015


Role: Software Developer
Responsibilities:
 Involved in user interactions, requirement analysis and design for the interfaces.
 Prepared the design document for Document Management Module and User Management Module.
 Created class diagrams and sequence diagrams using MS Visio.
 Built and maintained Visio documentation for clients.
 Followed waterfall methodology for application development.
 Updated Perforce log properties for revisions; set up the Perforce sync servers and changed revision
properties for Perforce sync.
 Promoted changes from trunk revisions to the release branch.
 Worked with routing protocols (BGP4, OSPF, EIGRP, IGRP, RIP, IS-IS, NLSP) and routed protocols (TCP/IP, IPX/SPX).
 Proposed and implemented branching strategy suitable for agile development in Subversion.
 Installed and configured Hudson to automate deployments.
 Integrated Subversion into Hudson to automate the code check-out process.
 Involved in developing custom tag libraries that provide functionality such as check-in, check-out,
export, import, open, delete, and search on JSP pages.
 Involved in developing customized web applications on top of Orion Frameworks using web technologies
such as JSP, Servlets, and JavaScript.
 Wrote an Oracle admin schema in SQL that creates Orion Oracle database instances on Oracle 10g and
Oracle 9i.
 Implemented customized Java beans that create Windows startup services for the Storage Server and
Command Server, update registry entries, execute the Oracle database schema, and install web-based and
console-based applications.
 Involved in migrating code from the CORBA framework to the Java/J2EE framework.
 Responsible for testing use cases by writing unit test cases, with good working knowledge of JUnit.
 Experienced in debugging the application by running the server in debug mode and in using Log4j to
persist data to log files.
 Provided UAT support for the project until it went into production.
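The trunk-to-release promotion flow described above can be sketched with Subversion commands. The repository URL, version number, and revision are illustrative assumptions, and DRY_RUN=1 echoes the svn commands rather than contacting a server:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of cutting a release branch from trunk and
# promoting a trunk fix into it. Repo URL and versions are made up.
set -u

REPO=${REPO:-https://svn.example.com/repos/app}   # illustrative repo URL
DRY_RUN=${DRY_RUN:-1}                             # 1 = echo svn commands

# Echo the command in dry-run mode, otherwise execute it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Cut a release branch from trunk for a given version.
cut_release() {
  run svn copy "$REPO/trunk" "$REPO/branches/release-$1" \
      -m "Cut release branch $1"
}

# Merge a single trunk revision into an existing release branch checkout.
promote_fix() {
  run svn merge -c "$2" "$REPO/trunk" "branches/release-$1"
}

cut_release "${1:-1.4.0}"
```

Server-side `svn copy` makes branch creation a cheap, atomic operation, which is why this style of branching strategy suits the agile workflow mentioned above.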

Environment: Java/J2EE, SQL, Perforce, Hudson, XML, MS Visio, JavaScript, Log4j, CORBA framework,
Windows XP, Linux.

EDUCATIONAL QUALIFICATIONS:
Bachelor’s in Computer Science Engineering, JNTU, 2014.
Master’s in Engineering Management from Lamar University, 2017.
