DevOps Pipeline Approach for Cognizant® Cloud Acceleration Platform Insights

Functional Specifications

 

Objective

This document describes an approach for establishing a DevOps pipeline to propagate changes to the Insights application. The steps suggested below are advisory and may be modified as per the engineering process defined in the customer's release environment. The intent of establishing the pipeline construct is to ensure that changes are propagated in a well-defined fashion, with checks and controls in place, similar to a customer-facing application in production. Based on interest, timelines and investments, one or more of the pipeline stages can be automated as needed.

 

Scope

The DevOps pipeline construct shall be applied only to Insights application components such as agents, jars, wars, JSON files and other configurations. We advise you to separate the above-mentioned application components from the 3rd-party open source components that Insights uses, such as RabbitMQ, Neo4j, Tomcat, Python, etc. These middleware components can be installed and managed by conventional tools like Chef or Puppet, or managed at the cloud/virtualization layer with tools like AWS CloudFormation templates, Azure ARM, Terraform, etc. These shall be kept outside the DevOps pipeline for VM-based Insights deployments. For container-based Insights deployments, you may choose to build the middleware and app components together inside your environment and bring them into your DevOps pipeline for seamless releases.

Insights app components that are likely to change inside the customer ecosystem are listed below.

App Component           | Technology | Change Possibility
------------------------|------------|-------------------
Agents                  | Python     | Likely
App/Agents Configs      | JSON       | Very Likely
App Engine              | Java/J2EE  | Never
App Platform Services   | Java/J2EE  | Never
Grafana + Neo4j Plugin  | React      | Never
UI                      | Angular    | Less Likely
Grafana Dashboard       | JSON       | Very Likely

 

Note - The term "production Insights deployment" used in this document does not necessarily mean an installation on the customer's B2B/B2C production environment. It can also include a corporate network used for hosting internal applications. The intent is to earmark an Insights deployment that is controlled and separate from the development environment.

 

DevOps Approach

 

1. Lifecycle Management

In order to track, streamline and manage the list of stories, issues and enhancements, consider employing ALM tools like Jira, Rally, Azure ALM, etc. Based on the SDLC methodology prescribed by the client, you may choose to configure Scrum, iteration-based or Kanban-style delivery inside the ALM tool to track and deliver Insights stories.

 

2. Source Code Repo

We strongly advise creating a dedicated repository for Insights to store, version and control the code and configuration changes that are applied to the Insights application inside the customer environment. You may choose to follow any standard branching and merging strategy that suits your style of Insights development. All the Insights application components that you intend to change specifically for the customer, such as agent code, configs, dashboard JSONs, etc., need to be properly committed to the repository, such as Git, TFS, BitBucket, etc., as applicable. An illustrative repository layout is sketched below.
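
As an illustration only, a customer-specific Insights repository could be organized along the following lines; the folder names are hypothetical and should be adapted to your own conventions:

    insights-customer-repo/
        agents/            <- customer-specific Python agent changes
        configs/           <- agent and application config JSONs
        dashboards/        <- Grafana dashboard JSONs
        platform/          <- only if engine/platform services/UI are customized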

 

3. Build

In most Insights deployments, there may not be a need to build Insights as an application, since most of the changes are either on the agent side (Python) or on the dashboard side (JSONs). The build stage applies only to those deployments that have made changes to Java/J2EE components such as the engine, platform services or the Angular Insights UI app. For deployments that require a build, you may clone the respective code base from our public GitHub repo into your customer environment, configure Maven dependencies and build the application. You are advised to add further steps to the build, such as static code analysis, JUnit tests and coverage, security scans, etc., as applicable or as prescribed by your customer's development processes. You can orchestrate the process with the CI setup practiced by your customer, leveraging Jenkins, Bamboo, Azure Pipelines, etc. Please refer to our Confluence page for details on how to build the Insights application inside the customer environment. For Insights deployments that don't need a build, you can skip the build stage. A minimal sketch of such a build step follows.
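
As a minimal sketch, the build stage could be wrapped in a small Python script like the one below and invoked from your CI tool; the repository URL, workspace folder and Maven goals are assumptions and should be replaced with the values used in your environment:

    # build_insights.py - illustrative sketch of a CI build step for customized
    # Insights Java/J2EE components; the repo URL and goals are assumptions.
    import subprocess

    REPO_URL = "https://github.com/CognizantOneDevOps/Insights.git"  # verify the exact URL

    def run(cmd, cwd=None):
        """Run a command and fail the pipeline if it returns non-zero."""
        print(">>> " + " ".join(cmd))
        subprocess.run(cmd, cwd=cwd, check=True)

    def main():
        # 1. Fetch the Insights source into the CI workspace
        run(["git", "clone", REPO_URL, "insights-src"])
        # 2. Compile, run unit tests and package; add sonar/security scan goals as prescribed
        run(["mvn", "clean", "install"], cwd="insights-src")

    if __name__ == "__main__":
        main()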

 

4. Artifact Management

All changes to the Insights application, whether built binaries, agent-side Python code or config JSON changes, should be archived with an appropriate version in an artifact repository such as Nexus or Artifactory. This ensures proper traceability between the versions deployed in production and the corresponding code changes. It will also help you perform rollbacks in production. While Maven-built components can be archived as is, we suggest zipping Python code changes and configurations for convenience as you archive them in the repository, for example as sketched below.
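
For example, agent or config changes could be zipped and pushed to a versioned path with a small script like the sketch below; it assumes a Nexus "raw" (or Artifactory generic) repository at a hypothetical URL, credentials in environment variables, and the requests package being available:

    # archive_agent_changes.py - illustrative sketch; URL, repo name, version and paths are assumptions.
    import os
    import shutil
    import requests

    VERSION = "1.4.2"                                                # hypothetical release version
    NEXUS_URL = "https://nexus.example.com/repository/insights-raw"  # assumed raw/generic repo

    def main():
        # Zip the changed agent code and configs for convenient archival
        archive = shutil.make_archive("insights-agents-" + VERSION, "zip", root_dir="agents")

        # Upload the zip to a versioned path so production deployments stay traceable
        with open(archive, "rb") as f:
            resp = requests.put(
                NEXUS_URL + "/insights/" + VERSION + "/" + os.path.basename(archive),
                data=f,
                auth=(os.environ["NEXUS_USER"], os.environ["NEXUS_PASSWORD"]),
            )
        resp.raise_for_status()

    if __name__ == "__main__":
        main()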

 

5. Deployment

Deployments of Insights can be either manual or automated; you can choose between them based on your policies, investments and priorities. Manual deployment means an administrator manually logs into the servers and installs or updates one or more Insights application components. The administrator may also view logs and make configuration changes to the JSON files as applicable.

For customers who intend to perform automated deployments, you may choose to leverage any deployment tool used by the customer, such as UCD, XLDeploy, Chef, Puppet, etc. Insights core app components such as the engine, platform services or the UI that have undergone customer-specific changes need to be built and then deployed appropriately onto the required servers using deployment automation tools. You may follow the detailed manual installation steps documented before automating them. Other Insights app components such as agents, configs and dashboard JSONs shall be deployed as is into the required folders, as illustrated in the sketch below.
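
As a minimal sketch of such automation, the script below unpacks a versioned agent package and its config JSONs into assumed target folders and restarts the agent service; all paths, the package name and the service name are hypothetical and should be adapted to your installation layout and deployment tool:

    # deploy_agents.py - illustrative sketch; target paths and service name are assumptions.
    import shutil
    import subprocess
    import zipfile

    AGENT_ZIP = "insights-agents-1.4.2.zip"       # fetched from the artifact repository
    AGENT_HOME = "/opt/insights/agents"           # assumed agent installation folder
    CONFIG_DIR = "/opt/insights/agents/config"    # assumed config folder

    def main():
        # 1. Unpack the versioned agent package into the agent folder
        with zipfile.ZipFile(AGENT_ZIP) as z:
            z.extractall(AGENT_HOME)

        # 2. Copy updated config JSONs into place (dirs_exist_ok needs Python 3.8+)
        shutil.copytree("configs", CONFIG_DIR, dirs_exist_ok=True)

        # 3. Restart the agent service so the changes take effect
        subprocess.run(["systemctl", "restart", "insights-agents"], check=True)

    if __name__ == "__main__":
        main()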

For Docker-based deployments, you may choose to build the changes into the image directly and perform a docker pull onto the target environment.
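
On the target environment, the pull-and-restart step could look like the short sketch below; the image name, registry, container name and port mapping are assumptions:

    # pull_and_run_insights.py - illustrative sketch; image tag and ports are assumptions.
    import subprocess

    IMAGE = "registry.example.com/insights/insights-app:1.4.2"   # assumed image tag

    def main():
        subprocess.run(["docker", "pull", IMAGE], check=True)
        # Remove any previous container; ignore the error if none exists
        subprocess.run(["docker", "rm", "-f", "insights-app"], check=False)
        subprocess.run(["docker", "run", "-d", "--name", "insights-app",
                        "-p", "8080:8080", IMAGE], check=True)

    if __name__ == "__main__":
        main()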

Deployment is an extremely important and sophisticated stage in the pipeline, as with any other application. Ensure that you consider all technical aspects of the Insights application as well as your customer's production environment checklist before you automate. For customers that prohibit manual access to production servers, we suggest you perform a complete analysis of troubleshooting, debugging, access to logs, performance tuning, etc.

 

6. Validation

Ensure that you earmark a sanity suite for regression when you complete your deployments. In most cases these can be manual; however, you are free to automate them using regression tools like Selenium, as in the example below.
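
For illustration, a sanity check could be as small as the Selenium (Python) snippet below; the URL and the expected page title are assumptions and should be replaced with your deployment's values:

    # sanity_check.py - illustrative sketch of a post-deployment sanity test.
    from selenium import webdriver

    INSIGHTS_URL = "https://insights.example.com/app"   # assumed Insights UI URL

    def main():
        driver = webdriver.Chrome()                     # or Firefox/remote grid, as available
        try:
            driver.get(INSIGHTS_URL)
            # Fail fast if the landing page did not come up as expected
            assert "Insights" in driver.title, "Unexpected title: " + driver.title
            print("Sanity check passed")
        finally:
            driver.quit()

    if __name__ == "__main__":
        main()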

 

7. Environment

We advise you to have a minimum of two environments for deploying Insights. The development environment is where changes are made first and validations are run, while the production environment runs the application for its intended end users. Ensure that both environments are similar in tech stack and configuration, ideally as a digital twin. This helps in identifying issues, performance tuning and faster resolution.

 

8. Change Management

We strongly suggest managing and tracking production deployments of Insights using your existing ITSM ticketing process. It will help maintain strict control over the changes performed on production Insights deployments. It will also improve traceability and help with rollbacks for restoring versions. Ideally, every change to production should be tracked by an ITSM ticket for traceability.

 

9. Performance Monitoring

Optionally, you may leverage your application monitoring tools, log sinks and usability tracking tools as you would for any business application. These can help you proactively monitor Insights and isolate performance-related information as required for fine tuning.

 

10. Backup and Restore

It is extremely important to subscribe the Insights servers to your enterprise backup and restore services. In most cases, your existing production environments will already be governed by these policies. From an Insights perspective, it is vital to include the data stored in Neo4j and PostgreSQL as part of the backup strategy. Optionally, you may choose to take periodic snapshots of the complete server or container as needed. A sketch of a simple backup job is shown below.
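
As a sketch, a periodic backup job could invoke the standard Neo4j and PostgreSQL dump utilities as shown below; the database names, output location and command syntax are assumptions (for example, neo4j-admin dump is the offline dump command in Neo4j 4.x), so align them with the versions deployed in your environment:

    # backup_insights_data.py - illustrative sketch; paths, database names and
    # credential handling are assumptions for your environment.
    import datetime
    import subprocess

    BACKUP_DIR = "/backups/insights"    # assumed mount governed by enterprise backup policies

    def main():
        stamp = datetime.date.today().isoformat()
        # Offline dump of the Neo4j graph database (stop Neo4j before an offline dump)
        subprocess.run(
            ["neo4j-admin", "dump", "--database=neo4j",
             "--to=" + BACKUP_DIR + "/neo4j-" + stamp + ".dump"],
            check=True,
        )
        # Logical dump of the PostgreSQL database used by Insights (assumed name 'insights')
        with open(BACKUP_DIR + "/postgres-insights-" + stamp + ".sql", "w") as out:
            subprocess.run(["pg_dump", "-U", "postgres", "insights"],
                           stdout=out, check=True)

    if __name__ == "__main__":
        main()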

 

11. Other Requirements

Do consider other requirements defined as part of your customer's policies that govern deployment of applications in the production environment, such as authentication (LDAP/AD/SAML), logging requirements, load balancing, software version dependencies (OS, patches, middleware, Java, Python, etc.), common shared services such as RabbitMQ and PostgreSQL, network and port dependencies, access privileges on the servers, SSL/TLS, etc. Each of these may have an impact on the functioning of the Insights application and its middleware components. You may have to perform a complete impact analysis before embarking on the DevOps approach.

 

Conclusion

The DevOps process defined in this document is intended to act as a guardrail for teams that intend to deploy and scale Insights across one or more lines of business. You may choose to implement one or more of the recommendations prescribed above based on your customer landscape and requirements. Do consider your customer's requirements around change management, delivery process, security and governance before implementing the pipeline. Also take into account the effort, investments and priorities involved before setting up a DevOps pipeline and automation.

©2021 Cognizant, all rights reserved. US Patent 10,410,152