Frequently Asked Questions
This section covers frequently asked questions about the various functionalities and troubleshooting steps of Insights.
Python agents residing within Insights collect data at regular intervals from the various DevOps tools used in the project-level implementation and publish it to the message queue. The Java engine consumes the messages and pushes the data into the graph database, which is then presented in various graphical representations so that end users can easily visualize application productivity from different aspects.
- Insights needs 2 separate servers with sudo access for the setup
- Open the necessary ports
- DevOps tools that are planned to be added to Insights need to expose a REST API. Please refer to the Agents section for the supported DevOps tools
- A service ID for each tool, with read-only access to all projects, repositories, and build jobs as appropriate for the tool from which data is to be collected
Basic knowledge/understanding of the following will be useful before implementation of Insights
- REST APIs of the DevOps tools that are used
- Basic Cypher queries – Neo4j (in order to develop additional dashboards)
- Basic / Functional knowledge of Python code (to debug agents in case of any issues)
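For example, a dashboard panel is typically backed by a simple Cypher query along these lines (an illustrative sketch; the SCM label is an assumption and depends on which agents are configured):
MATCH (n:SCM) RETURN count(n) AS totalCommits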
Please find below the list of ports that need to be opened for the seamless functioning of Insights:
- Grafana - 3000
- Neo4j - 7474
- Tomcat - 8080
- ElasticSearch - 9200
- MQ port - 15672
This is a critical file that holds the configurations to integrate with Grafana, Neo4j, ElasticSearch, LDAP, PostgreSQL, and Insights Spark (if configured), as well as the configuration for native users.
This file is used for incremental data fetch from the DevOps tools. The agent updates this file with the latest fetch details at the end of every execution triggered by the scheduler.
The agent forwards the data to the message queue in the appropriate agent queue. The platform engine consumes the data from the queue and creates nodes in Neo4j under the respective labels.
The Grafana endpoint will be either the Elastic IP or the DNS name of the server on which Grafana is hosted.
Set enableNativeUsers : true in server-config.json
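In server-config.json this is a single boolean flag; the fragment would look like the following (a sketch, as the exact placement of the key within the file may vary by release):
"enableNativeUsers": true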
It is the primary DB for Grafana. It also stores information about the different agent queues.
- User authentication information is stored here and is consumed by PlatformService during authentication
- The engine jar uses the queue-related information to create queues in RabbitMQ and the respective labels in Neo4j
The GraphAware plugin residing in Neo4j holds the configurations that take care of syncing data between ElasticSearch and Neo4j.
Log in to Grafana and navigate to Server Admin – Add New User. Share the invite by email, or once the invite URL is created, create dummy credentials and share them with the respective users in a private email.
The implementer needs to identify the integrations defined in the DevOps pipeline, e.g. what data flows into Git to identify the specific code changes for a JIRA issue.
Try to get the values through print statements as applicable, e.g. print(variable_name)
Stop Tomcat. Navigate to /tomcat/webapps. Delete PlatformService.war and the PlatformService and app folders. Copy the latest PlatformService.war and unzip app.zip in the same location. Restart Tomcat (see the command sketch below).
After the Tomcat restart, navigate to the /tomcat/webapps location. The PlatformService folder should have been created successfully. Access the Insights UI and authentication should succeed.
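A rough command-line sketch of the redeploy steps above, assuming a Linux host, Tomcat installed under /tomcat, new artifacts staged under /tmp, and Tomcat managed as a systemd unit (adjust paths and the stop/start commands to your installation):
sudo systemctl stop tomcat
cd /tomcat/webapps
rm -rf PlatformService.war PlatformService app
cp /tmp/PlatformService.war .
unzip /tmp/app.zip
sudo systemctl start tomcat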
Insights UI log will be available in the location $INSIGHTS_HOME/logs/PlatformService/log.out
Tomcat logs can be found in the location /tomcat/logs/
Windows: /Server2/Configs/EngineJar/PlatformEngine.jar
Linux: /opt/insightsengine/PlatformEngine.jar
Windows: /Server2/Configs/EngineJar/log4j.log
Linux: $INSIGHTS_HOME/logs/PlatformEngine/log4j.log
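For example, to follow the engine log on Linux:
tail -f $INSIGHTS_HOME/logs/PlatformEngine/log4j.log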
LDAP configuration to be done in 2 files
- GRAFANA_HOME/conf/ldap.toml
- Key configs to be made (see the sample after this list):
bind_dn
bind_password
port = 389 (default)
host
search_base_dns
group_dn
Uncomment the [servers.attributes] section
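An illustrative ldap.toml fragment with these keys (all values are placeholders for your directory; the layout follows Grafana's standard ldap.toml structure):
[[servers]]
host = "ldap.example.com"
port = 389
bind_dn = "cn=admin,dc=example,dc=com"
bind_password = "********"
search_base_dns = ["dc=example,dc=com"]
search_filter = "(sAMAccountName=%s)"

[servers.attributes]
name = "givenName"
surname = "sn"
username = "sAMAccountName"
member_of = "memberOf"
email = "mail"

[[servers.group_mappings]]
group_dn = "*"
org_role = "Viewer"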
- $INSIGHTS_HOME/.Insights/server-config.json
- Key configs to be made to the ldapConfiguration section (a sample fragment is shown after this list):
ldapUrl
bindDN
bindPassword
searchBaseDN
searchFilter
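A minimal sketch of the ldapConfiguration section using the keys above (all values are placeholders):
"ldapConfiguration": {
    "ldapUrl": "ldap://ldap.example.com:389",
    "bindDN": "cn=admin,dc=example,dc=com",
    "bindPassword": "********",
    "searchBaseDN": "dc=example,dc=com",
    "searchFilter": "(sAMAccountName={0})"
}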
Stop the Agents and Engine. Execute the following command to delete the queue.
curl -XDELETE -u iSight:iSight http://#URL#:15672/mq/api/queues/%2f/#Queue_name#
Start both components after deletion
Stop the Agents and Engine. Execute the following command to purge the messages in the queue.
curl -XDELETE -u iSight:iSight http://#URL#:15672/mq/api/queues/%2f/#Queue_name#/contents
Start both components after the purge
- Stop Grafana
- Get the latest Grafana version from the platform docroot.
- Follow the installation instructions
- Copy the conf files: ldap.toml, defaults.ini
- Copy the associated plugins and data sources to the respective paths in the data folder
- Make a copy of the db folder (graph.db) from the neo4j_home/data/databases/ path.
- Get the required Neo4j version and replace the graph.db folder in data/databases with the backed-up folder.
- Then start Neo4j as a service
Take a backup of the current jar. Copy the jar from the required release (preferably the latest) and place it in the appropriate location based on the operating system, then start the jar (see the sketch after the paths below).
Windows: /Server2/Configs/EngineJar/PlatformEngine.jar
Linux: /opt/insightsengine/PlatformEngine.jar
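A rough command sketch for Linux, assuming the location above, the new jar staged under /tmp, and the engine started directly with java (use whatever start mechanism, e.g. a service script, is configured on your host):
cp /opt/insightsengine/PlatformEngine.jar /opt/insightsengine/PlatformEngine.jar.bak
cp /tmp/PlatformEngine.jar /opt/insightsengine/PlatformEngine.jar
nohup java -jar /opt/insightsengine/PlatformEngine.jar &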
- Take a backup of the current PlatformService.war and app folder.
- Stop Tomcat.
- Get the relevant UI zip file from https://github.com/CognizantOneDevOps/Insights/releases, replace the app folder under Tomcat with the new one, and start Tomcat
- Stop the Agent from the UI
- Navigate to the server where the Agent code is hosted
- Stop the service from the command line
- Make the required changes in config.json
- Delete tracking.json
- Clear the data nodes and associated relationships from Neo4j to avoid duplicates (see the Cypher sketch after this list)
- Start the service from the command line
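A minimal Cypher sketch for the clean-up step above (illustrative; JENKINS is a placeholder label and should be replaced with the label of the tool being reconfigured):
MATCH (n:JENKINS) DETACH DELETE n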
- Check if indexes are applied to the key attributes in each of the labels (see the example after this list).
- Use optimized queries.
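For example, an index on a key attribute can be created with a statement like the following (illustrative; the JIRA label and the key property are assumptions, and this is the Neo4j 3.x syntax):
CREATE INDEX ON :JIRA(key)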
Stop all the following services
- RabbitMQ
- All the Agents
1) Open an issue in the Jira GUI that is part of a sprint: http://<JIRAHOST.com>/rest/api/2/issue/<issueID>
2)
There are a few possible configurations that need to be checked
- Telnet the public IP on the Grafana port (3000), e.g. telnet 123.122.12.12 3000. If it connects, it is good to proceed; if not, check with the server admin about opening the port, or a network restart should help.
- Check whether the Grafana endpoint in server-config.json uses the public IP; if not, go ahead and change it.
- Check the apache tomcat folder/webapps/app/config/uiconfig.json file: it contains references to two URLs. If they point to localhost, change them to the public IP.
- Check the artifact name inside apache tomcat folder/webapps/: the PlatformService.war file should not have the version number appended to its name.
Don't give the JIRA password directly in the configuration.
Generate a token at https://id.atlassian.com/manage/api-tokens# and use it in place of the password.
Check the RabbitMQ logs; if the reboot is the issue, the logs will look like the below:
"BOOT FAILED =========== Error description: {could_not_start,rabbit, {bad_return,"
In the above case, the reason for the failure is that when the server is rebooted it can end up with a bad queue db (for whatever reason, e.g. a sudden power failure or some other process touching the files) which RabbitMQ can't parse, and so it crashes. Once you clear the queues of messages, it works fine.
Go to the mnesia directories and delete the queues and msg_store_transient directories. Make sure to take a backup first.
mnesia folder location: /var/lib/rabbitmq/
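A rough command sketch for the cleanup described above, assuming the default mnesia location and a node directory named rabbit@<hostname> (take the backup before deleting anything):
sudo systemctl stop rabbitmq-server
sudo cp -r /var/lib/rabbitmq/mnesia /var/lib/rabbitmq/mnesia_backup
cd /var/lib/rabbitmq/mnesia/rabbit@<hostname>
sudo rm -rf queues msg_store_transient
sudo systemctl start rabbitmq-server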
Since the queue db has been cleared, create the respective queues for the tools (using the agent ID in the config.json of the respective agents) before starting the agents.
©2021 Cognizant, all rights reserved. US Patent 10,410,152