OnBots Installation Guide
Want to know about the OneDevOps OnBot components?
Learn about the components needed to set up a local development environment here.
OnBots Components and Architecture
Basic Components in the OnBot Framework
Bot Components
Kubernetes Environment - Creates the base environment required to execute and maintain Bots. Kubernetes spawns a Docker image based on Ubuntu 16.04 with additional software such as NodeJS, CoffeeScript, and npm.
Elasticsearch - Holds logs and metrics-related information.
MongoDB - Used to maintain the approval flow. Generates and maintains ticket-related information.
Middleware Application - Acts as an interceptor to read admin responses and manages communication between the various Bots.
To learn more about the OnBot Framework, see the Bot Sequence Diagram.
This section will help you set up the environment for OnBot. Browse the various categories provided below.
Hardware Configuration
Kubernetes Master & Node configuration
No. of instances: 4 (1 - Master & Bot Framework; 3 - Nodes)
AWS instance type: m3.xlarge
OS: Ubuntu 16.04 Xenial (64-bit)
Storage: 200 GB
Mem (GiB): 15
vCPU: 4
Software configuration
Kubernetes version: 1.7.3
Docker version: 1.12.6
npm version: 3.5.2
Node version: 4.2.6
Elasticsearch version: 5.4.1
Network Ports
Kubernetes Master: 6443
Kubernetes Slave: 6443
Bots Web App: http/https
Middleware App (for Approval Flow): http/https, 3000
Elasticsearch: 9200
MongoDB: 27017
All of these ports must be open between the instances, on both the private and public IPs.
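If the instances run in AWS, these ports can be opened through security group rules. The commands below are a minimal sketch using the AWS CLI; the security group ID and CIDR are placeholders and must be replaced with your own values (repeat for each required port):
# Placeholder security group ID and CIDR; adjust to your own VPC.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 6443 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3000 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 9200 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 27017 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 10.0.0.0/16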
Installation of Kubernetes Cluster
Connect to AWS EC2 instance from Windows using PuTTY
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html
Execute the following commands in the Master and Slave machines
1. Switch to the root user (sudo su or sudo -s)
2. apt-get update && apt-get install -y apt-transport-https
3. curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
4. cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
5. apt-get update
6. apt-get install docker.io
7. apt-get install -y kubelet kubeadm kubectl kubernetes-cni
8. Run Docker as a daemon in both Master and Slave nodes as given below:
Open /lib/systemd/system/docker.service, add the below line, and save the file:
ExecStart=/usr/bin/dockerd -H fd:// -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 $DOCKER_OPTS
Execute the following commands to reload and restart Docker:
systemctl daemon-reload
systemctl restart docker
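To confirm that the Docker daemon is now listening on the TCP socket (note that this configuration exposes the Docker API without TLS), a quick check such as the following can be used; it should return the Docker version information as JSON:
curl http://localhost:2376/version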
Running apt-get install for the Kubernetes utilities (kubeadm, kubectl, kubelet, kubernetes-cni) without specifying a version installs the latest version. To install a specific version, refer to the following example commands:
apt-get install -y kubeadm=1.6.4-00 kubectl=1.6.4-00 kubelet=1.6.4-00 kubernetes-cni
apt-get install -y kubeadm=1.7.3-01 kubectl=1.7.3-01 kubelet=1.7.3-01 kubernetes-cni
Check available Versions:
curl -sL https://apt.kubernetes.io/dists/kubernetes-xenial/main/binary-amd64/Packages
Execute the following commands in the Master machine
1. # kubeadm init
If the Kubernetes cluster configuration is located in $HOME/.kube/config, then to start using your cluster, run the commands below:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
Add "export KUBECONFIG=$HOME/.kube/config" to ~/.bashrc file in order to export the conf file. If this is
not added, the export will be applicable only to the current putty session. For a new session, the Kubernetes
API will be unreachable.
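For example (a minimal sketch; adjust the path if your kubeconfig lives elsewhere):
echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
source ~/.bashrc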
If the Kubernetes configuration is located in $HOME/admin.conf, then to start using your cluster, run the commands below as the root user or prefix them with sudo:
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
Add "export KUBECONFIG=$HOME/admin.conf" to ~/.bashrc file in order to export the conf file. If this is not added, the export will be applicable only to the current putty session. For a new session, the Kubernetes API will be unreachable.
2. In order to communicate on the network, run the following command for the Master:
- kubectl apply -f https://git.io/weave-kube-1.6
3. To create the Kubernetes dashboard, run the following command:
- kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
The dashboard will run as a Kube service either on the Master or on any one of the Slaves. Run "kubectl get svc --all-namespaces" to identify the PORT number of the dashboard. Access the dashboard through https://IP:PORT.
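As an illustrative shortcut, assuming the dashboard service is named kubernetes-dashboard in the kube-system namespace (the default for the YAML above) and is exposed as a NodePort, the port can be read directly with:
kubectl -n kube-system get svc kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'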
4. Create a cluster role, bind it to all service accounts, and allow the anonymous user to access the Kube API.
If the below kubectl commands are not executed, you will get an error: 'User "system:serviceaccount:default:default" cannot list pods in the namespace "default"'
- kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
- kubectl create rolebinding bob-admin-binding --clusterrole=admin --user=system:anonymous --namespace=default
(OR) Use the below authentication token approach (preferred):
curl -k `(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")`/api/v1/namespaces/default/pods/web/log --header "Authorization: Bearer `(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')`"
Execute the following commands in the Slave machines to connect to the
Master machine
1. After kubeadm init, you will obtain a join token as shown below. Execute it in the Slave machines:
- kubeadm join --token <token id> <Primary IP>:6443
2. Post execution, go to the Master machine, run "kubectl get nodes", and verify that the nodes have joined the cluster.
3. Verify that all pods are up and running.
# kubectl get pods --all-namespaces
You will get an output listing all the system pods; verify that they are in the Running state.

MongoDB Setup
Run the following commands:
1) sudo apt-get update
2) sudo apt-get install -y mongodb
3) Modify the following lines in the file /etc/mongodb.conf:
# Where to store the data.
dbpath=/var/lib/mongodb
# Where to log
logpath=/var/log/mongodb/mongodb.log
logappend=true
bind_ip = <public_IP_of_your_machine>
port = 27017
# Enable journaling, http://www.mongodb.org/display/DOCS/Journaling
journal=true
Here, set bind_ip to the public IP of your machine. Specify the dbpath and logpath according to your choice.
4) Make sure the data directory is owned by the mongodb user. Usually the owner is mongodb by default; you may check it and skip this step if not required:
chown -R mongodb:mongodb /var/lib/mongodb
5) Start mongodb as a service:
sudo service mongodb start
You can check the status after starting mongodb by this command: sudo service mongodb status
6) Export your existing collections from your pre-existing MongoDB service (if any):
sudo mongoexport --db <db_name> -c <collection_name> --out <filename>.json
[e.g. sudo mongoexport --db botstore -c BotCategory --out BotCat_bkp.json]
This will export all your collection data into a json file inside the present working directory. Now copy that json file into the machine where you have newly installed mongodb and execute the following command:
sudo mongoimport --db <db_name> --collection <collection_name> < /path/to/exported_json
[eg - sudo mongoimport --db botstore --collection BotCategory < /home/487398/BotCat_bkp.json]
This will import all your data into the new MongoDB installation.
7) Restart the mongodb service:
sudo service mongodb restart

ElasticSearch Setup
Run the below commands to install Java 8 in your Ubuntu machine:
1) sudo apt-get update
2) sudo apt-get install openjdk-8-jre
The binary packages of Elasticsearch have only one dependency: Java. The minimum supported version is Java 8
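After the installation you can confirm the Java version; openjdk-8-jre should report a 1.8.x runtime:
java -version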
Now run the below commands to download and install elasticsearch (steps shown for elasticsearch version-5.4.1):
1) curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.1.deb
2) sudo dpkg -i elasticsearch-5.4.1.deb
Go inside /etc/elasticsearch folder and open elasticsearch.yml file. Next, edit the following lines in it:
path.data: /home/487398/elasticsearch-5.4.1/data
path.logs: /home/487398/elasticsearch-5.4.1/logs
network.host: 0.0.0.0
http.port: 9200
path.data specifies the storage location for Elasticsearch indices.
path.logs specifies the storage location of Elasticsearch logs.
network.host should be set to 0.0.0.0 to allow remote connections to the installed Elasticsearch.
http.port specifies the port on which Elasticsearch runs.
Start Elasticsearch by running the following command:
1) sudo /etc/init.d/elasticsearch start
Verify whether elasticsearch is running by the following command:
curl http://localhost:9200
You should get the following JSON body as response:
{
  "name" : "_mykWLH",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "vwEaY6m1TJ6HpKuduu9MGQ",
  "version" : {
    "number" : "5.4.1",
    "build_hash" : "2cfe0df",
    "build_date" : "2017-05-29T16:05:51.443Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.1"
  },
  "tagline" : "You Know, for Search"
}

Bots Framework Setup
Install npm and nodejs by executing the following commands:
1. apt-get install npm
2. apt-get install nodejs
Clone the web application from GitHub
1. git clone the chatops repository (git clone -b <branchname> <cloning link>)
2. Copy the $HOME/admin.conf file generated during the Kubernetes installation into the Bots folder of the webapp (see the example after this list)
3. Update Configurations in Bots\app\config\config.json
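A minimal sketch of steps 1 and 2 (the repository link, branch name, and webapp directory are placeholders; substitute your own values):
git clone -b <branchname> <cloning link>
cp $HOME/admin.conf <webapp_root>/Bots/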
Configuration settings
Map your configuration in the config.json file from the app/config directory:
ElasticSearch : elasticsearch url
Kubernetes_End_Ppoint : Kubernetes url
MongoDB : mongodb Host
MONITOR_INTERVAL : Time Interval for monitoring bot
MONITOR_RETENSION : mapped to 1 (stores a jsonobj in elasticsearch which has all hubot metrics of a current second)
MONGO_DB_URL : mongodb database name for approval process of bot actions
MONGO_COLL : mongodb collection name for approval process of bot actions
MONGO_COUNTER : mongodb collection name for storing the number of next ticket to be generated
MONGO_TICKETIDGEN : stores the Id of the collection referred by MONGO_COUNTER
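The snippet below is an illustrative sketch of Bots/app/config/config.json built from the settings listed above. All values are placeholders, and the real file in the repository may use a different structure or nesting, so treat it only as an orientation aid:
{
  "ElasticSearch": "http://<elasticsearch_host>:9200",
  "Kubernetes_End_Ppoint": "https://<kubernetes_master_ip>:6443",
  "MongoDB": "<mongodb_host>",
  "MONITOR_INTERVAL": "<monitoring_interval>",
  "MONITOR_RETENSION": 1,
  "MONGO_DB_URL": "<approval_db_name>",
  "MONGO_COLL": "<approval_collection_name>",
  "MONGO_COUNTER": "<ticket_counter_collection>",
  "MONGO_TICKETIDGEN": "<ticket_counter_id>"
}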
Run Application
1. Install npm modules by running the following from Bots folder
npm install
2. Run application
nodejs app
This will run the application on the specified port.
Middleware Application SetUp
1. Clone slack-app from repository
2. Run npm install inside slackapp folder from the cloned directory
3. Run the application with the command nodejs app. (The application will run on port 3000)
The application has to be reachable from the internet; Slack will post data to this application for the approval flow.
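As a quick, illustrative reachability check (it only confirms that port 3000 answers; the actual response depends on the app's routes), you can run from an external machine:
curl -I http://<public_ip_of_middleware_host>:3000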