OnBots Installation Guide
Want to know about the OneDevOps OnBot components?
Learn here about the components needed to set up a local development environment.
OnBots Components and Architecture
Basic Components in the OnBot Framework
Bot Components
Kubernetes Environment - Creates the base environment required to execute and maintain Bots. Kubernetes spawns a Docker image based on Ubuntu 16.04 with additional software such as NodeJS, CoffeeScript, npm, etc.
Elastic Search - Holds logs and metrics-related information.
MongoDB - Used to maintain the approval flow. Generates and maintains ticket-related information.
MiddleWare Application - Acts as an interceptor to read admin responses and manages communication between the various Bots.
To learn more about the OnBot Framework, see the Bot Sequence Diagram.
This section will help you set up the environment for OnBot. Browse the categories provided below.
Hardware Configuration
Kubernetes Master & Node configuration
No. of instances: 4 (1 Master & Bot Framework; 3 Nodes)
AWS instance type: m3.xlarge
OS: Ubuntu 16.04 Xenial (64-bit)
Storage: 200 GB
Memory: 15 GiB
vCPU: 4
Software configuration
Kubernetes version: 1.7.3
Docker version: 1.12.6
npm version: 3.5.2
Node version: 4.2.6
Elasticsearch version: 5.4.1
Network Ports
Kubernetes Master: 6443
Kubernetes Slave: 6443
Bots Web App: http/https
Middleware App (for approval flow): http/https, 3000
Elastic Search: 9200
MongoDB: 27017
All of these ports must be open between the instances, on both the private and the public IPs.
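For example, on AWS the ports can be opened in the instances' security group. The sketch below uses the AWS CLI to open the Kubernetes API port; the security group ID and CIDR are placeholders, and using the CLI rather than the EC2 console is simply one option:
# Hypothetical example: open port 6443 within the cluster's network range
aws ec2 authorize-security-group-ingress --group-id <security_group_id> --protocol tcp --port 6443 --cidr <allowed_cidr>
# Repeat for 3000, 9200, 27017 and the http/https ports listed above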
Installation of Kubernetes Cluster
Connect to AWS EC2 instance from Windows using PuTTY
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html
Step 1: Installing kubelet and kubeadm on your hosts (to be followed on both Master and Slave machines)
Execute the following commands on the Master and Slave machines:
1. Switch to the root user(sudo su or sudo -s)
2. apt-get update && apt-get install -y apt-transport-https
3. curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
4. cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
5. apt-get update
6. apt-get install docker.io
7. apt-get install -y kubelet kubeadm kubectl kubernetes-cni
8. Run Docker as a daemon on both Master and Slave nodes as given below.
Open /lib/systemd/system/docker.service, update the ExecStart line as shown, and save the file:
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 $DOCKER_OPTS
Execute the following commands to reload and restart Docker:
systemctl daemon-reload
systemctl restart docker
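After restarting, you can optionally verify that Docker is running and listening on the TCP socket configured above, and that the installed versions match the software configuration listed earlier. These checks are a suggestion, not part of the original procedure:
# Confirm the Docker service restarted cleanly
systemctl status docker
# Confirm the versions match the software configuration section
docker --version
kubeadm version
kubectl version --client
# The Docker Engine API should now also answer on the TCP port enabled above
curl http://localhost:2376/version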
Step 2: Initializing the master (below commands applicable only on the master)
To initialize the master, pick one of the machines you previously installed kubeadm on, and run:
1. # kubeadm init
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
In case the Kubernetes configuration is located at $HOME/admin.conf, run the following commands as the root user (or prefix them with sudo):
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
2. Deploy the Kubernetes dashboard by executing the following command:
# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
3. Create a cluster role, bind it to all services, and then allow the anonymous user to access the Kube API (see the sketch below).
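The guide does not specify the exact role or binding names, so the following is only an illustrative sketch. The binding names, the use of the built-in cluster-admin role, and the choice of subjects are assumptions; adapt them to your own security requirements:
# Hypothetical bindings: grant cluster-admin to all service accounts and to anonymous requests
kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
kubectl create clusterrolebinding anonymous-cluster-access --clusterrole=cluster-admin --user=system:anonymous
Note that granting cluster-admin to anonymous users is very permissive; restrict it further in anything beyond a development setup.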
Execute the following command on the Slave machines to connect them to the Master machine:
# kubeadm join --token <token id> <Primary IP>:6443
The above token allows a slave to join the master node. It should be run on all slaves once Step 3 completes.
Master Isolation: By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:
kubectl taint nodes --all node-role.kubernetes.io/master-
This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.
Step 3: Installing a pod network (below commands applicable only on the master)
You must install a pod network add-on so that your pods can communicate with each other. Go to the add-ons page and get Weave Net:
kubectl apply -f https://git.io/weave-kube-1.6
Step 4: Adding Slaves to the Master node (below commands applicable only on the slaves)
Run the token obtained in Step 2 on all slave nodes (it is not needed on the add-on server):
kubeadm join --token <token id> <Primary IP>:6443
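As an optional extra check (not part of the original steps), once the slaves have joined you can confirm that every node is registered and Ready:
# Run on the master; all master and slave nodes should be listed with STATUS Ready
kubectl get nodes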
To verify that the cluster is up, run the following on the master:
# kubectl get pods --all-namespaces
All system pods (including the weave-net pods) should show a Running status.
MongoDB Setup
Run the following commands:
1) sudo apt-get update
2) sudo apt-get install -y mongodb
3) Modify the following lines in the file /etc/mongodb.conf:
# Where to store the data.
dbpath=/var/lib/mongodb
# Where to log
logpath=/var/log/mongodb/mongodb.log
logappend=true
bind_ip = <public_IP_of_your_machine>
port = 27017
# Enable journaling, http://www.mongodb.org/display/DOCS/Journaling
journal=true
Here, set bind_ip to the public IP of your machine. Set dbpath and logpath according to your preference.
4) Ensure the MongoDB data directory is owned by the mongodb user. Usually the user is mongodb by default; you may check this and skip the step if it is not required:
chown -R mongodb:mongodb /var/lib/mongodb
5) Start mongodb as a service:
sudo service mongodb start
You can check the status after starting MongoDB with this command:
sudo service mongodb status
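An additional optional check, not part of the original steps, is to confirm that MongoDB is listening on the configured bind_ip and port:
# Should show mongod bound to <public_IP_of_your_machine>:27017
sudo ss -lntp | grep 27017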
6) Export your existing collections from your pre-existing mongodb service (if any):
sudo mongoexport --db <db_name> -c <collection_name> --out <filename>.json
[e.g. sudo mongoexport --db botstore -c BotCategory --out BotCat_bkp.json]
This will export all your collection data into a json file inside the present working directory. Now copy that json file into the machine where you have newly installed mongodb and execute the following command:
sudo mongoimport --db <db_name> --collection <collection_name> < /path/to/exported_json
[e.g. sudo mongoimport --db botstore --collection BotCategory < /home/487398/BotCat_bkp.json]
This will import all your data into the new MongoDB installation.
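To confirm the import, a quick optional check using the example database and collection names from above (which are illustrative, not required names):
# Should print the number of imported documents
mongo botstore --eval 'db.BotCategory.count()'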
7) Restart the mongodb service:
sudo service mongodb restart
ElasticSearch Setup
Run the below commands to install Java 8 in your Ubuntu machine:
1) sudo apt-get update
2) sudo apt-get install openjdk-8-jre
The binary packages of Elasticsearch have only one dependency: Java. The minimum supported version is Java 8.
Now run the commands below to download and install Elasticsearch (steps shown for Elasticsearch version 5.4.1):
1) curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.1.deb
2) sudo dpkg -i elasticsearch-5.4.1.deb
Go into the /etc/elasticsearch folder and open the elasticsearch.yml file. Next, edit the following lines in it:
path.data: /home/487398/elasticsearch-5.4.1/data
path.logs: /home/487398/elasticsearch-5.4.1/logs
network.host: 0.0.0.0
http.port: 9200
path.data specifies the storage location for Elasticsearch indices.
path.logs specifies the storage location for Elasticsearch logs.
network.host should be set to 0.0.0.0 so that the installed Elasticsearch can be reached remotely.
http.port specifies the port on which Elasticsearch runs.
Start Elasticsearch with the following command:
1) sudo /etc/init.d/elasticsearch start
Verify whether elasticsearch is running by the following command:
curl http://localhost:9200
You should get the following JSON body as response:
{
  "name" : "_mykWLH",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "vwEaY6m1TJ6HpKuduu9MGQ",
  "version" : {
    "number" : "5.4.1",
    "build_hash" : "2cfe0df",
    "build_date" : "2017-05-29T16:05:51.443Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.1"
  },
  "tagline" : "You Know, for Search"
}
Bots Framework Setup
Install npm and nodejs by executing the following commands:
1. apt-get install npm
2. apt-get install nodejs
Clone the web application from GitHub:
1. git clone the chatops repository (git clone -b <branchname> <cloning link>)
2. Copy the $HOME/admin.conf file generated by the Kubernetes installation into the Bots folder of the web app
3. Update the configuration in Bots/app/config/config.json
Configuration settings
Map your configuration in the config.json file in the app/config directory:
ElasticSearch : Elasticsearch URL
Kubernetes_End_Ppoint : Kubernetes URL
MongoDB : MongoDB host
MONITOR_INTERVAL : time interval for monitoring a bot
MONITOR_RETENSION : mapped to 1 (stores a JSON object in Elasticsearch containing all Hubot metrics for the current second)
MONGO_DB_URL : MongoDB database name for the approval process of bot actions
MONGO_COLL : MongoDB collection name for the approval process of bot actions
MONGO_COUNTER : MongoDB collection name for storing the number of the next ticket to be generated
MONGO_TICKETIDGEN : stores the Id of the collection referred to by MONGO_COUNTER
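For orientation only, here is a sketch of what such a config.json might look like. All values below are placeholders or assumptions (the guide does not specify value formats, defaults, or whether additional keys are required), so adapt them to your environment:
{
  "ElasticSearch": "http://<elasticsearch_host>:9200",
  "Kubernetes_End_Ppoint": "https://<kubernetes_master_ip>:6443",
  "MongoDB": "<mongodb_host>",
  "MONITOR_INTERVAL": "<monitoring_interval>",
  "MONITOR_RETENSION": 1,
  "MONGO_DB_URL": "<approval_db_name>",
  "MONGO_COLL": "<approval_collection_name>",
  "MONGO_COUNTER": "<ticket_counter_collection_name>",
  "MONGO_TICKETIDGEN": "<counter_document_id>"
}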
Run Application
1. Install the npm modules by running the following command from the Bots folder:
npm install
2. Run the application:
nodejs app
This will run the application on the configured port.
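If you want the Bots application to keep running after you disconnect from the instance, one simple option (not prescribed by this guide) is to start it in the background and capture its output:
# Hypothetical convenience: run in the background and log output to bots.log
nohup nodejs app > bots.log 2>&1 &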
Middleware Application Setup
1. Clone the slack-app from the repository
2. Run npm install inside the slackapp folder of the cloned directory
3. Run the application with the command nodejs app. (The application will run on port 3000.)
The application has to be reachable from the internet, because Slack will post data to it for the approval flow.
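As an optional final check (not part of the original guide), verify that the middleware is listening locally and reachable from outside. The public DNS name below is a placeholder, and port 3000 must be open in the instance's security group as listed in the Network Ports section:
# Locally on the middleware instance
curl -I http://localhost:3000
# From outside the instance (placeholder hostname)
curl -I http://<middleware_public_dns>:3000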