Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
This article covers using kops to set up a Kubernetes cluster and is a follow-up to kops/aws.md.
Follow the instructions at kops/install.md to install the kops CLI.
All references to example.com, example.in, DOMAINNAME.TLD, and so on need to be replaced with your own domain, subdomain, or TLD, for example example.com or nginx.example.com.
In addition, you need the AWS CLI installed and configured with the access key and secret of a user that has admin access (this makes things easy), or at least the following policies:

- AmazonEC2FullAccess
- AmazonRoute53FullAccess
- AmazonS3FullAccess
- IAMFullAccess
- AmazonVPCFullAccess
```
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
```
Configuring DNS is optional. These steps assume you have access to Route53 and are comfortable making changes to the root domain. If you have a valid domain, the following command should give a relevant response.
```
dig ns subdomain.example.com
```
```
aws s3api create-bucket \
    --bucket prefix-example-com-state-store \
    --region us-east-1
```

Change us-east-1 to whichever region you want to associate the bucket with (note that `--region` takes a region like us-east-1, not an availability zone like us-west-2a). For kops' purposes the S3 bucket region does not matter much.
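As a side note, the kops docs recommend enabling versioning on the state-store bucket so earlier cluster specs can be recovered. A sketch, assuming the bucket naming used above (the aws call is left commented out since it needs live credentials):

```shell
# Derive the state-store bucket name from the domain; dots become dashes,
# matching the prefix-example-com-state-store style above.
DOMAIN=example.com
BUCKET="prefix-$(echo "${DOMAIN}" | tr '.' '-')-state-store"
echo "${BUCKET}"

# Recommended by the kops docs (requires AWS credentials):
# aws s3api put-bucket-versioning \
#     --bucket "${BUCKET}" \
#     --versioning-configuration Status=Enabled
```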
```
export NAME=kluster.example.com
export KOPS_STATE_STORE=s3://prefix-example-com-state-store

kops create cluster \
    --zones us-west-2a \
    ${NAME}
kops update cluster ${NAME} --yes
```
At this point, if everything went through without errors, you should have a working Kubernetes cluster. To verify:
```
kubectl get nodes
kops validate cluster
kubectl -n kube-system get po
```
The commands above should return valid output. To undo everything done so far:
```
kops delete cluster --name ${NAME} --yes
```
Sometimes delete cluster may not work, especially if you have changed some settings on AWS by hand, for example launched an EC2 instance, modified Route53, or altered a security group. You can delete the entry that kops complains about and try the command again.
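One way to hunt down leftovers is to filter AWS resources by the cluster tag. kops tags the EC2 resources it creates with a KubernetesCluster tag set to the cluster name (the tag name is my assumption based on kops' AWS tagging convention; verify in the console):

```shell
# Build a describe-instances filter matching the cluster's tag.
NAME=kluster.example.com
FILTER="Name=tag:KubernetesCluster,Values=${NAME}"
echo "${FILTER}"

# List instance ids still carrying the tag (requires AWS credentials):
# aws ec2 describe-instances --filters "${FILTER}" \
#     --query 'Reservations[].Instances[].InstanceId'
```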
Moving on to deploying something useful, let's deploy nginx and expose it as nginx.example.com.
To do this, we need to install external-dns. Make a local copy of the YAML file and put it in a deployments subfolder.
Refer to the YAML files addressed here on this gist.
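Since the gist itself is not reproduced here, below is only a rough sketch of what deployments/external-dns.yaml might contain; the image tag and flag values are assumptions, so prefer whatever the gist specifies:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.14.0  # assumed tag
        args:
        - --source=service             # watch Services for hostname annotations
        - --source=ingress             # and Ingresses
        - --domain-filter=example.com  # only touch records in this zone
        - --provider=aws
        - --policy=upsert-only         # never delete records it did not create
```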
```
kubectl apply -f deployments/external-dns.yaml
```
Verify that there are no errors in the logs. If you get an error, you may have to manually grant Route53 full access to the IAM role for the nodes, something like nodes.example.com or masters.example.com. Check the IAM section of the AWS console; kops will have created a couple of new IAM groups, users, and policies.
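For reference, the Route53 policy external-dns needs on the node role looks roughly like the one below (this mirrors the policy pattern in external-dns' AWS documentation; scope the hosted-zone ARN down to your own zone if you can):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": ["*"]
    }
  ]
}
```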
```
kubectl logs -f $(kubectl get po -l app=external-dns -o name)
```
To deploy nginx on Kubernetes:

```
kubectl create -f deployments/nginx.yaml
```
After some time (maybe up to 5 minutes) you should be able to open nginx.example.com in a browser.
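If you don't have the gist handy, deployments/nginx.yaml would look something like the sketch below; the external-dns hostname annotation on the Service is what makes external-dns create the nginx.example.com record (the rest of the manifest is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # external-dns watches for this annotation and creates the Route53 record
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```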
```
kubectl get nodes --show-labels
kubectl config view
kubectl get deployments
kubectl get svc
kubectl get ing
```
Get the admin password from `kubectl config view`.
```
kubectl create -f deployments/kubernetes-dashboard.yaml
kubectl apply -f deployments/kube-dashboard-access.yaml
```
Access the UI at something like https://api.kluster.example.com.
You need to grant Route53 permissions to the IAM role, something like nodes.kluster.example.com. Do this from the AWS console; note that the region depends on which region the cluster is deployed in.
```
kubectl apply -f deployments/external-dns.yaml
kubectl logs -f $(kubectl get po -l app=external-dns -o name)
```
To pull images from a private Docker Hub repository, create a registry secret (replace the DOCKER_* placeholders with your own values):

```
kubectl create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=DOCKER_USERNAME \
    --docker-password=DOCKER_PASSWORD \
    --docker-email=DOCKER_EMAIL
kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 -D
```
```
kubectl create secret generic papertrail-destination --from-literal=papertrail-destination=syslog://logs2.papertrailapp.com:YOUR_PORT
kubectl create -f https://help.papertrailapp.com/assets/files/papertrail-logspout-daemonset.yml
```
Below are some of the notes I made while getting my hands dirty with Kubernetes.
```
# Optional
ID=$(uuidgen) && aws route53 create-hosted-zone --name k10s.example.in --caller-reference $ID | \
    jq .DelegationSet.NameServers

# Optional
aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="example.in.") | .Id'

# Optional
aws route53 change-resource-record-sets \
    --hosted-zone-id YOUR_HOSTED_ZONE_ID \
    --change-batch file://subdomain.json

# Optional
dig ns k10s.example.in

# Mandatory
aws s3api create-bucket \
    --bucket k10s-example-in-state-store \
    --region us-east-1

# Mandatory
export NAME=kluster.example.in
export KOPS_STATE_STORE=s3://k10s-example-in-state-store

kops create cluster \
    --zones us-west-2a \
    ${NAME}

# Optional
kops edit cluster ${NAME} # For editing configs

kops update cluster ${NAME} --yes
```
Below are the suggestions kops prints in its output:
- validate cluster: kops validate cluster
- list nodes: kubectl get nodes --show-labels
- ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.kluster.example.in
- the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
- read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/addons.md.
Web UI (Dashboard) - Kubernetes
```
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy
kubectl delete -f deployments/kube-dashboard-access.yaml
```
kube-dashboard-access.yaml:

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```
```
kops validate cluster
kubectl get nodes
kubectl -n kube-system get po
kubectl cluster-info

kubectl run --image=nginx nginx-app --port=80

kubectl expose deployment nginx-app --port=80 --name=nginx-http
```
```
kubectl version
kubectl get nodes
kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080
kubectl get deployments
kubectl proxy # Another terminal
curl http://localhost:8001/version
# Set POD_NAME to the name of one of your pods first, e.g.
# export POD_NAME=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')
echo $POD_NAME
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/
kubectl logs $POD_NAME
kubectl exec $POD_NAME env
kubectl exec -ti $POD_NAME bash
cat server.js
```
```
kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=k10s.example.in"
kubectl cluster-info
kubectl get nodes
kubectl get deployment
kubectl get pods
kubectl expose deployment nginx-app --type=LoadBalancer
kubectl get services
kubectl run docker-node-express --replicas=2 --labels="run=load-balancer-example" --image=ch4nd4n/docker-node-express --port=3000
kubectl get deployments
kubectl get deployments docker-node-express
kubectl describe deployments docker-node-express
kubectl expose deployment docker-node-express --type=LoadBalancer --name=docker-node-express-service
kubectl get services docker-node-express-service
kubectl describe services docker-node-express-service
kubectl delete services docker-node-express-service
kubectl delete deployment docker-node-express
```
The `LoadBalancer Ingress:` field in the describe output will contain the address to look for.
Updating a build using a YAML file
```
kubectl cluster-info
# kubectl set image deployments/docker-node-express docker-node-express=ch4nd4n/docker-node-express:1.0.0
kubectl create -f deployments/docker-node-app.yaml
kubectl scale deployment/docker-node-express-deployment --replicas=3
kubectl replace -f deployments/docker-node-app.yaml --force
kubectl expose deployment docker-node-express-deployment --type=LoadBalancer --name=docker-node-express-service
kubectl delete -f deployments/docker-node-app.yaml
kubectl delete service docker-node-express-service

kubectl logs -f $(kubectl get po -l app=external-dns -o name)
```