-
kubernetes - kube-prometheus
This post is part of a series:

- https://teamsmiley.github.io/2020/09/30/kubespray-01-vagrant/
- https://teamsmiley.github.io/2020/10/01/kubespray-02-install-kube-local-internal-loadbalancer/
- https://teamsmiley.github.io/2020/10/02/kubespray-03-kube-with-haproxy/
- https://teamsmiley.github.io/2020/10/04/kubernetes-multi-cluster/
- https://teamsmiley.github.io/2020/10/05/kubernetes-cert-manager/
- https://teamsmiley.github.io/2020/10/06/kubernetes-metallb-ingress-nginx/
- https://teamsmiley.github.io/2020/10/06/kubernetes-helm/
- https://teamsmiley.github.io/2020/10/08/kubernetes-prometheus-grafana/
- https://teamsmiley.github.io/2020/10/08/kubernetes-log/
- https://teamsmiley.github.io/2020/10/10/kubernetes-backup-velero/

kube-prometheus

Let's set up monitoring for Kubernetes. Reference: https://github.com/coreos/kube-prometheus#quickstart

Install:

```shell
cd ~/Desktop
git clone https://github.com/coreos/kube-prometheus.git
cd ~/Desktop/kube-prometheus

# Create the namespace and CRDs, and then wait for them to be available
# before creating the remaining resources
kubectl create -f manifests/setup
kubectl create -f manifests/

# Verify
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done

## Delete
kubectl delete --ignore-not-found=true -f...
```
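The `until ... get servicemonitors` loop above waits for the ServiceMonitor CRD to become available. Once it is, additional scrape targets can be declared with manifests of this shape — a minimal sketch; the `example-app` name, labels, and port are illustrative, not from the original post:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app          # illustrative name
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: example-app       # must match the target Service's labels
  endpoints:
    - port: web              # must match a named port on the Service
      interval: 30s
```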
-
kubernetes helm
helm

Since Helm 3 you no longer have to install Tiller, which is nice.

Install — on macOS:

```shell
brew install helm
helm version
```

Finding packages:

```shell
helm search hub
# Searches the Helm Hub, which aggregates charts from many repositories.
helm search hub cert-manager

helm search repo
# Searches the repositories added to the local client with `helm repo add`.
# The search runs on local data, so no public network access is required.
helm search repo cert-manager
```

repo

helm...
-
kubernetes MetalLB and Ingress Nginx
kubernetes MetalLB and Ingress-Nginx

MetalLB (a load balancer for bare-metal clusters)

If you're using kube-proxy in IPVS mode, since Kubernetes v1.14.2 you have to enable strict ARP mode.

```shell
kubectl edit configmap -n kube-system kube-proxy
```

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true   # change false to true
```

Or automate it:

```shell
# view
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" |...
```
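The `sed` pipeline above is meant to be piped back into `kubectl`; the text transform itself can be sanity-checked locally without a cluster. A minimal sketch using a small sample file (not the live ConfigMap):

```shell
# Write a small sample of the kube-proxy config (the real one comes from
# `kubectl get configmap kube-proxy -n kube-system -o yaml`).
cat > /tmp/kube-proxy-sample.yaml <<'EOF'
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: false
EOF

# Apply the same substitution the automation uses and print the result.
sed -e "s/strictARP: false/strictARP: true/" /tmp/kube-proxy-sample.yaml
```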
-
kubernetes cert-manager
Let's use Helm:

```shell
brew install helm
```

Install cert-manager:

```shell
helm repo add jetstack https://charts.jetstack.io
kubectl config use-context c2
kubectl create namespace cert-manager
kubectl config set-context --current --namespace cert-manager
helm install jetstack/cert-manager --namespace cert-manager --generate-name --set installCRDs=true
```

Output:

```
NAME: cert-manager-1601943552
LAST DEPLOYED: Sun Oct 4 22:05:04 2020
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager has been deployed successfully!
In order to...
```
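After the install, cert-manager needs an issuer before it can mint certificates. A common next step is a Let's Encrypt `ClusterIssuer` along these lines — a sketch, not from the original post; the email address is a placeholder and the `nginx` ingress class assumes the ingress-nginx setup from this series:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod          # secret storing the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx              # assumes ingress-nginx
```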
-
kubespray - 04 Kubernetes Multi Cluster
kubespray - 04 managing multiple clusters

Now let's talk about how to manage two or more clusters created with kubespray.

Download cluster 1's config. Copy the first cluster's config file to your laptop with scp:

```shell
mkdir ~/.kube
scp c1-master:/etc/kubernetes/admin.conf ~/.kube/c1-config
```

Edit the parts that need changing:

```yaml
apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0F
      server: https://192.168.0.100:6443
    name: kube-c1                          # edit 1
contexts:
  - context:
      cluster: kube-c1                     # edit 1
      namespace: pickeatup-prod
      user: c1-admin                       # edit 2
    name: kubernetes-c1-admin@kubernetes   # ...
```
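With a second cluster's config edited the same way (the c2 names below are assumed by analogy, not taken from the post), the merged kubeconfig ends up with one context per cluster, switchable with `kubectl config use-context`:

```yaml
# Sketch of the contexts section after merging both cluster configs
contexts:
  - context:
      cluster: kube-c1
      user: c1-admin
    name: c1
  - context:
      cluster: kube-c2
      user: c2-admin
    name: c2
```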
-
kubespray - 03 install kube with haproxy
kubespray - 03 install kube - haproxy version

Set up haproxy on two servers so that the service keeps running even if one of them fails, and point Kubernetes at haproxy. See the diagram at the very end.

VM setup:

```shell
vi Vagrantfile
```

```ruby
Vagrant.configure("2") do |config|
  config.vm.provision "shell", path: "provision.sh"

  # master
  config.vm.define "minion1" do |minion1|
    minion1.vm.box = "centos/7"
    minion1.vm.hostname = "minion1"
    minion1.vm.network "private_network", ip: "192.168.33.21"
    minion1.vm.provider "virtualbox" do |v|
      v.memory = 1501
      v.cpus =...
```
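The point of this setup is that the apiserver sits behind haproxy. A minimal haproxy.cfg sketch for that backend — the master IPs here are assumptions following the Vagrantfile's 192.168.33.x private network, not values from the original post:

```
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master01 192.168.33.11:6443 check   # assumed IP
    server master02 192.168.33.12:6443 check   # assumed IP
```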
-
haproxy keepalived
Haproxy and KeepAlived setup

Prepare two servers.

New node IP info:

| node name | ip           | memo                    |
| --------- | ------------ | ----------------------- |
| haproxy01 | 192.168.33.2 | keepalive 192.168.33.10 |
| haproxy02 | 192.168.33.3 | keepalive 192.168.33.10 |

keepalived install — install it identically on both servers:

```shell
yum install -y keepalived
```

Config:

```shell
vi /etc/keepalived/keepalived.conf
```

Master:

```
global_defs {
  notification_email {
    brian@xgridcolo.com
  }
  notification_email_from brian@xgridcolo.com
  smtp_server 127.0.0.1
  smtp_connect_timeout 30
  router_id LVS_DEVEL
}

vrrp_script chk_haproxy {
  script "killall -0 haproxy"   # check the haproxy process
  interval 2                    # every 2 seconds
  weight 2                      # add...
```
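The excerpt is cut off before the `vrrp_instance` block that consumes `chk_haproxy`. A typical shape, using the VIP 192.168.33.10 from the table above — the `eth1` interface name and the priority values are assumptions, not from the original post:

```
vrrp_instance VI_1 {
    state MASTER               # BACKUP on haproxy02
    interface eth1             # assumed private-network interface
    virtual_router_id 51
    priority 101               # lower value (e.g. 100) on haproxy02
    advert_int 1
    virtual_ipaddress {
        192.168.33.10          # the keepalive VIP from the table
    }
    track_script {
        chk_haproxy            # fail over when haproxy dies
    }
}
```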
-
kubespray - 02 install kube - local internal loadbalancer
kubespray - 02 install kube - local internal loadbalancer

Install Python 3. You must use Python 3; macOS ships with 2.7 by default, which can be quite confusing. Go to https://www.python.org/downloads/mac-osx/ and install it.

Now install pip3:

```shell
cd
curl -O https://bootstrap.pypa.io/get-pip.py
sudo python3 get-pip.py
```

kubespray:

```shell
git clone https://github.com/kubernetes-sigs/kubespray
#git clone https://github.com/kubernetes-sigs/kubespray.git
git clone --depth 1 --branch v2.14.1 https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
python -V && pip -V
> Python 3.8.6
> pip...
```
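Since kubespray's Ansible tooling runs under Python 3, a quick preflight check before touching the playbooks can save confusion on a Mac that still defaults to 2.7. A minimal sketch:

```shell
# Fail fast if the python3 on PATH is not actually Python 3.
python3 -c 'import sys; assert sys.version_info.major == 3, sys.version'
python3 -V
```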
-
kubespray - 01 vagrant & virtual box
kubespray - 01 vagrant & virtual box installation

Having Vagrant makes it much easier to test locally.

virtual box (be sure to use version 6.0) && vagrant — download and install both:

- https://www.virtualbox.org/wiki/Downloads
- https://www.vagrantup.com/downloads.html

VirtualBox 6.1 does not work well on macOS…

Create a Vagrantfile:

```shell
mkdir kubespray
cd kubespray
vagrant init centos/7 --minimal
vi Vagrantfile
```

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
end
```

```shell
vagrant up

# connect to the created VM
vagrant ssh
```

The vagrant user and password...
-
kubernetes remove node
Removing a node that is no longer needed from Kubernetes.

Stop new pods from being scheduled on it (to let pods be scheduled on the node normally again, use `uncordon`):

```shell
kubectl cordon master03
```

Move the existing pods elsewhere:

```shell
kubectl drain master03
```

```
node/master03 cordoned
error: unable to drain node "master03", aborting command...

There are pending nodes to be drained:
 master03
error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/kube-proxy-v7jz8, kube-system/weave-net-f52sc, metallb-system/speaker-gzq42
```

It errors out. Add the option suggested by the DaemonSet error message:

```shell
kubectl drain master03 --ignore-daemonsets
```

That works. Remove it from the node list...