Prometheus on NKS Guide
📌 QA tested in the KR (Korea) region
https://github.com/sysnet4admin
Installing Helm v3.10.3
1. Verify that the helm binary is installed (if Helm is not installed yet, install it first; a sketch follows the version check below)
root@k8s-console:~# helm version
WARNING: Kubernetes configuration file is group-readable. This is
insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is
insecure. Location: /root/.kube/config
version.BuildInfo{Version:"v3.10.3",
GitCommit:"835b7334cfe2e5e27870ab3ed4135f136eecc704",
GitTreeState:"clean", GoVersion:"go1.18.9"}
❗ If you do not want to see the insecure warnings...
root@k8s-console:~# chmod 700 ~/.kube/config
root@k8s-console:~# helm version --short
v3.10.3+g835b733
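If Helm is not installed yet, one option (a minimal sketch, assuming a Linux console host with internet access; adjust as needed) is the official installer script pinned to the same version:
# Hypothetical install of Helm v3.10.3 via the official get-helm-3 script
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ DESIRED_VERSION=v3.10.3 ./get_helm.sh
$ helm version --short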
Prerequisites for Deploying Prometheus with Helm
1. Add the Helm repository for installing Prometheus
root@k8s-console:~# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories
2. Update the repository to fetch the latest charts
root@k8s-console:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "prometheus-community" chart repository
Update Complete. ⎈Happy Helming!⎈
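Optionally, confirm that the charts used in this guide are now visible and check which chart versions are available (a quick sanity check; the output changes over time):
$ helm search repo prometheus-community/prometheus --versions | head -5
$ helm search repo prometheus-community/kube-prometheus-stack --versions | head -5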
3. Check the pre-configured StorageClasses
root@k8s-console:~# kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nks-block-storage (default) blk.csi.ncloud.com Delete WaitForFirstConsumer true 17d
nks-nas-csi nas.csi.ncloud.com Delete WaitForFirstConsumer true 17d
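To double-check that nks-block-storage really is the default StorageClass (the command below should print true), something like the following works:
$ kubectl get storageclass nks-block-storage \
    -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'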
Deploying Prometheus
1. Deploy Prometheus to NKS with Helm
root@k8s-console:~# helm install prometheus \
prometheus-community/prometheus \
--set server.service.type="LoadBalancer" \
--namespace=monitoring \
--create-namespace
WARNING: Kubernetes configuration file is group-readable. This is
insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is
insecure. Location: /root/.kube/config
NAME: prometheus
LAST DEPLOYED: Sat Dec 17 17:03:41 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS
name from within your cluster:
prometheus-server.monitoring.svc.cluster.local
Get the Prometheus server URL by running these commands in the same
shell:
NOTE: It may take a few minutes for the LoadBalancer IP to be
available.
You can watch the status of by running 'kubectl get svc
--namespace monitoring -w prometheus-server'
export SERVICE_IP=$(kubectl get svc --namespace monitoring
prometheus-server -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:80
The Prometheus alertmanager can be accessed via port on the following
DNS name from within your cluster:
prometheus-%!s(<nil>).monitoring.svc.cluster.local
Get the Alertmanager URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace monitoring -l
"app=prometheus,component=" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace monitoring port-forward $POD_NAME 9093
#################################################################################
###### WARNING: Pod Security Policy has been disabled by default since     #####
###### it deprecated after k8s 1.25+. use                                  #####
###### (index .Values "prometheus-node-exporter" "rbac"                    #####
###### . "pspEnabled") with (index .Values                                 #####
###### "prometheus-node-exporter" "rbac" "pspAnnotations")                 #####
###### in case you still need it.                                          #####
#################################################################################
The Prometheus PushGateway can be accessed via port 9091 on the
following DNS name from within your cluster:
prometheus-prometheus-pushgateway.monitoring.svc.cluster.local
Get the PushGateway URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace monitoring -l
"app=prometheus-pushgateway,component=pushgateway" -o
jsonpath="{.items[0].metadata.name}")
kubectl --namespace monitoring port-forward $POD_NAME 9091
For more information on running Prometheus, visit:
https://prometheus.io/
❗ If you want to use a StorageClass other than nks-block-storage, set it explicitly as shown below (replace nks-block-storage with the StorageClass you want, e.g. nks-nas-csi):
helm install prometheus prometheus-community/prometheus \
--set alertmanager.persistentVolume.storageClass="nks-block-storage" \
--set server.persistentVolume.storageClass="nks-block-storage" \
--set server.service.type="LoadBalancer" \
--namespace=monitoring \
--create-namespace
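After installing, you can verify which values were actually overridden for the release (and, with --all, inspect the full computed values), for example:
$ helm get values prometheus -n monitoring
$ helm get values prometheus -n monitoring --all | grep -A 3 persistentVolume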
2. Check the deployed pods and services
root@k8s-console:~# kubectl get po,svc -n monitoring
NAME READY STATUS RESTARTS AGE
pod/prometheus-alertmanager-0 1/1 Running 0 3m37s
pod/prometheus-kube-state-metrics-7cdcf7cc98-rsgcr 1/1 Running 0 3m37s
pod/prometheus-prometheus-node-exporter-5qpn4 1/1 Running 0 3m37s
pod/prometheus-prometheus-pushgateway-959d84d7f-8ztlm 1/1 Running 0 3m37s
pod/prometheus-server-54956c9cfb-wlvms 2/2 Running 0 3m37s
NAME                                          TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)        AGE
service/prometheus-alertmanager               ClusterIP      198.19.133.139   <none>                                                                    9093/TCP       3m38s
service/prometheus-alertmanager-headless      ClusterIP      None             <none>                                                                    9093/TCP       3m38s
service/prometheus-kube-state-metrics         ClusterIP      198.19.185.119   <none>                                                                    8080/TCP       3m37s
service/prometheus-prometheus-node-exporter   ClusterIP      198.19.252.64    <none>                                                                    9100/TCP       3m37s
service/prometheus-prometheus-pushgateway     ClusterIP      198.19.193.200   <none>                                                                    9091/TCP       3m37s
service/prometheus-server                     LoadBalancer   198.19.178.17    monitoring-prometheus-se-18ca9-15174488-e4dd7137207d.kr.lb.naverncp.com   80:32534/TCP   3m38s
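Note that the chart NOTES above read .ip from the Service status, but the NKS load balancer in this output publishes a DNS name, so a small adjustment (a sketch) is to read the hostname field instead:
$ export SERVICE_HOST=$(kubectl get svc --namespace monitoring prometheus-server \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ echo http://$SERVICE_HOST:80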
3. Check the deployed Prometheus
4. Check the queried metric data (a console-based check is sketched below)
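Besides the web UI, a simple query against the Prometheus HTTP API can confirm metrics are flowing (a sketch reusing SERVICE_HOST from above; the query up lists scrape targets, where 1 means healthy):
$ curl -s "http://$SERVICE_HOST/api/v1/query?query=up" | head -c 300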
5. List and delete the deployed Prometheus release
root@k8s-console:~# helm list -n monitoring
NAME         NAMESPACE    REVISION   UPDATED                                  STATUS     CHART               APP VERSION
prometheus   monitoring   1          2022-12-17 17:03:41.29034263 +0900 KST   deployed   prometheus-19.0.2   v2.40.5
root@k8s-console:~# helm uninstall prometheus -n monitoring
release "prometheus" uninstalled
6. Confirm that the Prometheus resources were deleted
root@k8s-console:~# helm list -n monitoring
NAME   NAMESPACE   REVISION   UPDATED   STATUS   CHART   APP VERSION
root@k8s-console:~#
root@k8s-console:~# kubectl get po,svc -n monitoring
No resources found in monitoring namespace.
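One thing worth checking after uninstalling: PVCs created from StatefulSet volumeClaimTemplates (such as Alertmanager's storage-prometheus-alertmanager-0) are not removed by Helm and may remain, so clean them up manually if you no longer need the data (a sketch):
$ kubectl get pvc,pv -n monitoring
$ kubectl delete pvc storage-prometheus-alertmanager-0 -n monitoring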
Deploying the Kube Prometheus Stack (hereafter, the Prometheus stack)
1. Deploy the Prometheus stack to NKS with Helm
root@k8s-console:~# helm install kube-prometheus-stack \
prometheus-community/kube-prometheus-stack \
--set prometheus.service.type=LoadBalancer \
--set grafana.service.type=LoadBalancer \
--namespace=monitoring \
--create-namespace
NAME: kube-prometheus-stack
LAST DEPLOYED: Sat Dec 17 17:14:15 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace monitoring get pods -l
"release=kube-prometheus-stack"
Visit https://github.com/prometheus-operator/kube-prometheus for
instructions on how to create & configure Alertmanager and Prometheus
instances using the Operator.
2. Check the deployed pods and services
root@k8s-console:~# kubectl get po,svc -n monitoring
NAME READY STATUS RESTARTS AGE
pod/alertmanager-kube-prometheus-stack-alertmanager-0 2/2 Running 1 (104s ago) 105s
pod/kube-prometheus-stack-grafana-77fd7cc8ff-57tp5 3/3 Running 0 114s
pod/kube-prometheus-stack-kube-state-metrics-579bf68b5-rj5ff 1/1 Running 0 114s
pod/kube-prometheus-stack-operator-64bc8bd9fd-2ggrs 1/1 Running 0 114s
pod/kube-prometheus-stack-prometheus-node-exporter-rv8b5 1/1 Running 0 115s
pod/prometheus-kube-prometheus-stack-prometheus-0 2/2 Running 0 105s
NAME                                                     TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                      AGE
service/alertmanager-operated                            ClusterIP      None             <none>                                                                    9093/TCP,9094/TCP,9094/UDP   105s
service/kube-prometheus-stack-alertmanager               ClusterIP      198.19.250.205   <none>                                                                    9093/TCP                     115s
service/kube-prometheus-stack-grafana                    LoadBalancer   198.19.171.157   monitoring-kube-promethe-4b1de-15174529-f0806941ff3d.kr.lb.naverncp.com   80:31512/TCP                 115s
service/kube-prometheus-stack-kube-state-metrics         ClusterIP      198.19.173.244   <none>                                                                    8080/TCP                     115s
service/kube-prometheus-stack-operator                   ClusterIP      198.19.134.58    <none>                                                                    443/TCP                      115s
service/kube-prometheus-stack-prometheus                 LoadBalancer   198.19.233.72    monitoring-kube-promethe-5d777-15174528-c0eedcb927a3.kr.lb.naverncp.com   9090:32176/TCP               115s
service/kube-prometheus-stack-prometheus-node-exporter   ClusterIP      198.19.202.67    <none>                                                                    9100/TCP                     115s
service/prometheus-operated                              ClusterIP      None             <none>                                                                    9090/TCP                     105s
❗ What is the major problem with the Prometheus stack as deployed above?
When plain Prometheus is deployed, PVs and PVCs are created through the default StorageClass (nks-block-storage), as shown below.
root@k8s-console:~# kubectl get pv -n monitoring
NAME                             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                           STORAGECLASS        REASON   AGE
pvc-0d5a8305acee499e8a0d57245a   10Gi       RWO            Delete           Bound    monitoring/storage-prometheus-alertmanager-0    nks-block-storage            9m42s
pvc-6ae9e2442da2475295da9b1050   10Gi       RWO            Delete           Bound    monitoring/prometheus-server                    nks-block-storage            9m44s
root@k8s-console:~# kubectl get pvc -n monitoring
NAME                                STATUS   VOLUME                           CAPACITY   ACCESS MODES   STORAGECLASS        AGE
prometheus-server                   Bound    pvc-6ae9e2442da2475295da9b1050   10Gi       RWO            nks-block-storage   10m
storage-prometheus-alertmanager-0   Bound    pvc-0d5a8305acee499e8a0d57245a   10Gi       RWO            nks-block-storage   10m
However, if no StorageClass is specified for the Prometheus stack, it is deployed without PVs and PVCs, using emptyDir volumes for temporary storage only, as shown below.
root@k8s-console:~# kubectl get pv,pvc -n monitoring | grep prometheus-server
root@k8s-console:~#
root@k8s-console:~# kubectl get po -n monitoring prometheus-kube-prometheus-stack-prometheus-0 -o yaml | grep volumes -A30
  volumes:
  - name: config
    secret:
      defaultMode: 420
      secretName: prometheus-kube-prometheus-stack-prometheus
  - name: tls-assets
    projected:
      defaultMode: 420
      sources:
      - secret:
          name: prometheus-kube-prometheus-stack-prometheus-tls-assets-0
  - emptyDir: {}
    name: config-out
  - configMap:
      defaultMode: 420
      name: prometheus-kube-prometheus-stack-prometheus-rulefiles-0
    name: prometheus-kube-prometheus-stack-prometheus-rulefiles-0
  - name: web-config
    secret:
      defaultMode: 420
      secretName: prometheus-kube-prometheus-stack-prometheus-web-config
  - emptyDir: {}
    name: prometheus-kube-prometheus-stack-prometheus-db
  - name: kube-api-access-g8rvd
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
<snipped>
Therefore, from a production standpoint, the stack should be configured to actually use a StorageClass, and this requires deploying with additional settings in values.yaml (or forking the chart and modifying it). Refer to the following links:
Prometheus: https://github.com/prometheus-community/helm-charts/issues/186
Grafana: https://github.com/prometheus-community/helm-charts/issues/436
Helm values:
https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing
If you really want to do this, see Appendix 1.
3. Check the deployed Prometheus
❗ If you want to change the scrapeInterval after deployment
$ kubectl get prometheus -n monitoring -o yaml | nl | grep scrap
57 scrapeInterval: 30s
$ kubectl edit prometheus -n monitoring
prometheus.monitoring.coreos.com/kube-prometheus-stack-prometheus edited
$ kubectl get prometheus -n monitoring -o yaml | nl | grep scrap
57 scrapeInterval: 2m
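As a non-interactive alternative to kubectl edit, the same change can be applied as a merge patch against the Prometheus custom resource shown above (a sketch):
$ kubectl patch prometheus kube-prometheus-stack-prometheus -n monitoring \
    --type merge -p '{"spec":{"scrapeInterval":"2m"}}'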
4. Check the deployed Grafana and log in
ID: admin
Password: prom-operator
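If the password has been changed from the chart default, it can be read back from the Secret that the Grafana subchart creates (a sketch, assuming the default Secret name <release>-grafana):
$ kubectl get secret kube-prometheus-stack-grafana -n monitoring \
    -o jsonpath='{.data.admin-password}' | base64 -d; echo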
5. Confirm that the pre-configured data source is Prometheus
6. To load a pre-built dashboard, enter 13770 in the Import menu
7. Select Prometheus as the Data Source and click Import
8. Review the imported dashboard 13770 and fix any N/A and No data panels
9. (If needed) List and delete the deployed Prometheus stack
root@k8s-console:~# helm list -n monitoring
NAME                    NAMESPACE    REVISION   UPDATED                                   STATUS     CHART                          APP VERSION
kube-prometheus-stack   monitoring   1          2022-12-17 17:14:15.264607955 +0900 KST   deployed   kube-prometheus-stack-43.1.1   0.61.1
root@k8s-console:~# helm uninstall -n monitoring kube-prometheus-stack
release "kube-prometheus-stack" uninstalled
Appendix 1
1. Generate a values file with helm inspect
$ helm inspect values prometheus-community/kube-prometheus-stack \
  --version 43.1.1 > kube-prometheus-stack-43.1.1.values
2. Add and modify the required content in the generated values file
The line numbers may differ slightly depending on when you run this and the order of your edits; you can locate the relevant sections with grep, as sketched below.
For reference, line numbers can be displayed in vi with :set nu.
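To find the sections to edit without scrolling, a grep over the generated file (line numbers in the output will vary by chart version) might look like:
$ grep -n -e 'storageSpec' -e 'volumeClaimTemplate' -e 'adminPassword' -e 'persistence:' \
    kube-prometheus-stack-43.1.1.values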
Modify
542     ## Storage is the definition of how storage will be used by the Alertmanager instances.
543     ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/storage.md
544     ##
545     storage:
546       volumeClaimTemplate:
547         spec:
548           storageClassName: nks-block-storage
549           accessModes: ["ReadWriteOnce"]
550           resources:
551             requests:
552               storage: 50Gi
553     #    selector: {}
Add
697 ## Using default values from https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml
698 ##
699 grafana:
700   enabled: true
701   namespaceOverride: ""
702
703   # override configuration by hoon
704   persistence:
705     enabled: true
706     type: pvc
707     storageClassName: nks-block-storage
708     accessModes:
709     - ReadWriteOnce
710     size: 100Gi
711     finalizers:
712     - kubernetes.io/pvc-protection
Modify
726   ## Timezone for the default dashboards
727   ## Other options are: browser or a specific timezone, i.e. Europe/Luxembourg
728   ##
729   defaultDashboardsTimezone: utc
730
731   adminPassword: admin
732
Modify
2580     ## Prometheus StorageSpec for persistent data
2581     ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/storage.md
2582     ##
2583     storageSpec:
2584     ## Using PersistentVolumeClaim
2585     ##
2586       volumeClaimTemplate:
2587         spec:
2588           storageClassName: nks-block-storage
2589           accessModes: ["ReadWriteOnce"]
2590           resources:
2591             requests:
2592               storage: 50Gi
2593     #    selector: {}
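Before installing, you can dry-render the chart with the edited values and confirm that the PVC templates are picked up (kps-check below is just a placeholder release name for the render):
$ helm template kps-check prometheus-community/kube-prometheus-stack \
    --version 43.1.1 --namespace monitoring \
    --values kube-prometheus-stack-43.1.1.values | grep -B 2 -A 6 volumeClaimTemplate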
3. Run helm install
root@k8s-console:~# helm install \
prometheus-community/kube-prometheus-stack \
--set prometheus.service.type=LoadBalancer \
--set grafana.service.type=LoadBalancer \
--create-namespace \
--namespace monitoring \
--generate-name \
--values kube-prometheus-stack-43.1.1.values
NAME: kube-prometheus-stack-1671267408
LAST DEPLOYED: Sat Dec 17 17:56:49 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace monitoring get pods -l
"release=kube-prometheus-stack-1671267408"
Visit https://github.com/prometheus-operator/kube-prometheus for
instructions on how to create & configure Alertmanager and Prometheus
instances using the Operator.
4. Check the Prometheus stack created from the modified values file
root@k8s-console:~# kubectl get po,svc,pv,pvc -n monitoring
NAME READY STATUS RESTARTS AGE
pod/alertmanager-kube-prometheus-stack-1671-alertmanager-0 2/2 Running 1 (24s ago) 36s
pod/kube-prometheus-stack-1671-operator-696ddf996d-2tbft 1/1 Running 0 37s
pod/kube-prometheus-stack-1671267408-grafana-75cf5cff79-hrs59 3/3 Running 0 37s
pod/kube-prometheus-stack-1671267408-kube-state-metrics-7b44cdrf8q9 1/1 Running 0 37s
pod/kube-prometheus-stack-1671267408-prometheus-node-exporter-npmpk 1/1 Running 0 37s
pod/prometheus-kube-prometheus-stack-1671-prometheus-0 2/2 Running 0 35s
NAME                                                                 TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                      AGE
service/alertmanager-operated                                        ClusterIP      None             <none>                                                                    9093/TCP,9094/TCP,9094/UDP   36s
service/kube-prometheus-stack-1671-alertmanager                      ClusterIP      198.19.141.183   <none>                                                                    9093/TCP                     37s
service/kube-prometheus-stack-1671-operator                          ClusterIP      198.19.249.190   <none>                                                                    443/TCP                      37s
service/kube-prometheus-stack-1671-prometheus                        LoadBalancer   198.19.189.46    monitoring-kube-promethe-94513-15174705-1fbb6ff1467d.kr.lb.naverncp.com   9090:30008/TCP               37s
service/kube-prometheus-stack-1671267408-grafana                     LoadBalancer   198.19.206.4     <pending>                                                                 80:31398/TCP                 37s
service/kube-prometheus-stack-1671267408-kube-state-metrics          ClusterIP      198.19.225.152   <none>                                                                    8080/TCP                     37s
service/kube-prometheus-stack-1671267408-prometheus-node-exporter    ClusterIP      198.19.191.119   <none>                                                                    9100/TCP                     37s
service/prometheus-operated                                          ClusterIP      None             <none>                                                                    9090/TCP                     35s
NAME                                            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                                                                STORAGECLASS        REASON   AGE
persistentvolume/pvc-7c195a1da23d4755b21b6ed2db   50Gi     RWO            Delete           Bound    monitoring/prometheus-kube-prometheus-stack-1671-prometheus-db-prometheus-kube-prometheus-stack-1671-prometheus-0   nks-block-storage            33s
persistentvolume/pvc-8c1c8c896efb40b6af8fe82a42   50Gi     RWO            Delete           Bound    monitoring/alertmanager-kube-prometheus-stack-1671-alertmanager-db-alertmanager-kube-prometheus-stack-1671-alertmanager-0   nks-block-storage    34s
persistentvolume/pvc-c4ba41508e4d4914a1f255f0ae   100Gi    RWO            Delete           Bound    monitoring/kube-prometheus-stack-1671267408-grafana                                                                  nks-block-storage            36s
NAME                                                                                                                                STATUS   VOLUME                           CAPACITY   ACCESS MODES   STORAGECLASS        AGE
persistentvolumeclaim/alertmanager-kube-prometheus-stack-1671-alertmanager-db-alertmanager-kube-prometheus-stack-1671-alertmanager-0   Bound   pvc-8c1c8c896efb40b6af8fe82a42   50Gi    RWO            nks-block-storage   36s
persistentvolumeclaim/kube-prometheus-stack-1671267408-grafana                                                                          Bound   pvc-c4ba41508e4d4914a1f255f0ae   100Gi   RWO            nks-block-storage   38s
persistentvolumeclaim/prometheus-kube-prometheus-stack-1671-prometheus-db-prometheus-kube-prometheus-stack-1671-prometheus-0            Bound   pvc-7c195a1da23d4755b21b6ed2db   50Gi    RWO            nks-block-storage   35s
References:
https://1week.tistory.com/43
https://passwd.tistory.com/entry/Helm-kube-prometheus-stack-Grafana-Persistence-%ED%99%9C%EC%84%B1%ED%99%94
https://github.com/prometheus-community/helm-charts/issues/113