| Hostname | IP | Notes |
|---|---|---|
| k8s-master01 | 192.168.1.21 | Master 1 |
| k8s-master02 | 192.168.1.22 | Master 2 |
| k8s-master03 | 192.168.1.23 | Master 3 |
| k8s-node01 | 192.168.1.24 | Worker node |
| vip | 192.168.1.100 | keepalived virtual IP |
Each machine should have at least 2 GB of RAM.

OS: CentOS Linux release 7.4.1708 (Core)

Software versions:

- docker 17.03.2-ce
- socat-1.7.3.2-2.el7.x86_64
- kubelet-1.10.0-0.x86_64
- kubernetes-cni-0.6.0-0.x86_64
- kubectl-1.10.0-0.x86_64
- kubeadm-1.10.0-0.x86_64
```bash
# Run the matching command on each host:
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
```
Configure /etc/hosts:

```bash
cat <<EOF > /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.21 k8s-master01
192.168.1.22 k8s-master02
192.168.1.23 k8s-master03
192.168.1.24 k8s-node01
EOF
```
```bash
ssh-keygen                   # just press Enter through the prompts
ssh-copy-id k8s-master02     # answer yes and enter the password here
ssh-copy-id k8s-master03
ssh-copy-id k8s-node01
```
```bash
# Stop the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

# Disable SELinux
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

# Load br_netfilter
modprobe br_netfilter

# Add kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the configuration
sysctl -p /etc/sysctl.d/k8s.conf

# Verify the files were created
ls /proc/sys/net/bridge

# Add a domestic (Aliyun) Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install dependencies and related tools
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl

# Configure ntp (a reboot is recommended after this)
systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service

# /etc/security/limits.conf limits users' use of system resources
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
```
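A quick way to confirm the bridge settings took effect (a minimal check, not part of the original write-up):

```bash
# Both values should print as 1 after the sysctl -p above.
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```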
```bash
yum install -y keepalived
systemctl enable keepalived
```
k8s-master01's `keepalived.conf`; a few points in this file need attention, covered in the notes below.
```bash
cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.1.100:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 61
    priority 100
    advert_int 1
    mcast_src_ip 192.168.1.21
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        192.168.1.22
        192.168.1.23
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOF
```
k8s-master02's `keepalived.conf`:
```bash
cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.1.100:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 61
    priority 90
    advert_int 1
    mcast_src_ip 192.168.1.22
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        192.168.1.21
        192.168.1.23
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOF
```
k8s-master03's `keepalived.conf`:
```bash
cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.1.100:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 61
    priority 80
    advert_int 1
    mcast_src_ip 192.168.1.23
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        192.168.1.21
        192.168.1.22
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOF
```
Note

In the vrrp_instance VI_1 block, change a few lines to match your environment (a quick VIP check is sketched below):

- state: the role of this instance; one MASTER and one or more BACKUPs.
- interface: the NIC the VIP binds to, i.e. the NIC that handles VRRP multicast packets.
- priority: initial priority used when electing the MASTER; valid range is 0-255.
- auth_pass: any random string.
- virtual_ipaddress: the VIP.
- mcast_src_ip: this machine's own IP address.

For the full set of keepalived parameters, consult the keepalived documentation.
Restart keepalived to load the configuration:

```bash
systemctl restart keepalived
```
Start k8s-master01 first, so the VIP binds to the first master.
```
[root@k8s-master01 new]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:80:cc:41 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.24/24 brd 192.168.1.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.1.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe80:cc41/64 scope link
       valid_lft forever preferred_lft forever
```
Then start the other nodes in turn.
The Kubernetes components encrypt their communication with TLS certificates. This document uses CloudFlare's PKI toolkit `cfssl` to generate the Certificate Authority (CA) certificate and key files. The CA certificate is self-signed and is used to sign all other TLS certificates created later.
```bash
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*
export PATH=/usr/local/bin:$PATH   # matches where the binaries were installed
```
```bash
mkdir /root/ssl
cd /root/ssl

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes-Soulmate": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-Soulmate",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
```bash
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
# ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
```
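To double-check what was just generated, the CA certificate can be inspected with the tools installed above (a hedged sanity check, not in the original):

```bash
# Dump the parsed CA certificate as JSON (issuer, SANs, validity window).
cfssl-certinfo -cert ca.pem
# The same with openssl, showing subject and expiry only:
openssl x509 -in ca.pem -noout -subject -dates
```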
```bash
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.1.21",
    "192.168.1.22",
    "192.168.1.23"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
The hosts field lists the etcd node IPs authorized to use this certificate.
```bash
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd
# This produces etcd.pem and etcd-key.pem.
```
```bash
mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/
ssh -n k8s-master02 "mkdir -p /etc/etcd/ssl && exit"
ssh -n k8s-master03 "mkdir -p /etc/etcd/ssl && exit"
scp -r /etc/etcd/ssl/*.pem k8s-master02:/etc/etcd/ssl/
scp -r /etc/etcd/ssl/*.pem k8s-master03:/etc/etcd/ssl/
```
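The same distribution can also be written as a loop (an equivalent sketch, assuming the SSH trust set up in section 2):

```bash
for h in k8s-master02 k8s-master03; do
  ssh "$h" "mkdir -p /etc/etcd/ssl"
  scp /etc/etcd/ssl/*.pem "$h":/etc/etcd/ssl/
done
```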
All certificates are now generated; next, install and configure etcd on each master.
```bash
yum install -y etcd
mkdir -p /var/lib/etcd   # the working directory must be created first
```
k8s-master01's `etcd.service`:
```bash
cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name k8s-master01 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.1.21:2380 \
  --listen-peer-urls https://192.168.1.21:2380 \
  --listen-client-urls https://192.168.1.21:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.1.21:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster k8s-master01=https://192.168.1.21:2380,k8s-master02=https://192.168.1.22:2380,k8s-master03=https://192.168.1.23:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```
k8s-master02's `etcd.service`:
```bash
cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name k8s-master02 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.1.22:2380 \
  --listen-peer-urls https://192.168.1.22:2380 \
  --listen-client-urls https://192.168.1.22:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.1.22:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster k8s-master01=https://192.168.1.21:2380,k8s-master02=https://192.168.1.22:2380,k8s-master03=https://192.168.1.23:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```
k8s-master03's `etcd.service`:
```bash
cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name k8s-master03 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.1.23:2380 \
  --listen-peer-urls https://192.168.1.23:2380 \
  --listen-client-urls https://192.168.1.23:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.1.23:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster k8s-master01=https://192.168.1.21:2380,k8s-master02=https://192.168.1.22:2380,k8s-master03=https://192.168.1.23:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```
```bash
mv /etc/systemd/system/etcd.service /usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
```
By default systemd reads unit files from /etc/systemd/system/, but most entries there are symlinks into /usr/lib/systemd/system/, where the real unit files live. The first etcd process to start will appear to hang for a while as it waits for the other members to join the cluster; this is normal.
Repeat the steps above on all etcd nodes until the etcd service is running on every machine.
```
[root@k8s-master01 ~]# curl -L http://127.0.0.1:2379/health
{"health": "true"}   # this response means etcd is healthy
```
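Beyond the local health endpoint, cluster membership can be checked over TLS with etcdctl (a sketch; the CentOS etcd package ships etcdctl with the v2 API by default, where `cluster-health` is available):

```bash
etcdctl --ca-file=/etc/etcd/ssl/ca.pem \
        --cert-file=/etc/etcd/ssl/etcd.pem \
        --key-file=/etc/etcd/ssl/etcd-key.pem \
        --endpoints=https://192.168.1.21:2379,https://192.168.1.22:2379,https://192.168.1.23:2379 \
        cluster-health
# Expect "cluster is healthy" once all three members are up.
```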
```bash
yum install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
```
Edit the Docker unit file and change the `ExecStart` line to add a listening address and a registry mirror:

```bash
vim /usr/lib/systemd/system/docker.service
```

```
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --registry-mirror=https://ms3cfraz.mirror.aliyuncs.com
```
```bash
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl status docker
```
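To confirm the mirror was picked up (a quick hedged check; the field name matches this Docker generation):

```bash
docker info | grep -A 1 'Registry Mirrors'
```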
```bash
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet
```
```bash
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```

```
# Change this line:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# Add this line:
Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0"
```
systemctl daemon-reload
```bash
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
```
```bash
cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://192.168.1.21:2379
  - https://192.168.1.22:2379
  - https://192.168.1.23:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.10.0
api:
  advertiseAddress: "192.168.1.100"
token: "b99a00.a144ef80536d4344"
tokenTTL: "0s"
apiServerCertSANs:
- k8s-master01
- k8s-master02
- k8s-master03
- k8s-node01
- 192.168.1.21
- 192.168.1.22
- 192.168.1.23
- 192.168.1.24
- 192.168.1.100
featureGates:
  CoreDNS: true
imageRepository: "registry.cn-hangzhou.aliyuncs.com/k8sth"
EOF
```
kubeadm init --config config.yaml
A successful initialization prints the following:
```
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.100:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:6ed0d577845d09281b7ff098369d9f88ea4bfc094305893f53fccac3bab01eca
```
If it fails, read the output carefully and check keepalived (is the VIP up?) and the etcd service (is it healthy?).
To roll back after a failed initialization:
```bash
kubeadm reset
rm -rf /etc/kubernetes/*.conf
rm -rf /etc/kubernetes/manifests/*.yaml
docker ps -a | awk '{print $1}' | xargs docker rm -f
systemctl stop kubelet
```
```bash
# To make kubectl work for a non-root user, run these commands
# (they are also part of the kubeadm init output):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
If you skip step 7.3, every kubectl command will fail with: `The connection to the server localhost:8080 was refused - did you specify the right host or port?`
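Once kubectl works, two quick sanity checks (a sketch, not part of the original steps):

```bash
# Control-plane component health as seen by the apiserver.
kubectl get componentstatuses
# Confirms kubectl is talking to the VIP endpoint.
kubectl cluster-info
```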
```bash
scp -r /etc/kubernetes/pki k8s-master02:/etc/kubernetes/
scp -r /etc/kubernetes/pki k8s-master03:/etc/kubernetes/
```
Before this step you can already look at the cluster state. Until flannel is deployed, the node and the pending pods cannot become Ready because there is no pod network, which is why this step matters.
```
# Check node and pod status
[root@k8s-master01 system]# kubectl get node
NAME           STATUS     ROLES     AGE       VERSION
k8s-master01   NotReady   master    47s       v1.10.2
[root@k8s-master01 system]# kubectl get pods --namespace="kube-system"
NAME                                    READY     STATUS    RESTARTS   AGE
coredns-7997f8864c-fgfph                0/1       Pending   0          44s
coredns-7997f8864c-ng2p9                0/1       Pending   0          44s
kube-controller-manager-k8s-master01   1/1       Running   0          59s
kube-proxy-v8f25                        1/1       Running   0          44s
kube-scheduler-k8s-master01             1/1       Running   0          52s
```
Install flannel:

```bash
# Fetch kube-flannel.yml
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Create the kube-flannel pods
kubectl create -f kube-flannel.yml
# Or do both in one step:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
With flannel deployed, the cluster state looks much better:
```
[root@k8s-master01 system]# kubectl get node
NAME           STATUS    ROLES     AGE       VERSION
k8s-master01   Ready     master    42m       v1.10.2
[root@k8s-master01 system]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   coredns-7997f8864c-fgfph                1/1       Running   0          41m
kube-system   coredns-7997f8864c-ng2p9                1/1       Running   0          41m
kube-system   kube-apiserver-k8s-master01             1/1       Running   0          40m
kube-system   kube-controller-manager-k8s-master01    1/1       Running   0          41m
kube-system   kube-flannel-ds-w8xfx                   1/1       Running   0          39m
kube-system   kube-proxy-v8f25                        1/1       Running   0          41m
kube-system   kube-scheduler-k8s-master01             1/1       Running   0          41m
```
At this point the single-master cluster is deployed successfully.
First create the dashboard YAML file.
```bash
cat <<EOF >kubernetes-dashboard.yaml
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: siriuszg/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          #- --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          #- --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 9090
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
---
# ------------------------------------------------------------ #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-external
  namespace: kube-system
spec:
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30090
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
EOF
```
Pull the image. It is hosted on Docker Hub, so pull speed depends entirely on your network.
```bash
# Pull the dashboard image: https://hub.docker.com/r/siriuszg/kubernetes-dashboard-amd64/
docker pull siriuszg/kubernetes-dashboard-amd64:v1.8.3
```
Once modified, create the pod:
kubectl create -f kubernetes-dashboard.yaml
If it works you will see kubernetes-dashboard in Running; if not, check the pod status:
```
[root@k8s-master kubernetes-dashboard-master]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   coredns-7997f8864c-92xtj                1/1       Running   0          4h
kube-system   coredns-7997f8864c-cwvpl                1/1       Running   0          4h
kube-system   kube-apiserver-k8s-master               1/1       Running   0          3h
kube-system   kube-controller-manager-k8s-master      1/1       Running   0          3h
kube-system   kube-flannel-ds-2wclt                   1/1       Running   0          3h
kube-system   kube-proxy-mtcns                        1/1       Running   0          4h
kube-system   kube-scheduler-k8s-master               1/1       Running   0          3h
kube-system   kubernetes-dashboard-6699c65d5f-6k2w4   1/1       Running   0          3h
```
Use `get svc` to find the dashboard port:
```
[root@k8s-master01 kubernetes-dashboard-master]# kubectl -n kube-system get svc
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kube-dns                        ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP    5d
kubernetes-dashboard-external   NodePort    10.96.220.208   <none>        9090:30090/TCP   1h
```
You can then browse straight to VIP:30090.
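A quick reachability check from any node before opening a browser (a hedged sketch; the dashboard service above listens on plain HTTP):

```bash
# A 200 response means the NodePort service is routing to the dashboard pod.
curl -I http://192.168.1.100:30090/
```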
At this stage there will be plenty of errors. The first time I opened the dashboard it showed 11 of them; the site loads, but nothing is displayed. The first error reads:
configmaps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list configmaps in the namespace "default"
A quick search turned up a page that solves the problem:
https://blog.tekspace.io/kubernetes-dashboard-remote-access/
Create a new file and insert the following details.
[root@k8s-master01 ~]# vim kube-dashboard-access.yaml
```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```
Now apply the change to the cluster to grant the dashboard access.
```
[root@k8s-master01 ~]# kubectl create -f kube-dashboard-access.yaml
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
```
After that, the dashboard renders normally.
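To verify the binding actually grants the permission from the earlier error message, impersonation can be used (a hedged check, not from the original article):

```bash
# Should now print "yes" instead of the earlier forbidden error.
kubectl auth can-i list configmaps --namespace default \
  --as system:serviceaccount:kube-system:kubernetes-dashboard
```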
On k8s-master02 and k8s-master03, run the same initialization:

```bash
kubeadm init --config config.yaml   # the output is identical to master01's

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Right after initialization, `kubectl get node` already lists the new masters, but their STATUS is NotReady because the images are still being pulled; once the images are in place they flip to Ready. Keep an eye on each host's /var/log/messages for progress.
```
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES     AGE       VERSION
k8s-master01   Ready      master    1h        v1.10.2
k8s-master02   NotReady   master    2m        v1.10.2
k8s-master03   NotReady   master    58s       v1.10.2
```
```
[root@k8s-master01 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS                  RESTARTS   AGE
kube-system   coredns-7997f8864c-fgfph                1/1       Running                 0          1h
kube-system   coredns-7997f8864c-ng2p9                1/1       Running                 0          1h
kube-system   kube-apiserver-k8s-master01             1/1       Running                 0          1h
kube-system   kube-apiserver-k8s-master02             1/1       Running                 0          6m
kube-system   kube-apiserver-k8s-master03             1/1       Running                 0          4m
kube-system   kube-controller-manager-k8s-master01    1/1       Running                 0          1h
kube-system   kube-controller-manager-k8s-master02    1/1       Running                 0          6m
kube-system   kube-controller-manager-k8s-master03    1/1       Running                 0          4m
kube-system   kube-flannel-ds-6h4r8                   0/1       Init:ImagePullBackOff   0          11m
kube-system   kube-flannel-ds-sdww9                   0/1       Init:ImagePullBackOff   0          9m
kube-system   kube-flannel-ds-w8xfx                   1/1       Running                 0          1h
kube-system   kube-proxy-7nmgz                        1/1       Running                 0          9m
kube-system   kube-proxy-nzb5f                        1/1       Running                 0          11m
kube-system   kube-proxy-v8f25                        1/1       Running                 0          1h
kube-system   kube-scheduler-k8s-master01             1/1       Running                 0          1h
kube-system   kube-scheduler-k8s-master02             1/1       Running                 0          6m
kube-system   kube-scheduler-k8s-master03             1/1       Running                 0          4m
kube-system   kubernetes-dashboard-6699c65d5f-fr8jr   1/1       Running                 0          26m
```
Log excerpt (images still downloading):
```
May  2 05:36:07 k8s-master02 kubelet: W0502 05:36:07.991060    6096 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May  2 05:36:07 k8s-master02 kubelet: E0502 05:36:07.991160    6096 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May  2 05:36:12 k8s-master02 kubelet: W0502 05:36:12.992127    6096 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May  2 05:36:12 k8s-master02 kubelet: E0502 05:36:12.992242    6096 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May  2 05:36:17 k8s-master02 kubelet: I0502 05:36:17.773474    6096 kube_docker_client.go:345] Pulling image "quay.io/coreos/flannel:v0.10.0-amd64": "856cbd0b7b9c: Downloading ================================================>  9.506MB/9.821MB"
```
You can also watch the status with systemctl; everything is busy pulling images, and how fast that goes depends on your network.
```
[root@k8s-master02 ~]# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2018-05-02 05:26:59 EDT; 22min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 6096 (kubelet)
   Memory: 85.6M
   CGroup: /system.slice/kubelet.service
           └─6096 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --cgroup-driver=cgroupfs --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki --v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0

May 02 05:49:03 k8s-master02 kubelet[6096]: W0502 05:49:03.611294    6096 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 02 05:49:03 k8s-master02 kubelet[6096]: E0502 05:49:03.611405    6096 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 02 05:49:04 k8s-master02 kubelet[6096]: I0502 05:49:04.624645    6096 kube_docker_client.go:345] Pulling image "quay.io/coreos/flannel:v0.10.0-amd64": "8a8433d1d437: Downloading ==========================================>  1.294MB/1.533MB"
May 02 05:49:08 k8s-master02 kubelet[6096]: W0502 05:49:08.612206    6096 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 02 05:49:08 k8s-master02 kubelet[6096]: E0502 05:49:08.612300    6096 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 02 05:49:13 k8s-master02 kubelet[6096]: W0502 05:49:13.613434    6096 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 02 05:49:13 k8s-master02 kubelet[6096]: E0502 05:49:13.613531    6096 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 02 05:49:14 k8s-master02 kubelet[6096]: I0502 05:49:14.624208    6096 kube_docker_client.go:345] Pulling image "quay.io/coreos/flannel:v0.10.0-amd64": "8a8433d1d437: Downloading ===========================================>  1.327MB/1.533MB"
May 02 05:49:18 k8s-master02 kubelet[6096]: W0502 05:49:18.614517    6096 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 02 05:49:18 k8s-master02 kubelet[6096]: E0502 05:49:18.614610    6096 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
```
Once the images are in place:
```
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS    ROLES     AGE       VERSION
k8s-master01   Ready     master    17h       v1.10.2
k8s-master02   Ready     master    16h       v1.10.2
k8s-master03   Ready     master    16h       v1.10.2
```
```
[root@k8s-master01 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   coredns-7997f8864c-fgfph                1/1       Running   0          17h
kube-system   coredns-7997f8864c-ng2p9                1/1       Running   0          17h
kube-system   kube-apiserver-k8s-master01             1/1       Running   0          17h
kube-system   kube-apiserver-k8s-master02             1/1       Running   0          16h
kube-system   kube-apiserver-k8s-master03             1/1       Running   0          16h
kube-system   kube-controller-manager-k8s-master01    1/1       Running   0          17h
kube-system   kube-controller-manager-k8s-master02    1/1       Running   0          16h
kube-system   kube-controller-manager-k8s-master03    1/1       Running   0          16h
kube-system   kube-flannel-ds-6h4r8                   1/1       Running   0          16h
kube-system   kube-flannel-ds-fqk8g                   1/1       Running   0          15h
kube-system   kube-flannel-ds-sdww9                   1/1       Running   0          16h
kube-system   kube-flannel-ds-w8xfx                   1/1       Running   0          17h
kube-system   kube-proxy-7nmgz                        1/1       Running   0          16h
kube-system   kube-proxy-nzb5f                        1/1       Running   0          16h
kube-system   kube-proxy-v8f25                        1/1       Running   0          17h
kube-system   kube-proxy-xlh49                        1/1       Running   0          15h
kube-system   kube-scheduler-k8s-master01             1/1       Running   0          17h
kube-system   kube-scheduler-k8s-master02             1/1       Running   0          16h
kube-system   kube-scheduler-k8s-master03             1/1       Running   0          16h
kube-system   kubernetes-dashboard-6699c65d5f-fr8jr   1/1       Running   0          16h
```
On k8s-node01, run the join command that was printed when the cluster was first initialized:
```bash
kubeadm join 192.168.1.100:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:6ed0d577845d09281b7ff098369d9f88ea4bfc094305893f53fccac3bab01eca
```
Then wait for another round of images to load; watch the logs if in doubt.
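If the join command is no longer at hand, it can be reprinted from any master (a sketch; assumes this kubeadm generation supports the flag):

```bash
kubeadm token create --print-join-command
```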
When loading finishes, check the nodes:
```
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS    ROLES     AGE       VERSION
k8s-master01   Ready     master    17h       v1.10.2
k8s-master02   Ready     master    16h       v1.10.2
k8s-master03   Ready     master    16h       v1.10.2
k8s-node01     Ready     <none>    15h       v1.10.2
```
Download the latest version of kubernetes-heapster from GitHub and unpack it:
```bash
wget https://github.com/kubernetes/heapster/archive/master.zip
unzip master.zip
```
Heapster uses two directories: /heapster-master/deploy/kube-config/influxdb and /heapster-master/deploy/kube-config/rbac.
```bash
cd /heapster-master/deploy/kube-config
ls
# google  influxdb  rbac  standalone  standalone-test  standalone-with-apiserver
ls influxdb/
# grafana.yaml  heapster.yaml  influxdb.yaml
ls rbac/
# heapster-rbac.yaml
```
The painful part: pulling images. The images referenced in the upstream manifests are all hosted on Google:
```
image: k8s.gcr.io/heapster-amd64:v1.5.3
image: k8s.gcr.io/heapster-grafana-amd64:v4.4.3
image: k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
```
Replace them with Aliyun mirrors, and change the `image` field in all three files: grafana.yaml, heapster.yaml, and influxdb.yaml.
```bash
docker pull registry.cn-shenzhen.aliyuncs.com/rancher_cn/heapster-amd64:v1.5.1
docker pull registry.cn-hangzhou.aliyuncs.com/kube_containers/heapster_influxdb:v1.3.3
docker pull registry.cn-shenzhen.aliyuncs.com/rancher_cn/heapster-grafana-amd64:v4.4.3
```
After pointing the `image` field in grafana.yaml, heapster.yaml, and influxdb.yaml at the local images, start creating:
```bash
# Create the RBAC objects first
[root@k8s-master01 kube-config]# kubectl create -f rbac/.
```
Then create grafana.yaml, heapster.yaml, and influxdb.yaml:
```bash
[root@k8s-master01 kube-config]# kubectl create -f influxdb/.
```
Heapster is now up and the charts appear (collecting data takes a while, so be patient; as long as the related pods show no errors, everything is fine).
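Two quick checks that heapster is actually collecting (a hedged sketch; `kubectl top` is backed by heapster in this release):

```bash
kubectl get pods -n kube-system | grep -E 'heapster|influxdb|grafana'
kubectl top node   # per-node CPU/memory once data has been collected
```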
Shut down master01, then check whether the VIP has moved to master02 (keepalived fails the VIP over automatically):
```
[root@k8s-master02 ~]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a2:00:f4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.22/24 brd 192.168.1.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.1.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea2:f4/64 scope link
       valid_lft forever preferred_lft forever
```
Check the cluster state:
```
[root@k8s-master03 ~]# kubectl get node
NAME           STATUS     ROLES     AGE       VERSION
k8s-master01   NotReady   master    19h       v1.10.2
k8s-master02   Ready      master    18h       v1.10.2
k8s-master03   Ready      master    18h       v1.10.2
k8s-node01     Ready      <none>    17h       v1.10.2
```
Create a simple pod on master03:
```bash
cat >> myapp-pod.yml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
EOF
```
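To finish the failover test, create the pod and confirm it is scheduled even with master01 still down (a short sketch of the obvious next step):

```bash
kubectl create -f myapp-pod.yml
kubectl get pod myapp-pod -o wide   # shows which node the pod landed on
```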