Source: https://blog.csdn.net/weixin_48502062/article/details/144549916

Key points of this section:

  • Prometheus scrape analysis

Prometheus scrape analysis

The job serviceMonitor/monitoring/kube-state-metrics/0 scrapes the resource metrics exposed by kube-state-metrics (ksm)

  • Query with the labels shown on the target page: {job="kube-state-metrics",container="kube-rbac-proxy-main"}
  • The full scrape config YAML:
- job_name: serviceMonitor/monitoring/kube-state-metrics/0
  honor_labels: true
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 30s
  metrics_path: /metrics
  scheme: https
  authorization:
    type: Bearer
    credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    insecure_skip_verify: true
  follow_redirects: true
  relabel_configs:
  - source_labels: [job]
    separator: ;
    regex: (.*)
    target_label: __tmp_prometheus_job_name
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_component]
    separator: ;
    regex: exporter
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
    separator: ;
    regex: kube-state-metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_part_of]
    separator: ;
    regex: kube-prometheus
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: https-main
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Node;(.*)
    target_label: node
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Pod;(.*)
    target_label: pod
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_container_name]
    separator: ;
    regex: (.*)
    target_label: container
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
    separator: ;
    regex: (.+)
    target_label: job
    replacement: ${1}
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: https-main
    action: replace
  - separator: ;
    regex: (pod|service|endpoint|namespace)
    replacement: $1
    action: labeldrop
  - source_labels: [__address__]
    separator: ;
    regex: (.*)
    modulus: 1
    target_label: __tmp_hash
    replacement: $1
    action: hashmod
  - source_labels: [__tmp_hash]
    separator: ;
    regex: "0"
    replacement: $1
    action: keep
  kubernetes_sd_configs:
  - role: endpoints
    follow_redirects: true
    namespaces:
      names:
      - monitoring

First, Kubernetes endpoints service discovery is used

  • with the namespace fixed to monitoring:

  kubernetes_sd_configs:
  - role: endpoints
    follow_redirects: true
    namespaces:
      names:
      - monitoring

Endpoints in the monitoring namespace:

[root@prome-master01 kube-prometheus]# kubectl get endpoints -n monitoring
NAME                    ENDPOINTS                                                           AGE
alertmanager-main       10.100.71.17:9093,10.100.71.50:9093,10.100.71.60:9093               20m
alertmanager-operated   10.100.71.17:9094,10.100.71.50:9094,10.100.71.60:9094 + 6 more...   20m
blackbox-exporter       10.100.71.48:9115,10.100.71.48:19115                                20m
grafana                 10.100.71.51:3000                                                   20m
kube-state-metrics      10.100.71.52:8443,10.100.71.52:9443                                 20m
node-exporter           192.168.3.200:9100,192.168.3.201:9100                               20m
prometheus-adapter      10.100.71.53:6443,10.100.71.54:6443                                 20m
prometheus-k8s          10.100.71.18:9090,10.100.71.58:9090                                 20m
prometheus-operated     10.100.71.18:9090,10.100.71.58:9090                                 20m
prometheus-operator     10.100.71.42:8443                                                   137m

The following four relabel rules filter for the kube-state-metrics endpoint

  • The YAML:

  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_component]
    separator: ;
    regex: exporter
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
    separator: ;
    regex: kube-state-metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_part_of]
    separator: ;
    regex: kube-prometheus
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: https-main
    replacement: $1
    action: keep

  • kubectl describe endpoints kube-state-metrics -n monitoring
  • The labels on this endpoint match the relabel rules above exactly, i.e. the rules filter for the kube-state-metrics endpoint in the monitoring namespace.
  • This job also only scrapes the port named https-main, i.e. the metrics on port 8443.
kubectl describe endpoints kube-state-metrics  -n monitoring 
Name:         kube-state-metrics
Namespace:    monitoring
Labels:       app.kubernetes.io/component=exporter
              app.kubernetes.io/name=kube-state-metrics
              app.kubernetes.io/part-of=kube-prometheus
              app.kubernetes.io/version=2.0.0
              service.kubernetes.io/headless=
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2021-09-06T04:36:42Z
Subsets:
  Addresses:          10.100.85.238
  NotReadyAddresses:  <none>
  Ports:
    Name        Port  Protocol
    ----        ----  --------
    https-main  8443  TCP
    https-self  9443  TCP
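The effect of a keep relabel rule can be sketched as follows: the values of the source labels are joined with the separator and the target survives only if the result fully matches the regex. This is a simplified simulation, not Prometheus's actual implementation; the label values are illustrative:

```python
import re

def keep(target, source_labels, regex, separator=";"):
    """Keep the target only if the joined source label values fully match regex."""
    value = separator.join(target.get(l, "") for l in source_labels)
    return re.fullmatch(regex, value) is not None

# Labels as discovered from the kube-state-metrics endpoint (illustrative)
target = {
    "__meta_kubernetes_service_label_app_kubernetes_io_component": "exporter",
    "__meta_kubernetes_service_label_app_kubernetes_io_name": "kube-state-metrics",
    "__meta_kubernetes_endpoint_port_name": "https-main",
}

assert keep(target, ["__meta_kubernetes_service_label_app_kubernetes_io_name"],
            "kube-state-metrics")
# The https-self port (9443) is filtered out by this job:
assert not keep({"__meta_kubernetes_endpoint_port_name": "https-self"},
                ["__meta_kubernetes_endpoint_port_name"], "https-main")
```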
Port 8443 on kube-state-metrics fronts the main ksm service
  • YAML location: manifests\kube-state-metrics-deployment.yaml
  • The ksm service itself listens on 127.0.0.1:8081:

      containers:
      - args:
        - --host=127.0.0.1
        - --port=8081
        - --telemetry-host=127.0.0.1
        - --telemetry-port=8082
        image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.0.0
        name: kube-state-metrics
        resources:
          limits:
            cpu: 100m
            memory: 250Mi
          requests:
            cpu: 10m
            memory: 190Mi
        securityContext:
          runAsUser: 65534

  • The kube-rbac-proxy sidecar listens on 8443 and forwards requests to the upstream at http://127.0.0.1:8081:

      - args:
        - --logtostderr
        - --secure-listen-address=:8443
        - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - --upstream=http://127.0.0.1:8081/
        image: quay.io/brancz/kube-rbac-proxy:v0.8.0
        name: kube-rbac-proxy-main
        ports:
        - containerPort: 8443
          name: https-main
        resources:
          limits:
            cpu: 40m
            memory: 40Mi
          requests:
            cpu: 20m
            memory: 20Mi
        securityContext:
          runAsGroup: 65532
          runAsNonRoot: true
          runAsUser: 65532

  • The purpose of kube-rbac-proxy is to protect the metrics ksm exposes.
  • In a scenario where an attacker gains full control of a Pod, they could otherwise discover a great deal of information about the workloads and their current load.
  • So a proxy layer was added: the metrics can only be reached through kube-rbac-proxy, which authenticates and authorizes each request.
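A scrape through kube-rbac-proxy therefore authenticates with the pod's service account token (see the authorization block in the scrape config above). A minimal sketch of how such a request is assembled; the token path is the standard in-cluster location, and the target address and token value are illustrative:

```python
import urllib.request

# Standard in-cluster service account token location (read by Prometheus itself)
TOKEN_FILE = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def build_scrape_request(target, token, metrics_path="/metrics"):
    """Build an HTTPS request carrying the Bearer token kube-rbac-proxy checks."""
    req = urllib.request.Request(f"https://{target}{metrics_path}")
    req.add_header("Authorization", f"Bearer {token}")
    return req

# In-cluster the token would come from TOKEN_FILE; a dummy value here:
req = build_scrape_request("10.100.71.52:8443", "dummy-token")
assert req.full_url == "https://10.100.71.52:8443/metrics"
assert req.get_header("Authorization") == "Bearer dummy-token"
```

Without a valid token, kube-rbac-proxy rejects the request, which is exactly why the direct, unauthenticated curl attempts later in this article fail over HTTPS.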

The pod, service, endpoint and namespace labels are also dropped

  • Only three labels remain on the final discovered targets:
    • container
    • instance
    • job
  • The YAML config:

  - separator: ;
    regex: (pod|service|endpoint|namespace)
    replacement: $1
    action: labeldrop
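labeldrop matches the regex against label names (not values) and removes every label that matches, which is how only container, instance and job survive. A simplified simulation:

```python
import re

def labeldrop(labels, regex):
    """Remove every label whose NAME fully matches the regex."""
    return {name: value for name, value in labels.items()
            if re.fullmatch(regex, name) is None}

labels = {"container": "kube-rbac-proxy-main", "instance": "10.100.71.52:8443",
          "job": "kube-state-metrics", "pod": "kube-state-metrics-abc",
          "service": "kube-state-metrics", "namespace": "monitoring",
          "endpoint": "https-main"}
kept = labeldrop(labels, "(pod|service|endpoint|namespace)")
assert sorted(kept) == ["container", "instance", "job"]
```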

A hashmod is also applied, presumably in preparation for sharding/scaling

  - source_labels: [__address__]
    separator: ;
    regex: (.*)
    modulus: 1
    target_label: __tmp_hash
    replacement: $1
    action: hashmod
  - source_labels: [__tmp_hash]
    separator: ;
    regex: "0"
    replacement: $1
    action: keep
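hashmod stores hash(value) % modulus in the target label; with modulus: 1 every target hashes to 0, so the subsequent keep on "0" matches everything. Raising the modulus to N and running N Prometheus shards, each keeping a different remainder, would split the targets. A sketch; the detail that Prometheus uses the low 8 bytes of an MD5 digest is an assumption about its implementation:

```python
import hashlib

def hashmod(value, modulus):
    """Hash of the label value modulo `modulus` (assumed to mirror Prometheus:
    low 8 bytes of the MD5 digest interpreted as a big-endian integer)."""
    digest = hashlib.md5(value.encode()).digest()
    return int.from_bytes(digest[8:], "big") % modulus

# With modulus 1, every target lands in shard 0, so the keep regex "0" keeps all:
assert hashmod("10.100.71.52:8443", 1) == 0

# With 2 shards, each __address__ deterministically maps to shard 0 or 1:
assert hashmod("10.100.71.52:8443", 2) in (0, 1)
```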

Changing the ksm replica count

  • vim manifests/kube-state-metrics-deployment.yaml
  • change replicas from 1 to 2
  • On the targets page, the endpoint count of the two ksm-related jobs changes to 2
  • Querying the data now shows two scrape sources, distinguished by the instance label

serviceMonitor/monitoring/kube-state-metrics/1 scrapes ksm's own (telemetry) metrics

  • Metric query:
{endpoint="https-self", job="kube-state-metrics", namespace="monitoring"}
  • The configuration is identical to job 0,
  • except the port name changes from https-main to https-self, i.e. port 9443:

  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: https-self
    replacement: $1
    action: keep

This corresponds to ksm container port 8082

  • Location: manifests\kube-state-metrics-deployment.yaml
  • ksm's --telemetry-port=8082 exposes its own metrics on port 8082
  • Query: {job="kube-state-metrics",container="kube-rbac-proxy-self"}

      containers:
      - args:
        - --host=127.0.0.1
        - --port=8081
        - --telemetry-host=127.0.0.1
        - --telemetry-port=8082
        image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.0.0
        name: kube-state-metrics
        resources:
          limits:
            cpu: 100m
            memory: 250Mi
          requests:
            cpu: 10m
            memory: 190Mi
        securityContext:
          runAsUser: 65534

  • kube-rbac-proxy-self listens on 9443 and proxies traffic to port 8082:

      - args:
        - --logtostderr
        - --secure-listen-address=:9443
        - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - --upstream=http://127.0.0.1:8082/
        image: quay.io/brancz/kube-rbac-proxy:v0.8.0
        name: kube-rbac-proxy-self
        ports:
        - containerPort: 9443
          name: https-self
        resources:
          limits:
            cpu: 20m
            memory: 40Mi
          requests:
            cpu: 10m
            memory: 20Mi
        securityContext:
          runAsGroup: 65532
          runAsNonRoot: true
          runAsUser: 65532

serviceMonitor/monitoring/node-exporter/0

  • Uses endpoints Kubernetes SD with namespace monitoring:

  kubernetes_sd_configs:
  - role: endpoints
    follow_redirects: true
    namespaces:
      names:
      - monitoring

  • Filter for the node-exporter endpoints:

  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
    separator: ;
    regex: node-exporter
    replacement: $1
    action: keep

  • kubectl describe endpoints node-exporter -n monitoring
[root@k8s-master01 kube-prometheus]# kubectl describe endpoints node-exporter   -n monitoring              
Name:         node-exporter
Namespace:    monitoring
Labels:       app.kubernetes.io/component=exporter
              app.kubernetes.io/name=node-exporter
              app.kubernetes.io/part-of=kube-prometheus
              app.kubernetes.io/version=1.1.2
              service.kubernetes.io/headless=
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2021-09-06T04:36:40Z
Subsets:
  Addresses:          172.20.70.205,172.20.70.215
  NotReadyAddresses:  <none>
  Ports:
    Name   Port  Protocol
    ----   ----  --------
    https  9100  TCP
Events:  <none>
  • Accessing node-exporter directly over https fails,
  • but it can be reached over plain http at 127.0.0.1:9100:

[root@prome-master01 kube-prometheus]# curl localhost:9100
<html><head><title>Node Exporter</title></head><body><h1>Node Exporter</h1><p><a href="/metrics">Metrics</a></p></body></html>

kube-rbac-proxy fronts node-exporter on port 9100

  • manifests\node-exporter-daemonset.yaml
  • node-exporter is changed to listen on 127.0.0.1:9100, so each node can only reach its own exporter locally
  • external access must go through kube-rbac-proxy
  • The YAML config:

      - args:
        - --web.listen-address=127.0.0.1:9100
        - --path.sysfs=/host/sys
        - --path.rootfs=/host/root
        - --no-collector.wifi
        - --no-collector.hwmon
        - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
        - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$
        - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$
        image: quay.io/prometheus/node-exporter:v1.1.2
        name: node-exporter
        resources:
          limits:
            cpu: 250m
            memory: 180Mi
          requests:
            cpu: 102m
            memory: 180Mi
        volumeMounts:
        - mountPath: /host/sys
          mountPropagation: HostToContainer
          name: sys
          readOnly: true
        - mountPath: /host/root
          mountPropagation: HostToContainer
          name: root
          readOnly: true
      - args:
        - --logtostderr
        - --secure-listen-address=[$(IP)]:9100
        - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - --upstream=http://127.0.0.1:9100/
        env:
        - name: IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: quay.io/brancz/kube-rbac-proxy:v0.8.0
        name: kube-rbac-proxy
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: https
        resources:
          limits:
            cpu: 20m
            memory: 40Mi
          requests:
            cpu: 10m
            memory: 20Mi
        securityContext:
          runAsGroup: 65532
          runAsNonRoot: true
          runAsUser: 65532
      hostNetwork: true
      hostPID: true
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccountName: node-exporter
      tolerations:
      - operator: Exists
      volumes:
      - hostPath:
          path: /sys
        name: sys
      - hostPath:
          path: /
        name: root

serviceMonitor/monitoring/kube-apiserver/0

  • Uses endpoints Kubernetes SD with namespace default
  • because the cluster creates the kubernetes Service and Endpoints in the default namespace by default
  • [root@prome-master01 kube-prometheus]# kubectl get endpoints
    NAME                ENDPOINTS            AGE
    grafana-node-port   10.100.71.41:3000    20d
    kubernetes          192.168.3.200:6443   20d
    [root@prome-master01 kube-prometheus]# kubectl get svc
    NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
    grafana-node-port   NodePort    10.96.132.2   <none>        80:30000/TCP   20d
    kubernetes          ClusterIP   10.96.0.1     <none>        443/TCP        20d
  kubernetes_sd_configs:
  - role: endpoints
    follow_redirects: true
    namespaces:
      names:
      - default

Filtering the endpoint

  - source_labels: [__meta_kubernetes_service_label_component]
    separator: ;
    regex: apiserver
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_provider]
    separator: ;
    regex: kubernetes
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: https
    replacement: $1
    action: keep

metric_relabel_configs is also used to drop a large number of unneeded metrics

  metric_relabel_configs:
  - source_labels: [__name__]
    separator: ;
    regex: apiserver_admission_controller_admission_latencies_seconds_.*
    replacement: $1
    action: drop
  - source_labels: [__name__]
    separator: ;
    regex: apiserver_admission_step_admission_latencies_seconds_.*
    replacement: $1
    action: drop
  - source_labels: [__name__, le]
    separator: ;
    regex: apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)
    replacement: $1
    action: drop
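The drop action is keep in reverse: the joined source label values are matched against the regex and matching series are discarded. For the last rule above, __name__ and le are joined with ";", so only the listed fine-grained histogram buckets are dropped. A simplified simulation:

```python
import re

DROP_REGEX = (r"apiserver_request_duration_seconds_bucket;"
              r"(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|"
              r"2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)")

def dropped(series, regex=DROP_REGEX):
    """Drop the series if __name__;le fully matches the regex."""
    value = ";".join([series.get("__name__", ""), series.get("le", "")])
    return re.fullmatch(regex, value) is not None

# A bucket in the list is dropped to cut cardinality...
assert dropped({"__name__": "apiserver_request_duration_seconds_bucket", "le": "0.15"})
# ...while buckets not in the list survive:
assert not dropped({"__name__": "apiserver_request_duration_seconds_bucket", "le": "0.1"})
```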

Scraping from the kubelet: serviceMonitor/monitoring/kubelet

Common configuration

  • endpoints in the kube-system namespace:

  kubernetes_sd_configs:
  - role: endpoints
    follow_redirects: true
    namespaces:
      names:
      - kube-system

  • Filter for the kubelet endpoints
  • kubectl describe endpoints kubelet -n kube-system
[root@k8s-master01 kube-prometheus]# kubectl describe endpoints kubelet -n kube-system
Name:         kubelet
Namespace:    kube-system
Labels:       app.kubernetes.io/managed-by=prometheus-operator
              app.kubernetes.io/name=kubelet
              k8s-app=kubelet
Annotations:  <none>
Subsets:
  Addresses:          172.20.70.205,172.20.70.215
  NotReadyAddresses:  <none>
  Ports:
    Name           Port   Protocol
    ----           ----   --------
    https-metrics  10250  TCP
    http-metrics   10255  TCP
    cadvisor       4194   TCP
Events:  <none>
  • Filter for port_name https-metrics, i.e. port 10250:

  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: https-metrics
    replacement: $1
    action: keep

serviceMonitor/monitoring/kubelet/0 scrapes the kubelet's own metrics

  • Corresponding URL: https://172.20.70.205:10250/metrics
metrics_path: /metrics

serviceMonitor/monitoring/kubelet/1 scrapes the kubelet's built-in cAdvisor metrics, i.e. container metrics

  • Corresponding URL: https://172.20.70.205:10250/metrics/cadvisor
metrics_path: /metrics/cadvisor

serviceMonitor/monitoring/kubelet/2 scrapes the results of the kubelet's container Liveness/Readiness probes

  • Corresponding URL: https://172.20.70.205:10250/metrics/probes
metrics_path: /metrics/probes
  • Liveness/Readiness probe metrics:

prober_probe_total{container="prometheus", endpoint="https-metrics", instance="172.20.70.215:10250", job="kubelet", metrics_path="/metrics/probes", namespace="kube-system", node="k8s-node01", pod="prometheus-0", pod_uid="e27c9fe7-9d82-4228-86fb-b9c920611c15", probe_type="Liveness", result="successful", service="kubelet"}
148299
prober_probe_total{container="prometheus", endpoint="https-metrics", instance="172.20.70.215:10250", job="kubelet", metrics_path="/metrics/probes", namespace="kube-system", node="k8s-node01", pod="prometheus-0", pod_uid="e27c9fe7-9d82-4228-86fb-b9c920611c15", probe_type="Readiness", result="successful", service="kubelet"}
148300
prober_probe_total{container="prometheus", endpoint="https-metrics", instance="172.20.70.215:10250", job="kubelet", metrics_path="/metrics/probes", namespace="monitoring", node="k8s-node01", pod="prometheus-k8s-0", pod_uid="8898c8f2-1ea7-412f-8a25-ce98a8ca47c2", probe_type="Readiness", result="successful", service="kubelet"}
3084
prober_probe_total{container="prometheus", endpoint="https-metrics", instance="172.20.70.215:10250", job="kubelet", metrics_path="/metrics/probes", namespace="monitoring", node="k8s-node01", pod="prometheus-k8s-1", pod_uid="937e07bc-5cea-4e3d-83ac-a2e68e072340", probe_type="Readiness", result="successful", service="kubelet"}
3083

serviceMonitor/monitoring/prometheus-adapter/0 scrapes the prometheus-adapter metrics

  • Filter for prometheus-adapter in the monitoring namespace:

  kubernetes_sd_configs:
  - role: endpoints
    follow_redirects: true
    namespaces:
      names:
      - monitoring

What prometheus-adapter does

  • It queries metrics from Prometheus according to its configuration, providing the basis for user-defined HPA.

  • kube-aggregator lets developers write their own service and register it with the Kubernetes APIServer, so its API can be used just like the APIs the native APIServer provides: the service runs inside the cluster, and the Kubernetes aggregator forwards requests to it by Service name. This aggregation layer brings several benefits:

    • It increases API extensibility: developers can write their own API services to expose the APIs they want.
    • It enriches the API surface: the core Kubernetes team has turned down many new API proposals; by letting developers expose their APIs as separate services, lengthy community review is no longer required.
    • It enables staged, experimental APIs: a new API can be developed in a separate aggregated service and, once stable, easily merged back into the APIServer.
    • It ensures new APIs follow Kubernetes conventions: without this mechanism, community members might be forced to roll their own solutions, which could easily diverge from community conventions.
  • Besides autoscaling on CPU and memory, we can also autoscale on custom monitoring metrics.

  • This is where Prometheus Adapter comes in: Prometheus monitors both application load and the cluster's own metrics,

  • and Prometheus Adapter lets us use the metrics Prometheus collects to drive scaling policies.

  • These metrics are exposed through the APIServer, and HPA resource objects can consume them directly.
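For example, once the adapter serves the resource metrics API, an HPA can target average CPU utilization directly. This is a hedged sketch under the autoscaling/v2beta2 API of that era; the Deployment name my-app, replica bounds and threshold are illustrative:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app            # illustrative target
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU
```

The utilization numbers the HPA controller reads here are computed by prometheus-adapter from the containerQuery/nodeQuery rules shown in the ConfigMap below.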

The corresponding config file is manifests\prometheus-adapter-configMap.yaml

apiVersion: v1
data:
  config.yaml: |-
    "resourceRules":
      "cpu":
        "containerLabel": "container"
        "containerQuery": "sum(irate(container_cpu_usage_seconds_total{<<.LabelMatchers>>,container!=\"\",pod!=\"\"}[5m])) by (<<.GroupBy>>)"
        "nodeQuery": "sum(1 - irate(node_cpu_seconds_total{mode=\"idle\"}[5m]) * on(namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:{<<.LabelMatchers>>}) by (<<.GroupBy>>) or sum (1- irate(windows_cpu_time_total{mode=\"idle\", job=\"windows-exporter\",<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>)"
        "resources":
          "overrides":
            "namespace":
              "resource": "namespace"
            "node":
              "resource": "node"
            "pod":
              "resource": "pod"
      "memory":
        "containerLabel": "container"
        "containerQuery": "sum(container_memory_working_set_bytes{<<.LabelMatchers>>,container!=\"\",pod!=\"\"}) by (<<.GroupBy>>)"
        "nodeQuery": "sum(node_memory_MemTotal_bytes{job=\"node-exporter\",<<.LabelMatchers>>} - node_memory_MemAvailable_bytes{job=\"node-exporter\",<<.LabelMatchers>>}) by (<<.GroupBy>>) or sum(windows_cs_physical_memory_bytes{job=\"windows-exporter\",<<.LabelMatchers>>} - windows_memory_available_bytes{job=\"windows-exporter\",<<.LabelMatchers>>}) by (<<.GroupBy>>)"
        "resources":
          "overrides":
            "instance":
              "resource": "node"
            "namespace":
              "resource": "namespace"
            "pod":
              "resource": "pod"
      "window": "5m"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: metrics-adapter
    app.kubernetes.io/name: prometheus-adapter
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.8.4
  name: adapter-config
  namespace: monitoring

Remaining self-metrics

  • serviceMonitor/monitoring/prometheus-k8s/0: metrics of the two prometheus scrapers
  • serviceMonitor/monitoring/prometheus-operator/0: metrics of prometheus-operator
  • serviceMonitor/monitoring/alertmanager/0: metrics of the three alertmanagers
  • serviceMonitor/monitoring/grafana/0: metrics of the single grafana

Key points of this section:

  • Prometheus scrape analysis
