Extracting Log Parameters with Python to Generate a Chart

summer · 2021-05-12

Goal: parse the mon log of a Ceph cluster, take the timestamps as the x-axis and the object migration (recovery) speed as the y-axis, and draw a chart with Python's matplotlib.

# Log format

```
    2021-04-22 14:00:20.687685 7f97234a6700  0 log_channel(cluster) log [INF] : pgmap v31413: 3578 pgs: 1795 active+clean, 1518 active+recovery_wait+degraded, 265 active+recovering+degraded; 13709 GB data, 37462 GB used, 216 TB / 253 TB avail; 25963 B/s rd, 101 MB/s wr, 40 op/s; 3635096/10530407 objects degraded (34.520%); 231 MB/s, 57 objects/s recovering
    2021-04-22 14:00:21.404638 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.1:0/1226651' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
    2021-04-22 14:00:22.064304 7f97234a6700  0 log_channel(cluster) log [INF] : pgmap v31414: 3578 pgs: 1795 active+clean, 1518 active+recovery_wait+degraded, 265 active+recovering+degraded; 13710 GB data, 37465 GB used, 216 TB / 253 TB avail; 202 MB/s wr, 66 op/s; 3634943/10530710 objects degraded (34.518%); 491 MB/s, 122 objects/s recovering
    2021-04-22 14:00:22.899451 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.1:0/1226758' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
    2021-04-22 14:00:23.117601 7f97234a6700  0 log_channel(cluster) log [INF] : pgmap v31415: 3578 pgs: 1795 active+clean, 1518 active+recovery_wait+degraded, 265 active+recovering+degraded; 13710 GB data, 37466 GB used, 216 TB / 253 TB avail; 278 MB/s wr, 89 op/s; 3634877/10530920 objects degraded (34.516%); 656 MB/s, 164 objects/s recovering
    2021-04-22 14:00:24.168534 7f97234a6700  0 log_channel(cluster) log [INF] : pgmap v31416: 3578 pgs: 1795 active+clean, 1518 active+recovery_wait+degraded, 265 active+recovering+degraded; 13710 GB data, 37467 GB used, 216 TB / 253 TB avail; 46501 B/s rd, 156 MB/s wr, 54 op/s; 3634844/10530971 objects degraded (34.516%); 351 MB/s, 87 objects/s recovering
    2021-04-22 14:00:24.310858 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.1:0/1226862' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
    2021-04-22 14:00:25.182928 7f97234a6700  0 log_channel(cluster) log [INF] : pgmap v31417: 3578 pgs: 1796 active+clean, 1518 active+recovery_wait+degraded, 264 active+recovering+degraded; 13710 GB data, 37467 GB used, 216 TB / 253 TB avail; 50826 B/s rd, 73402 kB/s wr, 28 op/s; 3633653/10531025 objects degraded (34.504%); 272 MB/s, 68 objects/s recovering
    2021-04-22 14:00:25.706503 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.1:0/1227016' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
    2021-04-22 14:00:26.211983 7f97234a6700  0 log_channel(cluster) log [INF] : pgmap v31418: 3578 pgs: 1796 active+clean, 1518 active+recovery_wait+degraded, 264 active+recovering+degraded; 13710 GB data, 37468 GB used, 216 TB / 253 TB avail; 95139 kB/s wr, 30 op/s; 3633573/10531100 objects degraded (34.503%); 381 MB/s, 95 objects/s recovering
    2021-04-22 14:00:27.264925 7f97234a6700  0 log_channel(cluster) log [INF] : pgmap v31419: 3578 pgs: 1796 active+clean, 1518 active+recovery_wait+degraded, 264 active+recovering+degraded; 13711 GB data, 37471 GB used, 216 TB / 253 TB avail; 326 MB/s wr, 103 op/s; 3633320/10531484 objects degraded (34.500%); 947 MB/s, 236 objects/s recovering
    2021-04-22 14:00:27.524231 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.1:0/1227171' entity='client.admin' cmd=[{"prefix": "osd perf"}]: dispatch
    2021-04-22 14:00:27.635542 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.1:0/1227155' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
    2021-04-22 14:00:27.957661 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.2:0/4594454' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
    2021-04-22 14:00:28.162358 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.2:0/4594460' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
    2021-04-22 14:00:28.292117 7f97234a6700  0 log_channel(cluster) log [INF] : pgmap v31420: 3578 pgs: 1797 active+clean, 1518 active+recovery_wait+degraded, 263 active+recovering+degraded; 13711 GB data, 37471 GB used, 216 TB / 253 TB avail; 320 MB/s wr, 101 op/s; 3632295/10531565 objects degraded (34.490%); 803 MB/s, 200 objects/s recovering
    2021-04-22 14:00:29.490974 7f97234a6700  0 log_channel(cluster) log [INF] : pgmap v31421: 3578 pgs: 1797 active+clean, 1518 active+recovery_wait+degraded, 263 active+recovering+degraded; 13711 GB data, 37472 GB used, 216 TB / 253 TB avail; 58531 B/s rd, 107 MB/s wr, 50 op/s; 3632272/10531658 objects degraded (34.489%); 165 MB/s, 41 objects/s recovering
    2021-04-22 14:00:30.560615 7f97234a6700  0 log_channel(cluster) log [INF] : pgmap v31422: 3578 pgs: 1797 active+clean, 1517 active+recovery_wait+degraded, 264 active+recovering+degraded; 13711 GB data, 37472 GB used, 216 TB / 253 TB avail; 57007 B/s rd, 86775 kB/s wr, 42 op/s; 3632246/10531712 objects degraded (34.489%); 166 MB/s, 41 objects/s recovering
    2021-04-22 14:00:30.935797 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.1:0/1227362' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
    2021-04-22 14:00:31.620008 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.1:0/1227476' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
    2021-04-22 14:00:31.627203 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.1:0/1227460' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
    2021-04-22 14:00:31.671392 7f97234a6700  0 log_channel(cluster) log [INF] : pgmap v31423: 3578 pgs: 1799 active+clean, 1517 active+recovery_wait+degraded, 262 active+recovering+degraded; 13711 GB data, 37474 GB used, 216 TB / 253 TB avail; 138 MB/s wr, 42 op/s; 3629994/10531892 objects degraded (34.467%); 337 MB/s, 84 objects/s recovering
    2021-04-22 14:00:32.125331 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.3:0/4429248' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
    2021-04-22 14:00:32.229556 7f971d7fe700  0 log_channel(audit) log [INF] : from='client.? 10.10.10.1:0/1227374' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
    
```
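Only the cluster pgmap lines carry the data we need (the timestamp plus the "N objects/s recovering" figure); the audit lines can be ignored. As a rough aside, the same two fields can also be pulled out with a regular expression instead of the fixed-position slicing used in the script below; the pattern and the parse_pgmap_line helper here are my own illustration, not part of the original post.

```python
import re

# Illustrative only: extract the HH:MM:SS timestamp and the
# "N objects/s recovering" count from one pgmap log line.
LINE_RE = re.compile(
    r'^\d{4}-\d{2}-\d{2} (\d{2}:\d{2}:\d{2}).*?(\d+) objects/s recovering'
)


def parse_pgmap_line(line):
    m = LINE_RE.search(line)
    return (m.group(1), float(m.group(2))) if m else None


sample = ("2021-04-22 14:00:20.687685 7f97234a6700  0 log_channel(cluster) "
          "log [INF] : pgmap v31413: ... 231 MB/s, 57 objects/s recovering")
print(parse_pgmap_line(sample))  # ('14:00:20', 57.0)
```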

# Python implementation

```python
import matplotlib.pyplot as plt
from matplotlib.pyplot import MultipleLocator


def get_number():
    x1 = []  # HH:MM:SS timestamps taken from each pgmap line
    y1 = []  # recovery speed (objects/s) taken from the same line
    syslog = "C:/Users/summer/Desktop/ceph-mon.log"
    with open(syslog, 'r') as f:
        for line in f:
            try:
                # Skip audit lines (no ';') and any line containing 'kB/s',
                # so the MB/s-based slicing below stays valid.
                if ';' not in line or 'kB/s' in line:
                    continue
                info = line.split(';')
                # Slice out the HH:MM:SS part of the timestamp,
                # e.g. "2021-04-22 14:00:20.687685" -> "14:00:20".
                x1.append(info[0][11:19])
                # The recovery figure follows the last "MB/s, ",
                # e.g. "231 MB/s, 57 objects/s recovering" -> 57.0.
                y1.append(float(line[line.rfind("MB/s"):][5:9]))
                # print(x1 + y1)
            except ValueError:
                print('error parsing line:', line.strip())

    plt.figure(figsize=(21, 7))
    plt.plot(x1, y1, 'r')
    plt.xticks(rotation=90, fontsize=11)
    # The x values are strings, so the axis is categorical:
    # show only every 50th tick label to keep it readable.
    x_major_locator = MultipleLocator(50)
    ax = plt.gca()
    ax.xaxis.set_major_locator(x_major_locator)
    # plt.tight_layout()
    plt.show()


if __name__ == '__main__':
    get_number()
```
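As an alternative to thinning string ticks with MultipleLocator(50), the timestamps can be parsed into datetime objects and matplotlib's date handling left to choose and format the ticks. This is only a sketch of that variant, assuming x1 and y1 have been filled as in the script above; plot_recovery is my own name, not from the original post.

```python
from datetime import datetime

import matplotlib.dates as mdates
import matplotlib.pyplot as plt


def plot_recovery(x1, y1):
    # Turn the "HH:MM:SS" strings into datetime objects so matplotlib
    # treats the x-axis as time rather than as categories.
    times = [datetime.strptime(t, "%H:%M:%S") for t in x1]
    fig, ax = plt.subplots(figsize=(21, 7))
    ax.plot(times, y1, 'r')
    ax.set_xlabel("time")
    ax.set_ylabel("objects/s recovering")
    ax.xaxis.set_major_locator(mdates.AutoDateLocator())
    ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M:%S"))
    fig.autofmt_xdate()  # rotate the time labels so they stay readable
    plt.show()
```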
    

# The generated chart
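The original post shows the resulting matplotlib chart at this point. One small, optional tweak (not in the original script): to write the chart to an image file for embedding rather than opening an interactive window, the final plt.show() in get_number() can be replaced with plt.savefig, for example:

```python
# In place of plt.show() at the end of get_number():
# write the figure to disk (the file name is just an example).
plt.savefig("ceph_recovery_speed.png", dpi=150, bbox_inches="tight")
```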
