lion


Activity

lion published an article · Apr 10

Abramtek (艾特铭客): "Because of the E8, in love with music"

Brand background

Most readers will have heard of the Abramtek (艾特铭客) headphone brand. Its roots in the acoustics field go back to 1985, giving it some 35 years of product R&D experience and brand recognition in the industry. Since the company was formally founded in 2004, it has positioned itself around creating outstanding listening experiences, continuous innovation, and a focus on the essence of sound, and over the years it has produced a number of excellent headphone products.

The best-known example is the 金刚 series, which resolved the three classic weaknesses of portable speakers — low volume, weak bass, and distortion — and yielded a patent called the "suspended low-frequency resonance system". The series also pioneered the use of hardened steel in audio products.

Today we are reviewing another Abramtek product: the E8 true wireless Bluetooth earbuds.

Unboxing

The box itself is very neat: squarish, compact, and refined. The outer packaging is entirely black — cool and mysterious, with a premium feel — and nothing but an Abramtek logo is printed in the center. The overall impression is that of a gift box that invites you to open it straight away.

The back of the box carries a product information sticker, and the information on it is unusually complete: model number, Bluetooth version, battery capacity of each earbud and of the charging case, manufacturer details, and usage-warning symbols are all there. It is the most complete information label I have seen on any Bluetooth earbud box, bar none; the attention to detail says something about the vendor's rigor and professionalism.

A few photos of the unopened packaging are attached.

Next, let's open the outer box and see the mysterious "gift" inside!

Box contents

Opening the outer box, the first thing you see is a charging case about the size of an eraser. The case is rounded and cute, again in a single color — black — with an Abramtek logo in the center, consistent with the outer packaging. The first impression is refined and high-end; judging by the unboxing alone, it is hard to believe these earbuds retail for only 199 RMB.

Taking the case out and turning it over in your hand, the "eraser" has a matte-finish shell with an excellent feel, though it is not dust-friendly — any dust that settles on it is quite visible. Measured, the case is only 4.7 × 3.2 × 2.1 cm, so it disappears into a hand or a jacket pocket with no sense of bulk; in my view, easy portability should be the core requirement for a charging case.

The case's small size brings visual appeal and portability, but it also limits its battery capacity: in testing, a fully charged case can recharge the earbuds about 3–4 times before running empty. On the other hand, the case itself charges quickly — roughly an hour to full.

On the front of the case is a battery-level indicator group of 4 blue LEDs, which light up while the earbuds or the case are charging.

On the side is a standard USB Type-C charging port. With both mainstream Apple products and new Android devices converging on Type-C, this choice follows the market trend — some Bluetooth earbuds on the market still ship with micro-USB — so credit where it is due: convenient and practical.

Under the case sits a small box holding the manual and a Type-C charging cable, also all black. Unfortunately this inner box fits so tightly against the outer shell that it tears easily when pulled out — packaging with a "self-destruct feature". The vendor also includes 4 extra ear tips (2 pairs) to fit different ear sizes.

Finally, a couple of group photos!

Product experience

Now for the actual experience. The earbuds use the Bluetooth 5.0 transport, which compared with Bluetooth 4.2 products is a qualitative step up in data compression ratio, transmission range, and connection stability, and its low-power mode extends battery life.

Battery life

Each E8 earbud carries a 40 mAh battery rated for up to 6 hours of playback on a full charge, and supports a fast-charge mode of roughly 1 hour of playback from 5 minutes of charging. I did not test the runtime exhaustively, but on a full charge the earbuds started playing the low-battery warning tone after about 3 hours of continuous conference calls — take the rated figure with that in mind.

Sound

On to the sound. The E8 aims for a balanced signature across highs and lows, with clear layering. Abramtek has tuned it carefully toward a wide, powerful, high-fidelity presentation: on first listen the overall character is "thick", and high-frequency instruments sound slightly recessed, but there is no audible distortion even at high volume. Across the three bands the tuning is even — full highs, thick bass with strong, resilient reach and deep extension, and well-resolved, clearly layered mids with pleasant vocal detail. Their speakers use patented technology to prevent distortion, and the earbuds appear to inherit that lineage.

Dust and water resistance

The E8's one-piece, sealed industrial design carries a stated IP6-level dust and water resistance rating, enough to shrug off sweat and rain. Even "sweating buckets" during exercise is no problem — noticeably better than a previous pair of wireless earbuds I used.

Interference and noise reduction

The E8 uses active noise cancellation, which at this price point gives more users a taste of that experience. The sound is detailed, the battery lasts, the Bluetooth link is stable, and the fit is comfortable. The vendor also includes a dedicated set of bass-enhancing, noise-isolating ear tips.

Single- and dual-ear use

The E8 is a true TWS product: the left and right earbuds can be used independently. When an earbud is taken out of the charging case and disconnected from charging, it powers on and reconnects automatically. Dual-channel transmission keeps the Bluetooth link stable with no dropouts — reconnection on wearing is near-instant and stutter is close to zero — making everyday use simple.

Button controls

The E8 uses physical buttons rather than the touch controls now in fashion. Personally I prefer buttons: both the tactile certainty and the reduced risk of accidental touches are reassuring. The buttons are also well executed — the silicone surface feels pleasant, with a clear click when pressed.
Pros:

  • Fresh, distinctive looks; compact and recognizable — a genuine selling point.
  • Stable Bluetooth connection with class-leading low latency.
  • Strong value for money among comparable products (199 RMB retail).

Cons:

  • Double-pressing a button wakes Siri, but the manual never mentions it.
  • The fit in the ear canal is loose; positioned carelessly, an earbud can slip out — they need to be seated firmly in the ear.
  • Limited battery life; the charging case's capacity is modest.
  • Few button functions — there is no volume control, so volume must be adjusted on the phone.

lion followed 图书策划小罗 (@tushucehuaxiaoluo) · 2019-12-26

lion published an article · 2019-12-25

Installing the Open-Falcon monitoring system

Official documentation

https://book.open-falcon.org/...

Package download

  • Baidu Netdisk
Link: https://pan.baidu.com/s/1xLzUzZpAagtJHpnX5SjUiw  extraction code: ve3u

Environment setup

Install Redis

yum install -y redis

Install MySQL

yum install -y mysql-server

Binary installation reference

  • Create a database user and grant privileges
GRANT ALL PRIVILEGES ON *.* TO 'real_user'@'localhost' IDENTIFIED BY 'real_password' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '' WITH GRANT OPTION;

Install Git

#Check the Linux release:
$ cat /etc/redhat-release
#Install git:
$ yum install git
#After the download and install finish, check the git version:
$ git --version

Install Go

tar -C /usr/local/ -xzvf go1.13.5.linux-amd64.tar.gz
  • Configure environment variables
vim /etc/profile
export GOROOT=/usr/local/go
export GOPATH=/home/bruce/goProject 
export GOBIN=$GOPATH/bin
export PATH=$PATH:$GOROOT/bin
export PATH=$PATH:$GOPATH/bin
export FALCON_HOME=/home/work
export WORKSPACE=$FALCON_HOME/open-falcon
source /etc/profile
mkdir -p $WORKSPACE
  • Verify the installation
go version
go version go1.13.5 linux/amd64
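The export-then-source pattern used for /etc/profile above can be tried safely against a throwaway file first. A minimal sketch — the temp file stands in for /etc/profile, which this demo deliberately does not touch:

```shell
# Write the exports to a temporary file instead of /etc/profile,
# then source it and confirm the Go toolchain directory landed on PATH.
profile="$(mktemp)"
cat > "$profile" <<'EOF'
export GOROOT=/usr/local/go
export GOPATH=$HOME/goProject
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
EOF
. "$profile"   # same effect as `source /etc/profile` for the current shell
echo "$PATH" | grep -q "/usr/local/go/bin" && echo "go toolchain on PATH"
rm -f "$profile"
```

The same check works after editing the real /etc/profile: open a new shell (or `source /etc/profile`) and grep `$PATH`.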

Install the backend

tar -xzvf open-falcon-v0.2.1.tar.gz -C $WORKSPACE
  • Start all backend components on one machine
#First confirm that the database user/password in the config files match reality; otherwise edit the configs.
cd $WORKSPACE
grep -Ilr 3306  ./ | xargs -n1 -- sed -i 's/root:/real_user:real_password/g'
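Before running that sed across every file under $WORKSPACE, it can be worth checking the substitution on a throwaway file. A small sketch — the DSN line below is a made-up sample, not copied from a real Open-Falcon config:

```shell
# Try the same substitution on a temp file and inspect the result
# before unleashing it on the whole workspace.
cfg="$(mktemp)"
echo 'dsn: "root:@tcp(127.0.0.1:3306)/falcon_portal"' > "$cfg"
sed -i 's/root:/real_user:real_password/g' "$cfg"
result="$(cat "$cfg")"
echo "$result"
rm -f "$cfg"
```

Note the pattern replaces the literal `root:` including the colon, so the replacement must supply its own `user:password` separator, as above.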
#Edit the agent (collector) config file
vim agent/config/cfg.json
{
    "debug": true,
    "hostname": "10.159.42.37",
    "ip": "",
    "plugin": {
        "enabled": false,
        "dir": "./plugin",
        "git": "https://github.com/open-falcon/plugin.git",
        "logs": "./logs"
    },
    "heartbeat": {
        "enabled": true,
        "addr": "10.159.44.248:6030",
        "interval": 60,
        "timeout": 1000
    },
    "transfer": {
        "enabled": true,
        "addrs": [
            "10.159.44.248:8433"
        ],
        "interval": 60,
        "timeout": 1000
    },
    "http": {
        "enabled": true,
        "listen": ":1988",
        "backdoor": false
    },
    "collector": {
        "ifacePrefix": ["eth", "em"],
        "mountPoint": []
    },
    "default_tags": {
    },
    "ignore": {
        "cpu.busy": false,
        "df.bytes.free": false,
        "df.bytes.total": false,
        "df.bytes.used": false,
        "df.bytes.used.percent": false,
        "df.inodes.total": false,
        "df.inodes.free": false,
        "df.inodes.used": false,
        "df.inodes.used.percent": false,
        "mem.memtotal": false,
        "mem.memused": false,
        "mem.memused.percent": false,
        "mem.memfree": false,
        "mem.swaptotal": false,
        "mem.swapused": false,
        "mem.swapfree": false
    }
}
  • Start
cd $WORKSPACE
./open-falcon start
or
./open-falcon start agent
# Check the status of all modules
./open-falcon check
# Tail the log of one module
./open-falcon monitor agent
  • More command-line usage
# ./open-falcon [start|stop|restart|check|monitor|reload] module
./open-falcon start agent

./open-falcon check
        falcon-graph         UP           53007
          falcon-hbs         UP           53014
        falcon-judge         UP           53020
     falcon-transfer         UP           53026
       falcon-nodata         UP           53032
   falcon-aggregator         UP           53038
        falcon-agent         UP           53044
      falcon-gateway         UP           53050
          falcon-api         UP           53056
        falcon-alarm         UP           53063

For debugging, you can check $WorkDir/$moduleName/log/logs/xxx.log
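A quick scripted sanity check is to count how many modules `./open-falcon check` reports as UP. A sketch using a captured sample of the output above (in practice you would pipe the real command, e.g. `./open-falcon check | awk '$2=="UP"' | wc -l`):

```shell
# Count UP modules in `open-falcon check`-style output.
# check_output here is a three-line sample, not a live run.
check_output='falcon-graph UP 53007
falcon-hbs UP 53014
falcon-judge UP 53020'
up_count="$(printf '%s\n' "$check_output" | awk '$2 == "UP"' | wc -l)"
echo "modules up: $up_count"
```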
  • Disable the firewall
1. Check the firewall state: systemctl status firewalld.service
2. If the output shows "active (running)" in green, the firewall is running.
3. Stop the running firewall: systemctl stop firewalld.service
4. Check again with systemctl status firewalld.service; it should now show "inactive (dead)".
5. Stopping the service this way does not survive a reboot — the firewall comes back automatically. To disable it permanently, also run systemctl disable firewalld.service.

Deploying the frontend:

CentOS 7.0 or later is recommended; on CentOS 6 the bundled Python version is not compatible with the dashboard. See the official manual for details.
  • Reference documentation

https://book.open-falcon.org/...

  • Clone the frontend code
cd $WORKSPACE
git clone https://github.com/open-falcon/dashboard.git
  • Install dependencies
yum install -y python-virtualenv
yum install -y python-devel
yum install -y openldap-devel
yum install -y mysql-devel
yum groupinstall "Development tools"

cd $WORKSPACE/dashboard/
virtualenv ./env

./env/bin/pip install -r pip_requirements.txt -i https://pypi.douban.com/simple
  • Edit the dashboard configuration
#Change the database user/password and the API service address
vim /home/work/open-falcon/dashboard/rrd/config.py
# Falcon+ API
API_ADDR = os.environ.get("API_ADDR","http://10.159.44.248:8080/api/v1")
API_USER = os.environ.get("API_USER","admin")
API_PASS = os.environ.get("API_PASS","password")

# portal database
# TODO: read from api instead of db
PORTAL_DB_HOST = os.environ.get("PORTAL_DB_HOST","10.159.44.248")
PORTAL_DB_PORT = int(os.environ.get("PORTAL_DB_PORT",3306))
PORTAL_DB_USER = os.environ.get("PORTAL_DB_USER","root")
PORTAL_DB_PASS = os.environ.get("PORTAL_DB_PASS","")
PORTAL_DB_NAME = os.environ.get("PORTAL_DB_NAME","falcon_portal")

# alarm database
# TODO: read from api instead of db
ALARM_DB_HOST = os.environ.get("ALARM_DB_HOST","10.159.44.248")
ALARM_DB_PORT = int(os.environ.get("ALARM_DB_PORT",3306))
ALARM_DB_USER = os.environ.get("ALARM_DB_USER","root")
ALARM_DB_PASS = os.environ.get("ALARM_DB_PASS","")
ALARM_DB_NAME = os.environ.get("ALARM_DB_NAME","alarms")
  • Start
#Start in developer mode
./env/bin/python wsgi.py
open http://127.0.0.1:8081 in your browser.
#Start for production
bash control start
open http://127.0.0.1:8081 in your browser.
  • Stop
bash control stop
  • Tail the logs
bash control tail
  • Screenshots

(screenshot: Open-Falcon dashboard)

Email alerts

  • Edit the alarm service config file
vim /home/work/open-falcon/alarm/config/cfg.json
#Set the alert address to your mail-sending API endpoint

#Save and quit
:wq
  • Restart
cd /home/work/open-falcon
./open-falcon restart alarm

The alert API endpoint

/**
     * Sends Open-Falcon alert emails.
     * @param request
     * @return response JSON
     * @throws Exception
     */
    @RequestMapping(value = "/senderFalEmail", method = RequestMethod.POST)
    public @ResponseBody
    BaseRestResponse senderFalEmail(HttpServletRequest request){
        log.info("======== entering senderFalEmail controller");
        String to_addrs = request.getParameter("tos");
        String subject = request.getParameter("subject");
        String content = request.getParameter("content");
        log.info("======== senderFalEmail ========== to: " + to_addrs + ", subject: " + subject + ", body: " + content);
        MimeMessage message = mailSender.createMimeMessage();
        try{
            MimeMessageHelper helper = new MimeMessageHelper(message, true, "utf-8"); // second argument marks the message as multipart
            helper.setFrom("cat@usmartcare.com");
            helper.setTo(to_addrs);
            helper.setSubject(subject);
            helper.setText(content, true); // second argument marks the body as HTML
            mailSender.send(message);
        }catch (MailException mailException){
            log.info("alert email send failed! content=" + content + mailException.getMessage());
            return new BaseRestResponseData(MessageError.EMAIL_SEND_ERROR).data(mailException.getMessage());
        }catch (MessagingException messageException){
            log.info("alert email message error! content=" + content + messageException.getMessage());
            return new BaseRestResponseData(MessageError.EMAIL_SEND_ERROR).data(messageException.getMessage());
        }
        log.info("senderFalEmail finished");
        return new BaseRestResponseData();
    }

Sample alert emails

PROBLEM P0 Endpoint:10.199.96.152 Metric:mem.memfree.percent Tags: all(#3): 7.43705<=10 Note:服务器内存告警,请及时处理 Max:3, Current:1 Timestamp:2019-12-25 09:58:00 http://portalip:8081/portal/template/view/4


OK P0 Endpoint:10.199.96.152 Metric:mem.memfree.percent Tags: all(#3): 13.87895<=10 Note:服务器内存告警,请及时处理 Max:3, Current:1 Timestamp:2019-12-25 09:59:00 http://portalip:8081/portal/template/view/4 

Q&A

Redis installation issues
MySQL installation issues
User-creation caveats on MySQL before 5.6


lion followed 刘小夕 (@liuyan666) · 2019-12-25

lion published an article · 2019-10-31

EMQX server tuning

Architecture before optimization

(diagram: architecture before optimization)

Main problems

  • emqtt 2.x version issues
  • Linux kernel parameters
  • Erlang (erl) configuration parameters
HAProxy problems
  • Single point of failure
  • Maximum connection limit

    • I had set the maximum TCP connection count in the config file to 2049, so at most 2049 TCP connections could be held at once, capping the client connection success rate.
  • TCP keepalive duration

    • I had set the maximum TCP heartbeat interval to 30 seconds.
  • Other unreasonable settings (dispatch and retry policy)
  • Server port reuse was not enabled

Architecture after optimization

(diagram: EMQX high-availability architecture)

Optimizations

EMQ version upgrade
  • emqx 3.1
  • Install/deploy
rpm -ivh emqx-centos7-v3.1.0.x86_64.rpm
  • Start/stop
service emqx start/stop/restart
  • Join/leave a cluster
emqx_ctl cluster join emqx@ip
Linux system tuning
HAProxy tuning
  • Install
[root@dz home]# yum install -y pcre-devel  bzip2-devel  gcc gcc-c++ make
[root@dz home]# tar -zxvf haproxy-1.8.13.tar.gz 
[root@dz home]# cd haproxy-1.8.13
[root@dz haproxy-1.8.13]# make TARGET=linux2628 PREFIX=/usr/local/haproxy
[root@dz haproxy-1.8.13]# make install PREFIX=/usr/local/haproxy
install -d "/usr/local/haproxy/sbin"
install haproxy  "/usr/local/haproxy/sbin"
install -d "/usr/local/haproxy/share/man"/man1
install -m 644 doc/haproxy.1 "/usr/local/haproxy/share/man"/man1
install -d "/usr/local/haproxy/doc/haproxy"
for x in configuration management architecture peers-v2.0 cookie-options lua WURFL-device-detection proxy-protocol linux-syn-cookies network-namespaces DeviceAtlas-device-detection 51Degrees-device-detection netscaler-client-ip-insertion-protocol peers close-options SPOE intro; do \
    install -m 644 doc/$x.txt "/usr/local/haproxy/doc/haproxy" ; \
done
[root@dz haproxy-1.8.13]# 
[root@dz haproxy-1.8.13]# /usr/local/haproxy/sbin/haproxy -v
HA-Proxy version 1.8.13 2018/07/30
Copyright 2000-2018 Willy Tarreau <willy@haproxy.org>

[root@dz haproxy-1.8.13]# 
[root@dz haproxy-1.8.13]# mkdir /etc/haproxy
[root@dz haproxy-1.8.13]# groupadd haproxy
[root@dz haproxy-1.8.13]# useradd -s /sbin/nologin -M -g haproxy haproxy  # add a nologin haproxy account with matching owner and group
[root@dz haproxy-1.8.13]# cp examples/haproxy.init /etc/init.d/haproxy
[root@dz haproxy-1.8.13]# chmod 755 /etc/init.d/haproxy
[root@dz haproxy-1.8.13]# chkconfig --add haproxy
[root@dz haproxy-1.8.13]# cp /usr/local/haproxy/sbin/haproxy /usr/sbin/
  • Configuration tuning
vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
#Defaults and global settings
defaults
    log                     global
    option                  dontlognull
    option http-server-close
    # option forwardfor
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         60s
    timeout client          2m
    timeout server          2m
    timeout http-keep-alive 10s
    timeout check           10s
#Frontend listening ports
frontend emqtt-front
    bind *:1883
    maxconn     1000000
    mode tcp
    default_backend emqtt-backend
#Backend forwarding targets
backend emqtt-backend
    balance roundrobin
    # balance source
    server emq1 10.199.96.149:9883 check inter 100000 fall 2 rise 5 weight 1
    server emq2 10.199.96.150:9883 check inter 100000 fall 2 rise 5 weight 1
    server emq3 10.199.96.152:9883 check inter 100000 fall 2 rise 5 weight 1
    # source 0.0.0.0 usesrc clientip

frontend emqtt-admin-front
    bind *:18083
    mode http
    default_backend emqtt-admin-backend

backend emqtt-admin-backend
    mode http
    balance roundrobin
    server emq1 10.199.96.149:18083 check
    server emq2 10.199.96.150:18083 check
    server emq3 10.199.96.152:18083 check
#Stats console
listen admin_stats
        stats   enable
        bind    *:8081
        mode    http
        option  httplog
        log     global
        maxconn 10
        stats   refresh 30s
        stats   uri /admin
        stats   realm haproxy
        stats   auth admin:admin
        stats   hide-version
        stats   admin if TRUE
  • Start
systemctl start haproxy
Aug 08 09:14:34 dz haproxy[3223]: /etc/rc.d/init.d/haproxy: line 26: [: =: unary operator expected
Edit /etc/rc.d/init.d/haproxy, changing
[ ${NETWORKING} = "no" ] && exit 0
to
[ "${NETWORKING}" = "no" ] && exit 0
then run systemctl daemon-reload
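The quoting fix above matters because with NETWORKING unset or empty, the unquoted test expands to `[ = "no" ]`, which `[` rejects as a unary-operator error; quoting produces the valid `[ "" = "no" ]`. A self-contained demo of the corrected form:

```shell
# With NETWORKING empty, the quoted comparison evaluates cleanly to false
# instead of raising "[: =: unary operator expected".
NETWORKING=""
if [ "${NETWORKING}" = "no" ]; then
    networking_check="disabled"
else
    networking_check="enabled"
fi
echo "$networking_check"
```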
  • Start on boot
chkconfig haproxy on
  • Kernel tuning for high connection counts
cat << EOF >> /etc/sysctl.conf
fs.file-max=2097152 
fs.nr_open=2097152
net.core.somaxconn=32768
net.ipv4.tcp_max_syn_backlog=16384
net.core.netdev_max_backlog=16384
net.ipv4.ip_local_port_range=500 65535
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.core.optmem_max=16777216
net.ipv4.tcp_rmem=1024 4096 16777216
net.ipv4.tcp_wmem=1024 4096 16777216
net.nf_conntrack_max=1000000
net.netfilter.nf_conntrack_max=1000000
net.netfilter.nf_conntrack_tcp_timeout_time_wait=30
net.ipv4.tcp_max_tw_buckets=1048576
net.ipv4.tcp_fin_timeout = 15
EOF


cat << EOF >>/etc/security/limits.conf
*      soft   nofile      1048576
*      hard   nofile      1048576
EOF

echo DefaultLimitNOFILE=1048576 >>/etc/systemd/system.conf 

echo session required /usr/lib64/security/pam_limits.so >>/etc/pam.d/login

cat << EOF >> /etc/sysctl.conf
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_syncookies = 1
EOF
  • Deploy keepalived
yum install keepalived
  • Add the config file
### /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     huangmeng@dyjs.com
    #  failover@firewall.loc
    #  sysadmin@firewall.loc
   }
   notification_email_from huangmeng4520@163.com
   smtp_server smtp.163.com
   smtp_connect_timeout 30
   router_id mqtt40
   vrrp_skip_check_adv_addr
#    vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    mcast_src_ip 172.16.40.22
    # unicast_peer {
    #   172.18.40.41   # peer IP address — must not be omitted; VRRP needs it
    # }
    virtual_ipaddress {
        172.16.40.24/24
        # 192.168.200.16
        # 192.168.200.17
        # 192.168.200.18
    }
}
  • Start on boot
systemctl enable keepalived
  • 172.16.40.24 is the virtual IP served by keepalived for external access

Load-testing tool

Install dependencies
yum -y install ncurses-devel openssl-devel unixODBC-devel gcc-c++
mkdir -p /app/install && cd /app/install/
wget http://erlang.org/download/otp_src_21.3.tar.gz
tar -xvzf otp_src_21.3.tar.gz
cd otp_src_21.3
./configure --prefix=/usr/local/erlang --with-ssl --enable-threads --enable-smp-support --enable-kernel-poll --enable-hipe --without-javac
make && make install
Configure the Erlang environment variables
vim /etc/profile
# erlang
export ERLPATH=/usr/local/erlang
export PATH=$ERLPATH/bin:$PATH
source /etc/profile
erl -v
Install the benchmark tool
yum -y install git
cd /app/install
git clone https://github.com/emqtt/emqtt_benchmark.git
cd emqtt_benchmark
make
## Adjust system parameters, then start the load test
sysctl -w net.ipv4.ip_local_port_range="500 65535"
echo 1000000 > /proc/sys/fs/nr_open
ulimit -n 1000000
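The `ip_local_port_range` setting above bounds how many concurrent outbound connections a single (source IP, destination IP:port) pair can hold. A quick back-of-envelope check of that ceiling:

```shell
# With ip_local_port_range set to "500 65535", this is the number of
# ephemeral source ports available per destination:
port_low=500
port_high=65535
max_ports=$((port_high - port_low + 1))
echo "usable ephemeral ports: $max_ports"
```

This is why multi-client benchmarks past ~65k connections need multiple source IPs or multiple load-generator hosts.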
Benchmark command and sample output
[root@zhanghp2 emqtt_benchmark]# ./emqtt_bench pub  -h 192.168.199.132 -p 1883 -c 500 -I 10 -t bench21/%i -s 256
connected: 1
connected: 2
connected: 3
connected: 4
connected: 5
connected: 6
connected: 7
connected: 8
connected: 9
connected: 10
connected: 11
connected: 12
connected: 13
connected: 14
connected: 15
connected: 16
Benchmark options
./emqtt_bench pub --help
./emqtt_bench sub --help
Common errors
conneted: 138
client 49863 EXIT: {shutdown,eaddrnotavail}
# no local port available to allocate
[error] [Client <0.7267.0>] CONNACK Timeout!
client 7590 EXIT: {shutdown,connack_timeout}
# connection timed out
conneted: 191
client 49810 EXIT: {shutdown,econnrefused}
# connection refused
# inspect a port's connections
netstat -npta | grep <port>
# count a port's connections
netstat -npta | grep <port> | wc -l
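The grep-and-count pipeline at the end can be verified on a captured sample before pointing it at live `netstat` output. A sketch — the three lines below are made-up netstat-style rows, with two connections on port 1883:

```shell
# Count connections to a given port, as in the netstat pipeline above.
sample='tcp 0 0 192.168.199.132:1883 10.0.0.5:50122 ESTABLISHED
tcp 0 0 192.168.199.132:1883 10.0.0.6:50123 ESTABLISHED
tcp 0 0 192.168.199.132:22   10.0.0.7:50124 ESTABLISHED'
conn_1883="$(printf '%s\n' "$sample" | grep -c ':1883 ')"
echo "connections on 1883: $conn_1883"
```

The trailing space in `':1883 '` keeps the pattern from also matching ports like 18830.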

Monitoring page

(screenshot: EMQX monitoring dashboard)


lion liked an article · 2019-10-11

Deploying HAProxy

HAProxy installation

Download: http://www.haproxy.org/downlo...
Upload the tarball to the server and install:

tar xvf haproxy-1.7.8.tar.gz
cd haproxy-1.7.8
make TARGET=linux2632 PREFIX=/usr/local/haproxy
make install PREFIX=/usr/local/haproxy
mkdir -p /usr/local/haproxy/conf

Sample HAProxy init script

#!/bin/bash
#
# haproxy
#
# chkconfig: 35 85 15
# description: HAProxy is a free, very fast and reliable solution \
# offering high availability, load balancing, and \
# proxying for TCP and HTTP-based applications
# processname: haproxy
# config: /etc/haproxy.cfg
# pidfile: /var/run/haproxy.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

config="/usr/local/haproxy/conf/haproxy.cfg"
exec="/usr/local/haproxy/sbin/haproxy"
prog=$(basename $exec)

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/subsys/haproxy

check() {
    $exec -c -V -f $config
}

start() {
    $exec -c -q -f $config
    if [ $? -ne 0 ]; then
        echo "Errors in configuration file, check with $prog check."
        return 1
    fi
 
    echo -n $"Starting $prog: "
    # start it up here, usually something like "daemon $exec"
    daemon $exec -D -f $config -p /var/run/$prog.pid
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    # stop it here, often "killproc $prog"
    killproc $prog 
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    $exec -c -q -f $config
    if [ $? -ne 0 ]; then
        echo "Errors in configuration file, check with $prog check."
        return 1
    fi
    stop
    start
}

reload() {
    $exec -c -q -f $config
    if [ $? -ne 0 ]; then
        echo "Errors in configuration file, check with $prog check."
        return 1
    fi
    echo -n $"Reloading $prog: "
    $exec -D -f $config -p /var/run/$prog.pid -sf $(cat /var/run/$prog.pid)
    retval=$?
    echo
    return $retval
}

force_reload() {
    restart
}

fdr_status() {
    status $prog
}

case "$1" in
    start|stop|restart|reload)
        $1
        ;;
    force-reload)
        force_reload
        ;;
    checkconfig)
        check
        ;;
    status)
        fdr_status
        ;;
    condrestart|try-restart)
      [ ! -f $lockfile ] || restart
    ;;
    *)
        echo $"Usage: $0 {start|stop|status|checkconfig|restart|try-restart|reload|force-reload}"
        exit 2
esac

Sample HAProxy configuration file

global
log 127.0.0.1   local3
maxconn 65535
chroot /usr/local/haproxy
uid 1000
gid 1000
daemon
nbproc 1
pidfile /var/run/haproxy.pid

defaults
log     127.0.0.1       local3
mode    http
option  httplog
option  httpclose
option  dontlognull
option  forwardfor
option  redispatch
retries 2
maxconn 2000
balance roundrobin
stats   uri     /status
stats auth admin:admin123
timeout connect    5000
timeout client     50000
timeout server     50000
listen  web_proxy
        bind 0.0.0.0:80
        mode http
        balance roundrobin
        cookie SERVERID insert indirect nocache
        option httpclose
        option forwardfor
        option accept-invalid-http-request
        option httpchk HEAD /index.php HTTP/1.0
        server web01 10.0.11.156:80  weight 5 check inter 2000  fall 3
        server web02 10.0.11.157:80  weight 5 check inter 2000  fall 3
        server web03 10.0.11.158:80  weight 5 check inter 2000  fall 3
        server web04 10.0.11.159:80  weight 5 check inter 2000  fall 3

Adding HAProxy logging

vim /etc/syslog.conf
Add:
local3.* /var/log/haproxy.log
local0.* /var/log/haproxy.log

vim /etc/sysconfig/syslog
Change:
SYSLOGD_OPTIONS="-r -m 0"
service syslog restart

lion bookmarked an article · 2019-10-08

EMQ configuration

System version: Ubuntu 16.04 LTS
EMQ version: 2.3.11

The broker I use is EMQ (emqttd). For an introduction and the full configuration reference, see the official site; here I only cover the main settings.

Download and install

I downloaded emqttd-ubuntu16.04-v2.3.11_amd64.deb from the official site; after downloading, it installs directly from the package.

Start

After installation, run sudo emqttd console in a terminal to start the broker. On startup it prints output like the following.

Exec: /usr/lib/emqttd/erts-9.0/bin/erlexec -boot /usr/lib/emqttd/releases/2.3.11/emqttd -mode embedded -boot_var ERTS_LIB_DIR /usr/lib/emqttd/erts-9.0/../lib -mnesia dir "/var/lib/emqttd/mnesia/emq@127.0.0.1" -config /var/lib/emqttd/configs/app.2018.10.03.18.37.02.config -args_file /var/lib/emqttd/configs/vm.2018.10.03.18.37.02.args -vm_args /var/lib/emqttd/configs/vm.2018.10.03.18.37.02.args -- console
Root: /usr/lib/emqttd
/usr/lib/emqttd
Erlang/OTP 20 [erts-9.0] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:32] [hipe] [kernel-poll:true]

starting emqttd on node 'emq@127.0.0.1'
emqttd ctl is starting...[ok]
emqttd hook is starting...[ok]
emqttd router is starting...[ok]
emqttd pubsub is starting...[ok]
emqttd stats is starting...[ok]
emqttd metrics is starting...[ok]
emqttd pooler is starting...[ok]
emqttd trace is starting...[ok]
emqttd client manager is starting...[ok]
emqttd session manager is starting...[ok]
emqttd session supervisor is starting...[ok]
emqttd wsclient supervisor is starting...[ok]
emqttd broker is starting...[ok]
emqttd alarm is starting...[ok]
emqttd mod supervisor is starting...[ok]
emqttd bridge supervisor is starting...[ok]
emqttd access control is starting...[ok]
emqttd system monitor is starting...[ok]
emqttd 2.3.11 is running now
Eshell V9.0  (abort with ^G)
(emq@127.0.0.1)1> Load emq_mod_presence module successfully.
dashboard:http listen on 0.0.0.0:18083 with 4 acceptors.
mqtt:tcp listen on 127.0.0.1:11883 with 4 acceptors.
mqtt:tcp listen on 0.0.0.0:1883 with 16 acceptors.
mqtt:ws listen on 0.0.0.0:8083 with 4 acceptors.
mqtt:ssl listen on 0.0.0.0:8883 with 16 acceptors.
mqtt:wss listen on 0.0.0.0:8084 with 4 acceptors.
mqtt:api listen on 0.0.0.0:8080 with 4 acceptors.
The broker can also be started and stopped with:
systemctl start emqttd
systemctl stop emqttd

Directories

Directory — purpose
/usr/lib/emqttd/ — all executables, including plugins
/etc/emqttd/ — all configuration files, including plugin configs

Configuration

Main Erlang VM parameters

Parameter — description
node.process_limit — maximum number of Erlang processes allowed by the VM. One MQTT connection consumes 2 Erlang processes, so set this > max connections × 2.
node.max_ports — maximum number of Erlang ports allowed by the VM. One MQTT connection consumes 1 port, so set this > max connections.
node.dist_listen_min — lower bound of the TCP port range used for inter-node distribution. Note: if there is a firewall between nodes, open this port range.
node.dist_listen_max — upper bound of the TCP port range used for inter-node distribution. Note: if there is a firewall between nodes, open this port range.
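The sizing rules in the table translate directly into arithmetic on the target connection count. A sketch for a hypothetical target of one million connections (the target figure is an example, not from the original post):

```shell
# Each MQTT connection costs ~2 Erlang processes and 1 Erlang port,
# so derive lower bounds for the two VM limits from the target:
target_conns=1000000
min_process_limit=$((target_conns * 2))
min_max_ports=$target_conns
echo "node.process_limit > $min_process_limit, node.max_ports > $min_max_ports"
```

In practice the limits are set with headroom above these minimums, since the broker itself also consumes processes and ports.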

Log parameters

Console log

## Console log. Enum: off, file, console, both
log.console = console

## Console log level. Enum: debug, info, notice, warning, error, critical, alert, emergency
log.console.level = error

## Console log file
## log.console.file = log/console.log

Error log

## Error log file
log.error.file = log/error.log

Crash log

## Enable the crash log. Enum: on, off
log.crash = on

log.crash.file = log/crash.log

Syslog

## Syslog. Enum: on, off
log.syslog = on

##  syslog level. Enum: debug, info, notice, warning, error, critical, alert, emergency
log.syslog.level = error

MQTT protocol parameters

Maximum allowed ClientId length

## Max ClientId Length Allowed.
mqtt.max_clientid_len = 1024

Maximum MQTT packet size

## Max Packet Size Allowed, 64K by default.
mqtt.max_packet_size = 64KB

Client idle timeout

Maximum idle time allowed for an MQTT client (socket connected, but no CONNECT packet received yet):

## Client Idle Timeout (Second)
mqtt.client.idle_timeout = 30

Client connection statistics

## Enable client Stats: on | off
mqtt.client.enable_stats = off

Forced GC

## Force GC: integer. Value 0 disabled the Force GC.
mqtt.conn.force_gc_count = 100

For plugin configuration, see the plugin documentation.

MQTT authentication

Authentication in the EMQ broker is provided by a chain of authentication plugins; it supports username/password, ClientID, and anonymous authentication.

Anonymous authentication (anonymous) is enabled by default; loading authentication plugins builds a chain of modules:

           -----------------           -----------------           -----------
Client --> | Username auth | -ignore-> | ClientID auth | -ignore-> | Anonymous |
           -----------------           -----------------           -----------
                  |                         |                         |
                 \|/                       \|/                       \|/
            allow | deny              allow | deny              allow | deny

If both username/password and ClientID authentication are enabled, the username/password is checked first; if it succeeds the ClientID is ignored, and if it fails the ClientID is checked next.

⚠️ ClientID authentication also requires a password to be configured.
Usernames and ClientIDs are unique, and so is a session connected to the broker: a second connection with the same identity kicks the previous one offline.

lion published an article · 2019-09-12

Setting up MySQL master-slave replication

Background

Being a backend engineer moonlighting as a sysadmin is not easy — these notes are for the record. When I joined there were no in-house backend developers; I spent the first 3 months on handover, mucking in with a team of contractors who, fairly enough, did exactly what they were paid for, so the database had always run as a single node. I literally dreamed of the primary dying with unrecoverable data and having to pack up and leave. Since data is the lifeblood of a company, I decided to add a continuously replicating standby. The tutorials I found online mostly failed for me; in the end it took help from my old colleague @威哥 to get the job done. Here we go.

Environment

|Name|Version|IP|Role|Spec|

|CentOS|6.8|192.168.199.129|master|1-core 2 GB VM|

|CentOS|6.8|192.168.199.131|slave|1-core 2 GB VM|

|MySQL|5.6.33|-|mysql-5.6.33-linux-glibc2.5-x86_64.tar.gz|downloaded from the official site|

Installing MySQL

  • Unpack

tar -zxvf mysql-5.6.33-linux-glibc2.5-x86_64.tar.gz

cp -r mysql-5.6.33-linux-glibc2.5-x86_64 /usr/local/mysql
  • Add the user group and user

#add the mysql group
groupadd mysql
#add user mysql to group mysql
useradd -g mysql mysql
  • Install

#create the MySQL data directory

cd /usr/local/mysql/
mkdir ./data/mysql
#give ownership to the mysql user
chown -R mysql:mysql ./
#initialize the data directory
./scripts/mysql_install_db --user=mysql --datadir=/usr/local/mysql/data/mysql
#register mysqld as a system service
cp support-files/mysql.server /etc/init.d/mysqld
#make it executable
chmod 755 /etc/init.d/mysqld
#default config path read at startup
cp support-files/my-default.cnf /etc/my.cnf

#edit the startup script
vi /etc/init.d/mysqld
#change:
basedir=/usr/local/mysql/

datadir=/usr/local/mysql/data/mysql
#start the service
service mysqld start
#test the connection
./mysql/bin/mysql -uroot
#add to PATH in /etc/profile so the mysql command works anywhere
export PATH=$PATH:/usr/local/mysql/bin
source /etc/profile
#start mysql
service mysqld start
#stop mysql
service mysqld stop
#check status
service mysqld status

#grant privileges

use mysql;
CREATE USER slave IDENTIFIED BY 'slave_password';
#read-only replication privileges for the slave user
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'slave'@'%';
-- GRANT ALL PRIVILEGES ON *.* TO 'slave'@'%' ;
FLUSH PRIVILEGES;
or
GRANT ALL PRIVILEGES ON *.* TO 'slave'@'%' IDENTIFIED BY 'yourpassword' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Run the installation above on both servers.

Master-slave configuration

  • Method: GTID-based replication

GTID
Global transaction identifier:
one transaction maps to one globally unique ID, and
a given GTID is executed at most once on a server.
GTID replication replaces traditional position-based replication.
Supported since MySQL 5.6.2, matured in 5.6.10.
A GTID is composed of
server_uuid:sequence_number
  • Configuration notes

Edit /etc/my.cnf on the master (192.168.199.129); the following parameters go under [mysqld]:


gtid_mode=on

enforce-gtid-consistency=on

server-id = 813316

log-bin = /usr/local/mysql/data

binlog_format = row

log-slave-updates=1

binlog-gtid-simple-recovery=1
  • Restart the service

[root@localhost ~]# service mysqld restart

Starting MySQL.... SUCCESS!
  • Log in to MySQL and verify that the master settings took effect

[root@localhost etc]# mysql -uroot

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.33-log MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show master status;
+-------------+----------+--------------+------------------+------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------+----------+--------------+------------------+------------------------------------------+
| data.000004 | 191 | | | fd736651-d0a2-11e9-b357-000c293bc199:1-3 |
+-------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)
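The Executed_Gtid_Set shown by SHOW MASTER STATUS is just `server_uuid:transaction-range`, and when scripting checks it can be split with standard shell parameter expansion. A sketch using the value from the output above:

```shell
# Split a GTID set into its server_uuid and transaction range.
gtid='fd736651-d0a2-11e9-b357-000c293bc199:1-3'
server_uuid="${gtid%:*}"     # everything before the last colon
txn_range="${gtid##*:}"      # everything after the last colon
echo "uuid=$server_uuid range=$txn_range"
```

Comparing the uuid against the slave's Master_UUID (and the range against Retrieved_Gtid_Set) is a quick way to script replication sanity checks.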
  • Edit the slave configuration, 192.168.199.131:/etc/my.cnf

gtid_mode=on

enforce-gtid-consistency=on

server-id = 813317

log-bin = /usr/local/mysql/data

binlog_format = row

log-slave-updates=1

binlog-gtid-simple-recovery=1

sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
  • After restarting, check that GTID is in effect

[root@zhanghp2 etc]# mysql -uroot

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.6.33-log MySQL Community Server (GPL)

mysql> show master status;
+-------------+----------+--------------+------------------+------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------+----------+--------------+------------------+------------------------------------------+
| data.000004 | 191 | | | fd736651-d0a2-11e9-b357-000c293bc199:1-3 |
+-------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)

That completes the configuration. Next, export the existing data and import it into the slave.

  • Export the data

#-A dumps all databases; replace -A with a database name (e.g. liuxn liuxn3316) to dump just one

/usr/local/mysql/bin/mysqldump -uroot -pchenw44 -S /tmp/mysql.sock --master-data=2 --single-transaction -A >/home/mysql/db.sql
  • Import the data

#If the source was a broken instance you may get an error saying the GTID value should be empty; run mysql> reset master on the target first

#check that GTID is enabled

show global variables like "%gtid%";

#import

mysql -S /tmp/mysql3317.sock </home/mysql/db.sql

#the dump includes the mysql schema itself, so connect like this afterwards

mysql -uroot -S /tmp/mysql3307.sock
  • Configure the slave and start syncing

#clear any previous slave configuration

reset slave all

#master_auto_position=1 locates the log position automatically, so there is no need to look it up

change master to master_host='10.10.1.81',MASTER_PORT=3316,master_user='repl',master_password='repl4slave',master_auto_position=1;
  • Check the replication settings with show slave status\G

mysql> show slave status\G;

*************************** 1. row ***************************

               Slave_IO_State: 

                  Master_Host: 192.168.199.129

                  Master_User: root

                  Master_Port: 3306

                Connect_Retry: 60

              Master_Log_File: data.000002

          Read_Master_Log_Pos: 1913

               Relay_Log_File: zhanghp2-relay-bin.000003

                Relay_Log_Pos: 4

        Relay_Master_Log_File: data.000002

             Slave_IO_Running: No

            Slave_SQL_Running: No

              Replicate_Do_DB: 

          Replicate_Ignore_DB: 

           Replicate_Do_Table: 

       Replicate_Ignore_Table: 

      Replicate_Wild_Do_Table: 

  Replicate_Wild_Ignore_Table: 

                   Last_Errno: 0

                   Last_Error: 

                 Skip_Counter: 0

          Exec_Master_Log_Pos: 1913

              Relay_Log_Space: 240

              Until_Condition: None

               Until_Log_File: 

                Until_Log_Pos: 0

           Master_SSL_Allowed: No

           Master_SSL_CA_File: 

           Master_SSL_CA_Path: 

              Master_SSL_Cert: 

            Master_SSL_Cipher: 

               Master_SSL_Key: 

        Seconds_Behind_Master: NULL

Master_SSL_Verify_Server_Cert: No

                Last_IO_Errno: 2003

                Last_IO_Error: error connecting to master 'root@192.168.199.129:3306' - retry-time: 60 retries: 44

               Last_SQL_Errno: 0

               Last_SQL_Error: 

  Replicate_Ignore_Server_Ids: 

             Master_Server_Id: 0

                  Master_UUID: fd736651-d0a2-11e9-b357-000c293bc199

             Master_Info_File: /usr/local/mysql/data/mysql/master.info

                    SQL_Delay: 0

          SQL_Remaining_Delay: NULL

      Slave_SQL_Running_State: 

           Master_Retry_Count: 86400

                  Master_Bind: 

      Last_IO_Error_Timestamp: 190912 18:42:07

     Last_SQL_Error_Timestamp: 

               Master_SSL_Crl: 

           Master_SSL_Crlpath: 

           Retrieved_Gtid_Set: 

            Executed_Gtid_Set: fd736651-d0a2-11e9-b357-000c293bc199:1-3

                Auto_Position: 1

1 row in set (0.00 sec)



ERROR: 

No query specified

The "No query specified" error above is harmless: it is produced by the stray semicolon after \G. Running show slave status\G without a trailing semicolon avoids it.
  • Start replication with start slave

mysql> start slave;

Query OK, 0 rows affected (0.08 sec)
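Once replication is started, the two thread flags can be checked from a script instead of scanning the full \G output by eye. A small sketch (the mysql invocation in the comment is illustrative; adjust the user and socket to your setup):

```shell
# slave_ok: read `show slave status\G` output on stdin and succeed only
# when both Slave_IO_Running and Slave_SQL_Running report Yes.
slave_ok() {
    awk '$1 == "Slave_IO_Running:" || $1 == "Slave_SQL_Running:" {
             if ($2 != "Yes") bad = 1
         }
         END { exit bad }'
}

# In practice something like:
#   mysql -uroot -S /tmp/mysql3317.sock -e 'show slave status\G' | slave_ok
printf "Slave_IO_Running: Yes\nSlave_SQL_Running: Yes\n" | slave_ok && echo "replicating"
```

Matching on the exact field name (rather than a looser regex) avoids false hits on Slave_SQL_Running_State, which also appears in the output.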
  • Check the replication status

mysql> show slave status\G;

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 10.199.96.147

                  Master_User: root

                  Master_Port: 3306

                Connect_Retry: 60

              Master_Log_File: data.000001

          Read_Master_Log_Pos: 344899576

               Relay_Log_File: goodairnbapp04-relay-bin.000005

                Relay_Log_Pos: 330811707

        Relay_Master_Log_File: data.000001

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes

              Replicate_Do_DB: 

          Replicate_Ignore_DB: 

           Replicate_Do_Table: 

       Replicate_Ignore_Table: 

      Replicate_Wild_Do_Table: 

  Replicate_Wild_Ignore_Table: 

                   Last_Errno: 0

                   Last_Error: 

                 Skip_Counter: 0

          Exec_Master_Log_Pos: 344899576

              Relay_Log_Space: 330811889

              Until_Condition: None

               Until_Log_File: 

                Until_Log_Pos: 0

           Master_SSL_Allowed: No

           Master_SSL_CA_File: 

           Master_SSL_CA_Path: 

              Master_SSL_Cert: 

            Master_SSL_Cipher: 

               Master_SSL_Key: 

        Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

                Last_IO_Errno: 0

                Last_IO_Error: 

               Last_SQL_Errno: 0

               Last_SQL_Error: 

  Replicate_Ignore_Server_Ids: 

             Master_Server_Id: 813316

                  Master_UUID: 641e091b-93fb-11e8-a27d-801844ee17a0

             Master_Info_File: /export/mysql/data/mysql/master.info

                    SQL_Delay: 0

          SQL_Remaining_Delay: NULL

      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it

           Master_Retry_Count: 86400

                  Master_Bind: 

      Last_IO_Error_Timestamp: 

     Last_SQL_Error_Timestamp: 

               Master_SSL_Crl: 

           Master_SSL_Crlpath: 

           Retrieved_Gtid_Set: 641e091b-93fb-11e8-a27d-801844ee17a0:1842-304615

            Executed_Gtid_Set: 641e091b-93fb-11e8-a27d-801844ee17a0:1-304615



#  Retrieved_Gtid_Set: 641e091b-93fb-11e8-a27d-801844ee17a0:1842-304615

means the replica has received transactions 1842 through 304615 from the master

# Executed_Gtid_Set: 641e091b-93fb-11e8-a27d-801844ee17a0:1-304615

means transactions 1 through 304615 have been executed on the replica
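The interval arithmetic above can be done mechanically. A throwaway sketch for single-range GTID sets of the form uuid:start-end (multi-range sets would need more parsing):

```shell
# gtid_count: number of transactions covered by a single-range GTID set
# such as "uuid:1842-304615"; a bare "uuid:5" counts as one transaction.
gtid_count() {
    echo "$1" | awk -F: '{
        n = split($2, r, "-")
        if (n == 2) print r[2] - r[1] + 1; else print 1
    }'
}

gtid_count "641e091b-93fb-11e8-a27d-801844ee17a0:1842-304615"   # retrieved -> 302774
gtid_count "641e091b-93fb-11e8-a27d-801844ee17a0:1-304615"      # executed  -> 304615
```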

Verification

  • Create a new table in the test database on the master

CREATE TABLE `auth_client` (

  `id` int(11) NOT NULL AUTO_INCREMENT,

  `code` varchar(255) DEFAULT NULL COMMENT '服务编码',

  `secret` varchar(255) DEFAULT NULL COMMENT '服务密钥',

  `name` varchar(255) DEFAULT NULL COMMENT '服务名',

  `locked` char(1) DEFAULT NULL COMMENT '是否锁定',

  `description` varchar(255) DEFAULT NULL COMMENT '描述',

  `crt_time` datetime DEFAULT NULL COMMENT '创建时间',

  `crt_user` varchar(255) DEFAULT NULL COMMENT '创建人',

  `crt_name` varchar(255) DEFAULT NULL COMMENT '创建人姓名',

  `crt_host` varchar(255) DEFAULT NULL COMMENT '创建主机',

  `upd_time` datetime DEFAULT NULL COMMENT '更新时间',

  `upd_user` varchar(255) DEFAULT NULL COMMENT '更新人',

  `upd_name` varchar(255) DEFAULT NULL COMMENT '更新姓名',

  `upd_host` varchar(255) DEFAULT NULL COMMENT '更新主机',

  `attr1` varchar(255) DEFAULT NULL,

  `attr2` varchar(255) DEFAULT NULL,

  `attr3` varchar(255) DEFAULT NULL,

  `attr4` varchar(255) DEFAULT NULL,

  `attr5` varchar(255) DEFAULT NULL,

  `attr6` varchar(255) DEFAULT NULL,

  `attr7` varchar(255) DEFAULT NULL,

  `attr8` varchar(255) DEFAULT NULL,

  PRIMARY KEY (`id`)

) ENGINE=InnoDB AUTO_INCREMENT=19 DEFAULT CHARSET=utf8mb4
  • Check on the replica that the table has been synchronized

mysql> show tables;

+----------------+

| Tables_in_test |

+----------------+

| auth_client |

+----------------+

1 row in set (0.00 sec)

Bingo, it works!

Q & A

  • MySQL fails to start: check the MySQL log

190910 16:55:17 mysqld_safe mysqld from pid file /export/mysql/data/mysql/goodairnbapp03.pid ended

190910 16:55:52 mysqld_safe Starting mysqld daemon with databases from /export/mysql/data/mysql

2019-09-10 16:55:53 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).

2019-09-10 16:55:53 0 [Note] /usr/local/mysql/mysql/bin/mysqld (mysqld 5.6.33-log) starting as process 33313 ...

/usr/local/mysql/mysql/bin/mysqld: File '/export/mysql/data.index' not found (Errcode: 13 - Permission denied)

# The data directory has the wrong ownership; fix the permissions

chown -R mysql:mysql /export/mysql
  • start slave reports an error: replication does not start

show slave status \G

Slave_IO_Running: Yes
Slave_SQL_Running: No

2016-06-09 00:07:07 23352 [ERROR] Error reading packet from server: Lost connection to MySQL server during query ( server_errno=2013)

2016-06-09 00:07:07 23352 [Note] Slave I/O thread killed while reading event

2016-06-09 00:07:07 23352 [Note] Slave I/O thread exiting, read up to log 'mysql-bin-190.000667', position 9049889

2016-06-09 00:07:07 23352 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.

2016-06-09 00:07:07 23352 [Warning] Slave SQL: If a crash happens this configuration does not guarantee that the relay log info will be consistent, Error_code: 0

2016-06-09 00:07:07 23352 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin-190.000666' at position 2935199, relay log './mysql3306-relay-bin.000002' position: 2935366

2016-06-09 00:07:07 23352 [Note] 'SQL_SLAVE_SKIP_COUNTER=1' executed at relay_log_file='./mysql3306-relay-bin.000002', relay_log_pos='2935366', master_log_name='mysql-bin-190.000666', master_log_pos='2935199' and new position at relay_log_file='./mysql3306-relay-bin.000002', relay_log_pos='2935750', master_log_name='mysql-bin-190.000666', master_log_pos='2935583' 

2016-06-09 00:07:07 23352 [Note] Slave I/O thread: connected to master 'sync@10.251.192.108:3306',replication started in log 'mysql-bin-190.000667' at position 9049889

2016-06-09 00:07:08 23352 [ERROR] Slave SQL: Could not execute Write_rows event on table asy.pm_camera_recordrate; Duplicate entry '913df478e36f4f888505874ddec59240' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin-190.000666, end_log_pos 2944382, Error_code: 1062

2016-06-09 00:07:08 23352 [Warning] Slave: Duplicate entry '913df478e36f4f888505874ddec59240' for key 'PRIMARY' Error_code: 1062

Fix: add the following line to the [mysqld] section of my.cnf and restart MySQL.


slave-skip-errors = 1062

If you expect only a small number of bad rows, you can instead skip one transaction at a time with the following; each execution skips exactly one transaction.


stop slave; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1 ; start slave ;

Skipping errors 1062 and 1032 can leave warnings in the log; adding the following line to the [mysqld] section of my.cnf and restarting MySQL resolves that as well.


binlog_format=mixed
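Before choosing between a blanket slave-skip-errors and skipping transactions one by one, it helps to know how many offending events there actually are. A grep sketch (the error-log path would be whatever log-error points at on your replica; the demo uses a stand-in file):

```shell
# count_1062: count duplicate-key (1062) errors recorded in an error log.
count_1062() {
    grep -c 'Error_code: 1062' "$1"
}

# Demo on a stand-in log:
tmp=$(mktemp)
printf "%s\n" \
  "[ERROR] Slave SQL: Duplicate entry 'x' for key 'PRIMARY', Error_code: 1062" \
  "[Note] Slave I/O thread: connected to master" \
  "[Warning] Slave: Duplicate entry 'y' for key 'PRIMARY' Error_code: 1062" > "$tmp"
count_1062 "$tmp"   # -> 2
rm -f "$tmp"
```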
Root-cause review
  • On the replica, kill -9 was used to stop the mysqld process, which may have forced a transaction rollback and may also have corrupted the relay log.
  • When restoring from backup, the latest backup was pulled and only the binlog file and position for replication were reset; the relay log starting point was not updated. The relay log therefore held data older than the restored snapshot, producing the 1062 duplicate-key and 1032 row-not-found errors.

lion published an article · 2019-08-24

ELK setup and deployment on macOS (including a cluster)

Overview

Elasticsearch

Elasticsearch is a real-time distributed search and analytics engine that lets you explore your data at a speed and scale never possible before. It is used for full-text search, structured search, analytics, and combinations of the three, and it supports clustering.

Logstash/Filebeats

Logstash is a powerful data-processing tool offering data transport, format processing, formatted output, and a rich plugin ecosystem; it is commonly used for log processing.

Kibana

Kibana is a free, open-source tool that provides a friendly web interface for the logs analyzed by Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

Architecture

(architecture diagram)

Installation and configuration

Versions

  • Elasticsearch
  • Logstash
  • Kibana
  • Filebeats

Prerequisites

  • Java 8
  • brew, the macOS package manager

brew

# Install a package
brew install your-software
# Show package installation info
brew info your-software
# Manage services. Rarely needed here: the ELK components ship their own startup scripts under bin/ in each install directory, and they are usually started with arguments anyway
brew services start/stop your-service

Elasticsearch

Install Elasticsearch on macOS
#Install elasticsearch on macOS
brew install elasticsearch        
Installation layout
Install directory: /usr/local/Cellar/elasticsearch/{elasticsearch-version}/
Log directory: /usr/local/var/log/elasticsearch/
Plugin directory: /usr/local/var/elasticsearch/plugins/
Config directory: /usr/local/etc/elasticsearch/
Start
brew services start elasticsearch
On first start the default port is 9200 and the username is elastic. The default password is unclear: sources covering versions before 6.0 give changeme, but for 6.0 and later it is uncertain. The default password can be changed through the _xpack API.
Version
elasticsearch --version
Version: 6.6.1, Build: oss/tar/1fd8f69/2019-02-13T17:10:04.160291Z, JVM: 1.8.0_131

Kibana

Install Kibana on macOS
brew install kibana
Installation layout
Install directory: /usr/local/Cellar/kibana/{kibana-version}/
Config directory: /usr/local/etc/kibana/
Note
Before starting Kibana, edit /usr/local/etc/kibana/kibana.yml: uncomment elasticsearch.username and elasticsearch.password and set them to the credentials changed above (username: elastic, password: 123456); see the kibana.yml fragment below
# kibana.yml
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "elastic"
elasticsearch.password: "changeme"
Start
brew services start kibana
On first start the default port is 5601. Open http://localhost:5601 in a browser to reach the Kibana management page; when prompted for credentials, enter elastic and 123456.
Note: the credentials in kibana.yml are the ones Kibana uses to access Elasticsearch, while the credentials typed into the web page are the ones we use to log in to Kibana; why they can share one password is unclear.
Version
kibana  --version
6.6.1

Logstash

Install Logstash on macOS
brew install logstash
Installation layout
Install directory: /usr/local/Cellar/logstash/{logstash-version}/ 
Config directory: /usr/local/etc/logstash
Configuration
vim ./first-pipeline.conf
  • Filebeat as the input source
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    host =>"127.0.0.1"
    port => "5044"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
  • The Logstash input stage also supports reading files directly, for example:
[root@access-1 logstash-7.2.1]# cat logstash_809.conf
input {
    file {
        path => ['/opt/access-server-1.0.5/log/akka-gb809.log'] # path of the log file to read
        type => "akka-gb809" # a tag used for routing in the output stage
        stat_interval => "2" # how often (seconds) to poll the file; default is 1
    }
    file {
        path => ['/opt/access-server-1.0.5/log/akka-gb808.log']
        type => "akka-gb808"
        stat_interval => "2"
    }
    file {
        path => ['/opt/access-server-1.0.5/log/akka.log']
        type => "akka"
        stat_interval => "2"
    }
    file {
        path => ['/opt/access-server-1.0.5/log/all_error.log']
        type => "all_error"
        stat_interval => "2"
        codec => multiline { # join wrapped lines into one event
            pattern => "(^\d{2}\:\d{2}\:\d{2}\.\d{3})UTC" # regex marking the start of an event
            negate => true
            what => "previous"
        }
    }
}

filter {
    date {
        match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
    }
}

output {
    if [type] == "akka-gb809" { # match the tag set on the input
        elasticsearch {
            hosts => "192.168.108.151:9200" # es node address
            index => "access-1-akka-gb809" # index name, used for display in kibana
        }
    }
    if [type] == "akka-gb808" {
        elasticsearch {
            hosts => "192.168.108.151:9200"
            index => "access-1-akka-gb808"
        }
    }
    if [type] == "akka" {
        elasticsearch {
            hosts => "192.168.108.151:9200"
            index => "access-1-akka"
        }
    }
    if [type] == "all_error" {
        elasticsearch {
            hosts => "192.168.108.151:9200"
            index => "access-1-all_error"
        }
    }
}
Start
logstash -e 'input { stdin { } } output { stdout {} }'

logstash -f config/first-pipeline.conf --config.test_and_exit

This command validates the configuration file.

logstash -f config/first-pipeline.conf --config.reload.automatic

This command starts Logstash and restarts it automatically whenever first-pipeline.conf changes.
Start in the background

nohup logstash -f config/first-pipeline.conf --config.reload.automatic > /dev/null &
Version
logstash 6.6.1

Filebeats

Install
#Install Filebeat on macOS
brew install filebeat
Installation layout
Install directory: /usr/local/Cellar/filebeat/{filebeat-version}/
Config directory: /usr/local/etc/filebeat/
Cache directory: /usr/local/var/lib/filebeat/
Configuration
vim /usr/local/etc/filebeat/filebeat.yml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /apps/intelligent-family-console/intelligentFamilyConsole/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  # level: debug
  # review: 1

  ### Multiline options

  # Mutiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

The key points are to configure the inputs (filebeat.prospectors here) with the logs to collect, disable output.elasticsearch, and enable output.logstash so that collected events are pushed to Logstash.

Start
filebeat -e -c ./filebeat6.3.2/filebeat.yml

nohup filebeat -e -c ./filebeat6.3.2/filebeat.yml > /dev/null &
Version
filebeat --version
Flag --version has been deprecated, use version subcommand
filebeat version 6.2.4 (amd64), libbeat 6.2.4  

A Kibana example

Create index patterns

The search view

  • The left panel lists the searchable filter conditions


Follow-ups

Scheduled deletion of old logs

Elasticsearch cluster deployment

Download and extract
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.1-linux-x86_64.tar.gz
tar -zvxf elasticsearch-7.2.1-linux-x86_64.tar.gz -C /usr/local/elk
Create the user and grant permissions

Elasticsearch refuses to start as root, so create a user and group on every node:

[root@elk-1 ~]# groupadd elasticsearch 
[root@elk-1 ~]# useradd elasticsearch -g elasticsearch 

Create the data and logs directories on every node:


[root@elk-1 ~]# mkdir -p /data/elasticsearch/{data,logs}

[root@elk-1 ~]# chown -R elasticsearch. /data/elasticsearch/

[root@elk-1 ~]# chown -R elasticsearch. /home/elk/elasticsearch/elasticsearch-7.2.1
Edit the elasticsearch.yml configuration file
  • master node configuration

[root@elk-1 config]# grep -Ev "^$|^[#;]" elasticsearch.yml

cluster.name: master-node

node.name: master

node.master: true

node.data: true

http.cors.enabled: true

http.cors.allow-origin: /.*/

path.data: /home/elk/data

network.host: 0.0.0.0

http.port: 9200

discovery.seed_hosts: ["192.168.108.151", "192.168.108.152", "192.168.108.153"]

cluster.initial_master_nodes: ["master", "data-node1","data-node2"]
  • node1 configuration

[root@elk-2 config]# grep -Ev "^$|^[#;]" elasticsearch.yml

cluster.name: master-node

node.name: data-node1

node.master: true

node.data: true

path.data: /home/elk/data

network.host: 0.0.0.0

http.port: 9200

discovery.seed_hosts: ["192.168.108.151", "192.168.108.152", "192.168.108.153"]

cluster.initial_master_nodes: ["master", "data-node1","data-node2"]
  • node2 configuration

[root@elk-3 config]# grep -Ev "^$|^[#;]" elasticsearch.yml

cluster.name: master-node

node.name: data-node2

node.master: true

node.data: true

path.data: /home/elk/data

network.host: 0.0.0.0

http.port: 9200

discovery.seed_hosts: ["192.168.108.151", "192.168.108.152", "192.168.108.153"]

cluster.initial_master_nodes: ["master", "data-node1","data-node2"]
  • Adjust the Elasticsearch JVM heap

[root@elk-1 config]# grep -Ev "^$|^[#;]" jvm.options
-Xms1g
-Xmx1g
  • Start Elasticsearch

[root@ELK1 elk]# su - elasticsearch

Last login: Mon Aug 12 09:58:23 CST 2019 on pts/1



[elasticsearch@ELK1 ~]$ cd /home/elk/elasticsearch-7.2.1/bin/

[elasticsearch@ELK1 bin]$ ./elasticsearch -d
  • Check the listening ports, 9200 (HTTP) and 9300 (transport)

[root@elk-1 config]# ss -tlunp|grep java

tcp LISTEN 0 128 :::9200 :::* users:(("java",pid=50257,fd=263))

tcp LISTEN 0 128 :::9300 :::* users:(("java",pid=50257,fd=212))
  • Basic cluster operations

# Check cluster health

curl 'localhost:9200/_cluster/health?pretty'



# Show detailed cluster state

curl 'localhost:9200/_cluster/state?pretty'



# List indices

curl -XGET http://localhost:9200/_cat/indices?v



# Create an index

curl -XPUT http://localhost:9200/customer?pretty



# Query a document

curl -XGET http://localhost:9200/customer/external/1?pretty



# Delete an index

curl -XDELETE http://localhost:9200/customer?pretty



# Delete a specific index

curl -XDELETE localhost:9200/nginx-log-2019.08



# Delete several indices

curl -XDELETE localhost:9200/system-log-2019.0606,system-log-2019.0607



# Delete all indices

curl -XDELETE localhost:9200/_all



# Avoid wildcards when deleting: a slip can wipe every index. To be safe, disable the _all and * wildcards in elasticsearch.yml:

action.destructive_requires_name: true
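For scripting against the health endpoint, the status field can be pulled out of the JSON with sed. A sketch, assuming the compact single-line JSON that curl returns without ?pretty (jq would be the sturdier tool if available):

```shell
# health_status: extract the "status" value from a _cluster/health
# response read on stdin.
health_status() {
    sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
}

# In practice: curl -s 'localhost:9200/_cluster/health' | health_status
echo '{"cluster_name":"master-node","status":"green","number_of_nodes":3}' | health_status   # -> green
```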

The Elasticsearch Head plugin


