SegmentFault docker questions
2024-03-24T16:06:16+08:00
https://segmentfault.com/feeds/tag/docker
https://creativecommons.org/licenses/by-nc-nd/4.0/
How does the Hyper-V virtual machine created by Docker Desktop on Windows get its network access?
https://segmentfault.com/q/1010000044741001
2024-03-24T16:06:16+08:00
2024-03-24T16:06:16+08:00
苍丿鳞
https://segmentfault.com/u/canglin_640a83bf54819
0
<p>How does Docker Desktop's Hyper-V mode on Windows connect to the network?</p><p>Docker Desktop's Hyper-V mode on Windows creates a virtual machine, but this VM shows no network adapter at all, yet it can still reach the internet. Why is that?<br><img width="723" height="645" src="/img/bVdbTkQ" alt="image.png" title="image.png"></p>
How to fix a Redis container in Docker whose IP address is stuck at 127.0.0.1?
https://segmentfault.com/q/1010000044735334
2024-03-21T21:37:58+08:00
2024-03-21T21:37:58+08:00
Ginfai
https://segmentfault.com/u/ginfai
0
<p>The IP address of the Redis container in docker always ends up as 127.0.0.1 and cannot be changed?</p><p>Background: I deployed a Spring Boot project to docker, and when running it I get <em>Caused by: org.redisson.client.RedisConnectionException: Unable to connect to Redis server: /127.0.0.1:6379</em><br>Attempts: 1. Modified redis.conf<img width="723" height="219" src="/img/bVdbRRy" alt="0c77096b52b4e182b803527e2a006d4.png" title="0c77096b52b4e182b803527e2a006d4.png"><br>2. Modified the project's configuration file<img width="723" height="404" src="/img/bVdbRRI" alt="image.png" title="image.png"><img width="343" height="203" src="/img/bVdbRRA" alt="cbfb10cf7845a6fc1fadd68b68a02bb.png" title="cbfb10cf7845a6fc1fadd68b68a02bb.png"><br>3. This is the Redis container, named myredis, and both mysql and myredis are connected to the heima network<img width="723" height="61" src="/img/bVdbRRM" alt="image.png" title="image.png"><br>After each configuration change I restarted all the containers, but it still had no effect</p><p><strong>The problem has been solved</strong><br>The issue was on the Java side</p><pre><code class="java">public class RedissonConfig {
    @Value("${spring.redis.host}")
    private String redisHost;

    @Bean
    public RedissonClient redissonClient() {
        // configuration
        Config config = new Config();
        // the Redis host used to be hard-coded here
        config.useSingleServer().setAddress("redis://" + redisHost + ":" + "6379");
        // create the RedissonClient instance
        return Redisson.create(config);
    }
}</code></pre>
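<p>A minimal sketch of the container-side setup this fix implies (the network name comes from the question; the application image name is a placeholder): run the application on the same user-defined network as the Redis container and pass the host name in via <code>spring.redis.host</code>, so nothing resolves to 127.0.0.1.</p><pre><code class="shell"># create the shared network once (the question calls it "heima")
docker network create heima
# Redis container, reachable by name from other containers on this network
docker run -d --name myredis --network heima redis:6.2
# application container; SPRING_REDIS_HOST maps to spring.redis.host via relaxed binding
docker run -d --name myapp --network heima \
  -e SPRING_REDIS_HOST=myredis \
  -p 8080:8080 myapp:latest</code></pre>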
Is the Java 1.8.0_291 memory error related to upgrading runc to 1.1.12?
https://segmentfault.com/q/1010000044714614
2024-03-15T10:01:38+08:00
2024-03-15T10:01:38+08:00
Nirvana
https://segmentfault.com/u/nirvana_6307339e23d7a
0
<p>Because of the recent runc vulnerability, I downloaded runc 1.1.12 and directly replaced the runc binary under /usr/bin. After restarting docker, starting a Java image (java version "1.8.0_291") fails with a resource error</p><h2>There is insufficient memory for the Java Runtime Environment to continue.</h2><h2>Cannot create GC thread. Out of system resources.</h2><h2>An error report file with more information is saved as:</h2><h2>/usr/local/jdk/hs_err_pid7.log</h2><p>What effect does runc have on Java start-up?</p><p>From what I can see the JVM already fails during initialization; starting a Java process with 5 GB of memory also fails. Could the runc upgrade be affecting the start-up of this older Java version?</p>
PHP-FPM in a Docker container on a CentOS 7 VM cannot handle PHP requests from the host's Nginx?
https://segmentfault.com/q/1010000044708195
2024-03-13T11:41:32+08:00
2024-03-13T11:41:32+08:00
carl_
https://segmentfault.com/u/carl_yuki
0
<p>php-fpm running in a docker container on a CentOS 7 VM cannot handle the PHP requests forwarded by nginx on the host?</p><p>Only PHP is installed in docker; nginx and mysql are installed on the VM itself.<br>The php container is created with<br>docker run -d --name php-fpm \<br>-v /usr/local/nginx/html:/var/www/html \<br> -v /docker/php/conf/www.conf:/usr/local/etc/php-fpm.d/www.conf \<br> -v /docker/php/conf/php.ini:/usr/local/etc/php/php.ini \<br> -p 9000:9000 --privileged=true php:7.4-fpm <br>The listen address in www.conf has also been changed to listen = 0.0.0.0:9000. Below is the nginx configuration<br>server {</p><pre><code>listen 80 ;
server_name localhost;
root /var/www/html;
location / {
index index.php index.html ;
}
location ~ \.php$ {
#172.17.0.2 is the IP address of the php-fpm container in docker
fastcgi_pass 127.0.0.1:9000;  #172.17.0.2:9000 and 0.0.0.0:9000 were also tried
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
</code></pre><p>}<br>php-fpm in docker still cannot handle the PHP requests from nginx. What could be the reason? (A configuration sketch follows below.)</p>
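<p>A hedged sketch of one common pitfall to check (not confirmed as the cause here): php-fpm resolves SCRIPT_FILENAME against its own filesystem, so the path passed to it must be the container path from the -v mount (/var/www/html), not a host-only path, and the fastcgi_pass target must be a port reachable from the host (127.0.0.1:9000 only works because of -p 9000:9000). The heredoc just writes an example file; the block belongs inside the existing server {} of the site.</p><pre><code class="shell">cat <<'EOF' > /tmp/php-location.conf.example   # merge into the existing server {} block
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;   # published container port
    fastcgi_index index.php;
    include fastcgi_params;
    # path as seen inside the php-fpm container, not on the host
    fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
}
EOF
nginx -t && nginx -s reload</code></pre>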
Where can I find a mysql 5.7 docker image for the M2 chip?
https://segmentfault.com/q/1010000044699543
2024-03-11T10:37:37+08:00
2024-03-11T10:37:37+08:00
Golang之路
https://segmentfault.com/u/vl39p0w1
0
<p>Where can I find a mysql 5.7 docker image for the M2 chip?</p><p>I can't find one on the official site; only mariadb seems to be available. (A possible workaround is sketched below.)</p>
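<p>A sketch of the usual workaround, assuming amd64 emulation (Rosetta/QEMU in Docker Desktop) is acceptable: the official mysql:5.7 image was never published for arm64, but it can be pulled and run as an amd64 image on Apple Silicon.</p><pre><code class="shell"># pull the amd64 variant explicitly
docker pull --platform linux/amd64 mysql:5.7
# run it under emulation; name, password and port are placeholder values
docker run -d --name mysql57 --platform linux/amd64 \
  -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql:5.7</code></pre>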
How to install docker-compose V2 with a single apt command on Debian 12?
https://segmentfault.com/q/1010000044672851
2024-03-01T14:33:52+08:00
2024-03-01T14:33:52+08:00
ponponon
https://segmentfault.com/u/ponponon
0
<p>How can I install docker-compose V2 with one apt command on Debian 12?</p><p>The package in the distro repository is still the Python-based v1; I want the Go-based v2. (A sketch using Docker's own apt repository follows after the log below.)</p><pre><code class="shell">ops@es-redis-20240228:~/opt/redis$ sudo apt install docker-compose
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
docker-buildx-plugin docker-compose-plugin libltdl7 libslirp0 pigz slirp4netns
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
binutils binutils-common binutils-x86-64-linux-gnu cgroupfs-mount containerd criu docker.io libbinutils libctf-nobfd0 libctf0 libgprofng0 libintl-perl libintl-xs-perl
libjansson4 libmodule-find-perl libmodule-scandeps-perl libnet1 libnftables1 libnl-3-200 libproc-processtable-perl libprotobuf32 libsort-naturally-perl libterm-readkey-perl
needrestart python3-attr python3-distro python3-distutils python3-docker python3-dockerpty python3-docopt python3-dotenv python3-json-pointer python3-jsonschema python3-lib2to3
python3-protobuf python3-pyrsistent python3-rfc3987 python3-texttable python3-uritemplate python3-webcolors python3-websocket runc tini
Suggested packages:
binutils-doc containernetworking-plugins docker-doc aufs-tools btrfs-progs debootstrap rinse rootlesskit xfsprogs zfs-fuse | zfsutils-linux needrestart-session | libnotify-bin
iucode-tool python-attr-doc python-jsonschema-doc
The following packages will be REMOVED:
containerd.io docker-ce docker-ce-cli docker-ce-rootless-extras
The following NEW packages will be installed:
binutils binutils-common binutils-x86-64-linux-gnu cgroupfs-mount containerd criu docker-compose docker.io libbinutils libctf-nobfd0 libctf0 libgprofng0 libintl-perl
libintl-xs-perl libjansson4 libmodule-find-perl libmodule-scandeps-perl libnet1 libnftables1 libnl-3-200 libproc-processtable-perl libprotobuf32 libsort-naturally-perl
libterm-readkey-perl needrestart python3-attr python3-distro python3-distutils python3-docker python3-dockerpty python3-docopt python3-dotenv python3-json-pointer
python3-jsonschema python3-lib2to3 python3-protobuf python3-pyrsistent python3-rfc3987 python3-texttable python3-uritemplate python3-webcolors python3-websocket runc tini
0 upgraded, 44 newly installed, 4 to remove and 2 not upgraded.
Need to get 75.6 MB of archives.
After this operation, 32.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 file:/etc/apt/mirrors/debian.list Mirrorlist [30 B]
Get:2 https://deb.debian.org/debian bookworm/main amd64 runc amd64 1.1.5+ds1-1+deb12u1 [2710 kB]
Get:3 https://deb.debian.org/debian bookworm/main amd64 containerd amd64 1.6.20~ds1-1+b1 [25.9 MB]
Get:4 https://deb.debian.org/debian bookworm/main amd64 tini amd64 0.19.0-1 [255 kB]
Get:5 https://deb.debian.org/debian bookworm/main amd64 docker.io amd64 20.10.24+dfsg1-1+b3 [36.2 MB]
Get:6 https://deb.debian.org/debian bookworm/main amd64 binutils-common amd64 2.40-2 [2487 kB]
Get:7 https://deb.debian.org/debian bookworm/main amd64 libbinutils amd64 2.40-2 [572 kB]
Get:8 https://deb.debian.org/debian bookworm/main amd64 libctf-nobfd0 amd64 2.40-2 [153 kB]
Get:9 https://deb.debian.org/debian bookworm/main amd64 libctf0 amd64 2.40-2 [89.8 kB]
Get:10 https://deb.debian.org/debian bookworm/main amd64 libgprofng0 amd64 2.40-2 [812 kB]</code></pre>
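<p>A sketch, assuming Docker's official apt repository is acceptable (the log above lists docker-ce packages being removed, so that repository may already be configured on this host): Compose v2 is shipped there as the docker-compose-plugin package and is invoked as "docker compose", while Debian's own docker-compose package is the Python v1.</p><pre><code class="shell"># add Docker's apt repository (skip if /etc/apt/sources.list.d/docker.list already exists)
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo tee /etc/apt/keyrings/docker.asc > /dev/null
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian bookworm stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
# Compose v2 as a docker CLI plugin
sudo apt install -y docker-compose-plugin
docker compose version</code></pre>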
After docker port mapping, the service can be reached from outside on a host port that the firewall never opened?
https://segmentfault.com/q/1010000044669217
2024-02-29T16:14:42+08:00
2024-02-29T16:14:42+08:00
练练的da
https://segmentfault.com/u/mr_q__4_6
0
<p>My server A hosts a front-end service at IP 192.168.111.115. Nginx is installed on the server directly (not in docker), and the firewall only opens port 80 plus the FTP-related ports. Because the front end needs a Node environment, I deployed nodejs with docker and added this to the Nginx config:</p><pre><code>location / {
proxy_pass http://localhost:3000;
}</code></pre><p>Then I ran the project with a docker run [other options] 3000:3000 [other options] command, without specifying a network mode (so the default bridge mode).<br> The project is reachable through 192.168.111.115 as expected, but by chance I discovered that 192.168.111.115:3000 is reachable as well.</p><h3>So I tried the following</h3><h4>Attempt 1:</h4><p>A fix found online is to add this to docker's daemon.json configuration file</p><pre><code>{
"iptables": false
}</code></pre><p>After restarting docker, docker info shows the following warnings:</p><pre><code>WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled</code></pre><p>With this change the project also became noticeably slower to access,<br> so I re-enabled docker's iptables handling.</p><h4>Attempt 2:</h4><p>I used firewall-cmd on the server to block inbound traffic on port 3000</p><pre><code>firewall-cmd --permanent --add-rich-rule='rule family="ipv4" port protocol="tcp" port="3000" reject'
firewall-cmd --permanent --add-rich-rule='rule family="ipv6" port protocol="tcp" port="3000" reject'</code></pre><p>This had no effect; port 3000 is still reachable.</p><h4>Attempt 3:</h4><p>Running docker run with --net=host solves the problem, but then there is no network isolation and security is worse.</p><p><strong>Help wanted:</strong><br> Is there a way to keep bridge mode while still restricting outside access, so the project is reachable through the plain IP but ip:3000 is not?</p>
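<p>A sketch of one common approach (container and image names are placeholders): when publishing the port, bind it to the loopback address. The host-side nginx proxy_pass to http://localhost:3000 keeps working, while 192.168.111.115:3000 is no longer reachable from other machines, because Docker's DNAT rule then only matches traffic addressed to 127.0.0.1.</p><pre><code class="shell">docker run -d --name node-app -p 127.0.0.1:3000:3000 node-app-image</code></pre>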
Please help: the opendkim service refuses to stay running?
https://segmentfault.com/q/1010000044520317
2024-01-02T14:39:38+08:00
2024-01-02T14:39:38+08:00
hán_xuān
https://segmentfault.com/u/hn_xun
0
<p>After running systemctl restart opendkim, systemctl status opendkim shows the following</p><pre><code>[root@92d7446911bd etc]# systemctl status -l opendkim
● opendkim.service - DomainKeys Identified Mail (DKIM) Milter
Loaded: loaded (/usr/lib/systemd/system/opendkim.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Tue 2024-01-02 05:59:19 UTC; 11min ago
Docs: man:opendkim(8)
man:opendkim.conf(5)
man:opendkim-genkey(8)
man:opendkim-genzone(8)
man:opendkim-testadsp(8)
man:opendkim-testkey
http://www.opendkim.org/docs.html
Process: 579 ExecStart=/usr/sbin/opendkim $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 579 (code=exited, status=0/SUCCESS)
Jan 02 05:59:19 92d7446911bd systemd[1]: Started DomainKeys Identified Mail (DKIM) Milter.
</code></pre><p>The opendkim state is shown as dead. No matter how many times I try, it stays this way.<br>When sending mail through postfix I get</p><pre><code>Jan 2 06:29:50 92d7446911bd postfix/smtpd[693]: warning: connect to Milter service inet:localhost:8891: Connection refused</code></pre><p>which points to the same thing.</p><p>I have looked through a lot of material, but it mostly just says the service is not running. I have already tried systemctl stop/start/restart opendkim, still the same result.<br>Any help is appreciated.</p>
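<p>A diagnostic sketch (no specific cause is confirmed by the output above): the unit exits with status SUCCESS yet nothing ends up listening on inet:localhost:8891, so the usual next step is to read opendkim's own log lines and confirm the Socket setting matches what postfix expects.</p><pre><code class="shell"># full log of the last start attempt
journalctl -u opendkim --no-pager -n 50
# the socket opendkim is configured to open; postfix expects inet:localhost:8891
grep -i '^Socket' /etc/opendkim.conf
# run the daemon in the foreground against the same config to surface errors
opendkim -f -x /etc/opendkim.conf</code></pre>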
Why does the second front-end project deployed with docker+nginx return a pile of back-end data instead of the page?
https://segmentfault.com/q/1010000044499704
2023-12-25T10:38:07+08:00
2023-12-25T10:38:07+08:00
庭
https://segmentfault.com/u/ting_6583ed85a018b
-2
<p>The front end deployed with docker+nginx can be reached, but instead of the page it returns a pile of back-end data<br><img width="723" height="490" src="/img/bVdaSy6" alt="image.png" title="image.png"><br><img width="723" height="124" src="/img/bVdaSy7" alt="image.png" title="image.png"><br><img width="723" height="331" src="/img/bVdaSL0" alt="image.png" title="image.png"><br><img width="723" height="121" src="/img/bVdaTdF" alt="image.png" title="image.png"><br><img width="723" height="330" src="/img/bVdaTdH" alt="image.png" title="image.png"><br>How should I change the configuration so the page is returned? I believe my configuration should be fine. I already configured one front-end project earlier, and what I am configuring now is its admin back office. Could the two be conflicting?<br>The front-end build has already been uploaded to nginx: the first dist is deployed, the admin build has also been uploaded, yet the page does not come up, and the felix location I proxy does not even exist on the back end</p>
Is there a third option besides Hibernate and MyBatis?
https://segmentfault.com/q/1010000044634400
2024-02-16T18:38:47+08:00
2024-02-16T18:38:47+08:00
ccmjga
https://segmentfault.com/u/ccmjga
-1
<p>I recently came across a modern Java stack based on Java 17, Spring Boot 3 and JOOQ that could serve as a third option besides mybatis and hibernate</p><ul><li><a href="https://link.segmentfault.com/?enc=N1G1u2uwMeafW6wIZdvqgQ%3D%3D.1q3BvT4XnFG%2FGs1ayvDzpa9Fkg5kner6can6qp9FE6I%3D" rel="nofollow">https://www.mjga.cc</a></li><li><a href="https://link.segmentfault.com/?enc=jvghDYU86ImHUbqjwWp1XA%3D%3D.bnGcc7G7CCS%2B5T8vOStCfgvztSUJ1iP%2B8P4NnzanSFugr1qwUSL7P%2FfHbL5aBndn" rel="nofollow">https://github.com/ccmjga/mjga-scaffold</a></li></ul><p>An introduction to it</p><ul><li><a href="https://segmentfault.com/a/1190000044572199">https://segmentfault.com/a/1190000044572199</a></li></ul><p>JOOQ is a CRUD library that feels a bit like C#'s LINQ</p><p><img src="/img/remote/1460000044572201" alt="" title=""></p><p>Compared with hibernate and mybatis, what do you see as the advantages and disadvantages of this third option? From what I can tell jooq is fairly popular in communities abroad and less used domestically, but the programming ideas behind it are worth borrowing.</p><p>Also, I suggest downloading the code from <a href="https://link.segmentfault.com/?enc=UhaaO5bgA18LBnCBmjaypg%3D%3D.7b63bWhplog4RqvT%2BXP7CQhY6rtEZGAoijdPSbpX2QY%3D" rel="nofollow">https://www.mjga.cc</a> so you get the latest version; the code on GitHub is mainly for show and may lag behind what mjga.cc provides.</p>
Linux network troubleshooting: an HTTP service in a docker container is reachable, but an HTTP service running directly on the host cannot be reached from other machines?
https://segmentfault.com/q/1010000044609137
2024-02-01T14:58:19+08:00
2024-02-01T14:58:19+08:00
ponponon
https://segmentfault.com/u/ponponon
0
<p>Linux network troubleshooting: an HTTP service running in a docker container can be reached, but an HTTP service running directly on the host cannot be reached from other machines?</p><pre><code class="shell">╰─➤ docker restart rabbitmq3-management 2 ↵
Error response from daemon: Cannot restart container rabbitmq3-management: driver failed programming external connectivity on endpoint rabbitmq3-management (f6bf8d5245c463e0ccdbfb5340e09d460dea3925124be09c92612a5ee5823c8e): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 15692 -j DNAT --to-destination 172.21.2.2:15692 ! -i br-ea23e34daef4: iptables: No chain/target/match by that name.
(exit status 1))</code></pre><p>Earlier one of the server's memory modules failed; I forced it to skip the memory self-test and managed to reboot it, so the server is currently running in this degraded state while waiting for a replacement module</p><pre><code class="python">if __name__ == "__main__":
uvicorn.run(
app='api:app',
host="0.0.0.0",
port=9600,
workers=1,
)
</code></pre><p>After the reboot, however, I noticed a problem. I run a fastapi service on this server, and accessing it from the server itself works</p><pre><code class="shell">─➤ http -v http://192.168.38.223:9600
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: 192.168.38.223:9600
User-Agent: HTTPie/2.6.0
HTTP/1.1 200 OK
content-length: 25
content-type: application/json
date: Thu, 01 Feb 2024 06:56:05 GMT
server: uvicorn
{
"message": "Hello World"
}</code></pre><p>But accessing port 9600 of this server's fastapi from other machines does not work</p><pre><code class="shell">─➤ http -v http://192.168.38.223:9600
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Host: 192.168.38.223:9600
User-Agent: HTTPie/3.2.2
HTTP/1.1 503 Service Unavailable
Connection: close
Content-Length: 0
Proxy-Connection: close</code></pre><p>However, the HTTP services that run in docker on this machine are all reachable from other machines</p><p>For example, a rabbitmq server runs in docker on this machine, and port 15672 of that rabbitmq server can be reached from other machines</p><pre><code class="shell">─➤ http -v http://192.168.38.223:15672
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Host: 192.168.38.223:15672
User-Agent: HTTPie/3.2.2
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 3056
Content-Security-Policy: script-src 'self' 'unsafe-eval' 'unsafe-inline'; object-src 'self'
Content-Type: text/html
Date: Thu, 01 Feb 2024 06:57:12 GMT
Etag: "3550788022"
Keep-Alive: timeout=4
Last-Modified: Thu, 24 Aug 2023 17:56:19 GMT
Proxy-Connection: keep-alive
Server: Cowboy
Vary: origin</code></pre><p>netstat confirms that port 9600 on 192.168.38.223 really is being listened on</p><pre><code class="shell">╰─➤ netstat -tulnp 1 ↵
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:19530 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5672 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:2224 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:15692 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8929 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:9091 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:9002 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:9300 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:9600 0.0.0.0:* LISTEN 1636021/python
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:36672 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:44127 0.0.0.0:* LISTEN 1598742/node
tcp 0 0 127.0.0.1:44359 0.0.0.0:* LISTEN 1598878/code-8b3775
tcp 0 0 127.0.0.1:41939 0.0.0.0:* LISTEN 1598538/node
tcp6 0 0 :::19530 :::* LISTEN -
tcp6 0 0 :::5601 :::* LISTEN -
tcp6 0 0 :::5432 :::* LISTEN -
tcp6 0 0 :::5672 :::* LISTEN -
tcp6 0 0 :::6379 :::* LISTEN -
tcp6 0 0 :::7891 :::* LISTEN 1646/clash
tcp6 0 0 :::7890 :::* LISTEN 1646/clash
tcp6 0 0 :::8000 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::2224 :::* LISTEN -
tcp6 0 0 :::3306 :::* LISTEN -
tcp6 0 0 :::15692 :::* LISTEN -
tcp6 0 0 :::15672 :::* LISTEN -
tcp6 0 0 :::8929 :::* LISTEN -
tcp6 0 0 :::9200 :::* LISTEN -
tcp6 0 0 :::9091 :::* LISTEN -
tcp6 0 0 :::9090 :::* LISTEN 1646/clash
tcp6 0 0 :::9002 :::* LISTEN -
tcp6 0 0 :::9000 :::* LISTEN -
tcp6 0 0 :::9300 :::* LISTEN -
udp 0 0 127.0.0.53:53 0.0.0.0:* -
udp6 0 0 :::7891 :::* 1646/clash </code></pre><p>The network configuration of my machine (192.168.38.223) is as follows:</p><pre><code class="shell">─➤ ip --color a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 90:8d:6e:c2:5d:24 brd ff:ff:ff:ff:ff:ff
altname enp24s0f0
inet 192.168.38.223/24 brd 192.168.38.255 scope global eno1
valid_lft forever preferred_lft forever
inet6 fe80::928d:6eff:fec2:5d24/64 scope link
valid_lft forever preferred_lft forever
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 90:8d:6e:c2:5d:25 brd ff:ff:ff:ff:ff:ff
altname enp24s0f1
4: eno3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 90:8d:6e:c2:5d:26 brd ff:ff:ff:ff:ff:ff
altname enp25s0f0
5: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 90:8d:6e:c2:5d:27 brd ff:ff:ff:ff:ff:ff
altname enp25s0f1
6: br-7abdd021226c: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:20:78:a1:26 brd ff:ff:ff:ff:ff:ff
inet 172.21.7.1/24 brd 172.21.7.255 scope global br-7abdd021226c
valid_lft forever preferred_lft forever
8: br-fae6ff4cbfe5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:a3:e3:7b:47 brd ff:ff:ff:ff:ff:ff
inet 172.21.8.1/24 brd 172.21.8.255 scope global br-fae6ff4cbfe5
valid_lft forever preferred_lft forever
inet6 fe80::42:a3ff:fee3:7b47/64 scope link
valid_lft forever preferred_lft forever
9: br-1ad62c94cb59: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e0:b5:64:9f brd ff:ff:ff:ff:ff:ff
inet 172.21.4.1/24 brd 172.21.4.255 scope global br-1ad62c94cb59
valid_lft forever preferred_lft forever
10: br-72097f53c6c8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:2d:88:79:b3 brd ff:ff:ff:ff:ff:ff
inet 172.21.5.1/24 brd 172.21.5.255 scope global br-72097f53c6c8
valid_lft forever preferred_lft forever
inet6 fe80::42:2dff:fe88:79b3/64 scope link
valid_lft forever preferred_lft forever
11: br-2c578316f047: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:f5:72:f5:5c brd ff:ff:ff:ff:ff:ff
inet 172.21.1.1/24 brd 172.21.1.255 scope global br-2c578316f047
valid_lft forever preferred_lft forever
12: br-33e0a46249f7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b6:a2:c1:e3 brd ff:ff:ff:ff:ff:ff
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-33e0a46249f7
valid_lft forever preferred_lft forever
13: br-7c40d6bf640c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:e7:a5:67:9c brd ff:ff:ff:ff:ff:ff
inet 172.21.3.1/24 brd 172.21.3.255 scope global br-7c40d6bf640c
valid_lft forever preferred_lft forever
inet6 fe80::42:e7ff:fea5:679c/64 scope link
valid_lft forever preferred_lft forever
14: br-ae3a1dd6e320: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:41:e9:55:06 brd ff:ff:ff:ff:ff:ff
inet 172.21.0.1/24 brd 172.21.0.255 scope global br-ae3a1dd6e320
valid_lft forever preferred_lft forever
inet6 fe80::42:41ff:fee9:5506/64 scope link
valid_lft forever preferred_lft forever
15: br-ea23e34daef4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:77:fc:27:bf brd ff:ff:ff:ff:ff:ff
inet 172.21.2.1/24 brd 172.21.2.255 scope global br-ea23e34daef4
valid_lft forever preferred_lft forever
inet6 fe80::42:77ff:fefc:27bf/64 scope link
valid_lft forever preferred_lft forever
16: br-eb248bb5b3fa: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:49:87:4d:ff brd ff:ff:ff:ff:ff:ff
inet 172.21.15.1/24 brd 172.21.15.255 scope global br-eb248bb5b3fa
valid_lft forever preferred_lft forever
inet6 fe80::42:49ff:fe87:4dff/64 scope link
valid_lft forever preferred_lft forever
17: br-0cbe1b0ddf78: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:fc:d6:05:b2 brd ff:ff:ff:ff:ff:ff
inet 172.21.9.1/24 brd 172.21.9.255 scope global br-0cbe1b0ddf78
valid_lft forever preferred_lft forever
inet6 fe80::42:fcff:fed6:5b2/64 scope link
valid_lft forever preferred_lft forever
18: br-298fd4684d8e: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:7e:14:43:4b brd ff:ff:ff:ff:ff:ff
inet 172.21.17.1/24 brd 172.21.17.255 scope global br-298fd4684d8e
valid_lft forever preferred_lft forever
19: br-3fa489a3f1b3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:37:b1:67:2f brd ff:ff:ff:ff:ff:ff
inet 172.21.10.1/24 brd 172.21.10.255 scope global br-3fa489a3f1b3
valid_lft forever preferred_lft forever
20: br-bff545d104b6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ee:12:b1:2e brd ff:ff:ff:ff:ff:ff
inet 172.21.19.1/24 brd 172.21.19.255 scope global br-bff545d104b6
valid_lft forever preferred_lft forever
inet6 fe80::42:eeff:fe12:b12e/64 scope link
valid_lft forever preferred_lft forever
21: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:16:5c:70:8e brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
23: vethc4971ff@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-0cbe1b0ddf78 state UP group default
link/ether 6e:1b:be:ce:63:4f brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::6c1b:beff:fece:634f/64 scope link
valid_lft forever preferred_lft forever
25: vethbb38cd9@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-72097f53c6c8 state UP group default
link/ether 46:af:51:eb:82:5a brd ff:ff:ff:ff:ff:ff link-netnsid 5
inet6 fe80::44af:51ff:feeb:825a/64 scope link
valid_lft forever preferred_lft forever
27: vetha994484@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ea23e34daef4 state UP group default
link/ether 2e:62:df:af:e7:77 brd ff:ff:ff:ff:ff:ff link-netnsid 10
inet6 fe80::2c62:dfff:feaf:e777/64 scope link
valid_lft forever preferred_lft forever
29: vetha936228@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-fae6ff4cbfe5 state UP group default
link/ether ea:9a:37:c2:7a:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 9
inet6 fe80::e89a:37ff:fec2:7af9/64 scope link
valid_lft forever preferred_lft forever
31: veth903d616@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-7c40d6bf640c state UP group default
link/ether fe:4f:15:d0:24:bb brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::fc4f:15ff:fed0:24bb/64 scope link
valid_lft forever preferred_lft forever
33: veth0fb5941@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ae3a1dd6e320 state UP group default
link/ether da:81:51:b4:6e:ff brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::d881:51ff:feb4:6eff/64 scope link
valid_lft forever preferred_lft forever
35: veth03a943c@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-bff545d104b6 state UP group default
link/ether d6:0c:97:ce:c1:73 brd ff:ff:ff:ff:ff:ff link-netnsid 7
inet6 fe80::d40c:97ff:fece:c173/64 scope link
valid_lft forever preferred_lft forever
39: veth3051cb6@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-0cbe1b0ddf78 state UP group default
link/ether a2:31:f3:14:e4:42 brd ff:ff:ff:ff:ff:ff link-netnsid 11
inet6 fe80::a031:f3ff:fe14:e442/64 scope link
valid_lft forever preferred_lft forever
41: veth90b7282@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-0cbe1b0ddf78 state UP group default
link/ether 5e:b6:3c:e7:8e:52 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::5cb6:3cff:fee7:8e52/64 scope link
valid_lft forever preferred_lft forever
43: vethb1255cd@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-fae6ff4cbfe5 state UP group default
link/ether 66:81:8d:a6:b2:54 brd ff:ff:ff:ff:ff:ff link-netnsid 8
inet6 fe80::6481:8dff:fea6:b254/64 scope link
valid_lft forever preferred_lft forever
45: veth08c2693@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-0cbe1b0ddf78 state UP group default
link/ether c6:a5:cb:0e:0f:2a brd ff:ff:ff:ff:ff:ff link-netnsid 6
inet6 fe80::c4a5:cbff:fe0e:f2a/64 scope link
valid_lft forever preferred_lft forever
6217: vethe2ecf76@if6216: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-eb248bb5b3fa state UP group default
link/ether 16:6f:0a:c6:7c:f2 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::146f:aff:fec6:7cf2/64 scope link
valid_lft forever preferred_lft forever</code></pre><p>What should I do about this? Any troubleshooting ideas or directions?</p><p>Everything was reachable before; the problem appeared only after the reboot caused by the memory failure.</p><p>It is not just port 9600: whatever port I move the fastapi to is unreachable. I even stopped the dockerized rabbitmq to free port 15672 and bound the fastapi to 15672, and then 15672 could no longer be reached from other machines either. (Yet 15672 served by the dockerized rabbitmq can be reached from other machines.)</p><hr><p>Using nc from my mac to check whether the server's (192.168.38.223) port is reachable returns connection refused</p><pre><code class="shell">╰─➤ nc -zv 192.168.38.223 9600 130 ↵
nc: connectx to 192.168.38.223 port 9600 (tcp) failed: Connection refused</code></pre><p>But with httpie the response is still a 503</p><pre><code class="shell">╰─➤ http -v http://192.168.38.223:9600
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Host: 192.168.38.223:9600
User-Agent: HTTPie/3.2.2
HTTP/1.1 503 Service Unavailable
Connection: close
Content-Length: 0
Proxy-Connection: close</code></pre><p>But accessing the server from itself works fine</p><pre><code class="shell">╭─pon@T4GPU ~
╰─➤ nc -zv 192.168.38.223 9600
Connection to 192.168.38.223 9600 port [tcp/*] succeeded!
╭─pon@T4GPU ~
╰─➤ http -v http://192.168.38.223:9600
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: 192.168.38.223:9600
User-Agent: HTTPie/2.6.0
HTTP/1.1 200 OK
content-length: 25
content-type: application/json
date: Fri, 02 Feb 2024 01:39:17 GMT
server: uvicorn
{
"message": "Hello World"
}</code></pre><hr><p>It still does not work even with the firewall rules cleared</p><pre><code class="shell">╭─pon@T4GPU ~
╰─➤ sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F
╭─pon@T4GPU ~
╰─➤ exit
Connection to 192.168.38.223 closed.
╭─ponponon@MBP13ARM ~
╰─➤ nc -zv 192.168.38.223 9600
nc: connectx to 192.168.38.223 port 9600 (tcp) failed: Connection refused</code></pre><hr><p>From the same mac (192.168.35.150) I can reach the fastapi on another server (192.168.38.191) without any problem</p><pre><code class="shell">╭─ponponon@MBP13ARM ~
╰─➤ nc -zv 192.168.38.191 9901 1 ↵
Connection to 192.168.38.191 port 9901 [tcp/*] succeeded!
╭─ponponon@MBP13ARM ~
╰─➤ http -v http://192.168.38.191:9901
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Host: 192.168.38.191:9901
User-Agent: HTTPie/3.2.2
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 25
Content-Type: application/json
Date: Fri, 02 Feb 2024 01:50:07 GMT
Keep-Alive: timeout=4
Proxy-Connection: keep-alive
Server: uvicorn
{
"message": "hello world"
}</code></pre><p>So it should not be a problem with the external network</p><hr><p>Everything used to work fine</p><p>Now the situation looks like this</p><p><img width="723" height="340" src="/img/bVdblEf" alt="未命名文件(81).png" title="未命名文件(81).png"></p><hr><p>Update: routing table of the 192.168.38.223 machine</p><pre><code class="shell">(vtboss-plugin-3DGTRD6U) ╭─pon@T4GPU ~/code/work/vobile/vt/vtboss-plugin ‹master*›
╰─➤ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.38.1 0.0.0.0 UG 0 0 0 eno1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.21.0.0 0.0.0.0 255.255.255.0 U 0 0 0 br-ae3a1dd6e320
172.21.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br-2c578316f047
172.21.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br-ea23e34daef4
172.21.3.0 0.0.0.0 255.255.255.0 U 0 0 0 br-7c40d6bf640c
172.21.4.0 0.0.0.0 255.255.255.0 U 0 0 0 br-1ad62c94cb59
172.21.5.0 0.0.0.0 255.255.255.0 U 0 0 0 br-72097f53c6c8
172.21.7.0 0.0.0.0 255.255.255.0 U 0 0 0 br-7abdd021226c
172.21.8.0 0.0.0.0 255.255.255.0 U 0 0 0 br-fae6ff4cbfe5
172.21.9.0 0.0.0.0 255.255.255.0 U 0 0 0 br-0cbe1b0ddf78
172.21.10.0 0.0.0.0 255.255.255.0 U 0 0 0 br-3fa489a3f1b3
172.21.15.0 0.0.0.0 255.255.255.0 U 0 0 0 br-eb248bb5b3fa
172.21.17.0 0.0.0.0 255.255.255.0 U 0 0 0 br-298fd4684d8e
172.21.19.0 0.0.0.0 255.255.255.0 U 0 0 0 br-bff545d104b6
192.168.38.0 0.0.0.0 255.255.255.0 U 0 0 0 eno1
192.168.49.0 0.0.0.0 255.255.255.0 U 0 0 0 br-33e0a46249f7
(vtboss-plugin-3DGTRD6U) ╭─pon@T4GPU ~/code/work/vobile/vt/vtboss-plugin ‹master*›
╰─➤ netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default localhost 0.0.0.0 UG 0 0 0 eno1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.21.0.0 0.0.0.0 255.255.255.0 U 0 0 0 br-ae3a1dd6e320
172.21.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br-2c578316f047
172.21.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br-ea23e34daef4
172.21.3.0 0.0.0.0 255.255.255.0 U 0 0 0 br-7c40d6bf640c
172.21.4.0 0.0.0.0 255.255.255.0 U 0 0 0 br-1ad62c94cb59
172.21.5.0 0.0.0.0 255.255.255.0 U 0 0 0 br-72097f53c6c8
172.21.7.0 0.0.0.0 255.255.255.0 U 0 0 0 br-7abdd021226c
172.21.8.0 0.0.0.0 255.255.255.0 U 0 0 0 br-fae6ff4cbfe5
172.21.9.0 0.0.0.0 255.255.255.0 U 0 0 0 br-0cbe1b0ddf78
172.21.10.0 0.0.0.0 255.255.255.0 U 0 0 0 br-3fa489a3f1b3
172.21.15.0 0.0.0.0 255.255.255.0 U 0 0 0 br-eb248bb5b3fa
172.21.17.0 0.0.0.0 255.255.255.0 U 0 0 0 br-298fd4684d8e
172.21.19.0 0.0.0.0 255.255.255.0 U 0 0 0 br-bff545d104b6
192.168.38.0 0.0.0.0 255.255.255.0 U 0 0 0 eno1
192.168.49.0 0.0.0.0 255.255.255.0 U 0 0 0 br-33e0a46249f7
(vtboss-plugin-3DGTRD6U) ╭─pon@T4GPU ~/code/work/vobile/vt/vtboss-plugin ‹master*›
╰─➤ ip -s route show
default via 192.168.38.1 dev eno1 proto static
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.21.0.0/24 dev br-ae3a1dd6e320 proto kernel scope link src 172.21.0.1
172.21.1.0/24 dev br-2c578316f047 proto kernel scope link src 172.21.1.1 linkdown
172.21.2.0/24 dev br-ea23e34daef4 proto kernel scope link src 172.21.2.1
172.21.3.0/24 dev br-7c40d6bf640c proto kernel scope link src 172.21.3.1
172.21.4.0/24 dev br-1ad62c94cb59 proto kernel scope link src 172.21.4.1 linkdown
172.21.5.0/24 dev br-72097f53c6c8 proto kernel scope link src 172.21.5.1
172.21.7.0/24 dev br-7abdd021226c proto kernel scope link src 172.21.7.1 linkdown
172.21.8.0/24 dev br-fae6ff4cbfe5 proto kernel scope link src 172.21.8.1
172.21.9.0/24 dev br-0cbe1b0ddf78 proto kernel scope link src 172.21.9.1
172.21.10.0/24 dev br-3fa489a3f1b3 proto kernel scope link src 172.21.10.1 linkdown
172.21.15.0/24 dev br-eb248bb5b3fa proto kernel scope link src 172.21.15.1
172.21.17.0/24 dev br-298fd4684d8e proto kernel scope link src 172.21.17.1 linkdown
172.21.19.0/24 dev br-bff545d104b6 proto kernel scope link src 172.21.19.1
192.168.38.0/24 dev eno1 proto kernel scope link src 192.168.38.223
192.168.49.0/24 dev br-33e0a46249f7 proto kernel scope link src 192.168.49.1 linkdown
</code></pre><p>Then I captured packets on the problem machine</p><pre><code class="shell">(vtboss-plugin-3DGTRD6U) ╭─pon@T4GPU ~/code/work/vobile/vt/vtboss-plugin ‹master*›
╰─➤ sudo tcpdump -i eno1 port 9600 -n -vvv -w test.cap 130 ↵
tcpdump: listening on eno1, link-type EN10MB (Ethernet), snapshot length 262144 bytes</code></pre><p>Then I opened the cap file captured on the server on the mac, with the following result</p><p><img width="723" height="411" src="/img/bVdblJO" alt="图片.png" title="图片.png"></p><hr><p>I also captured directly with wireshark on the mac, and it now looks like this</p><pre><code class="shell">╰─➤ http -v http://192.168.38.223:9600
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Host: 192.168.38.223:9600
User-Agent: HTTPie/3.2.2
HTTP/1.1 503 Service Unavailable
Connection: close
Content-Length: 0
Proxy-Connection: close</code></pre><p><img width="723" height="200" src="/img/bVdblNM" alt="图片.png" title="图片.png"></p><hr><p>The listening port itself is fine</p><pre><code class="shell">╰─➤ netstat -tulnp | grep 2320406
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:9600 0.0.0.0:* LISTEN 2320406/python</code></pre><hr><p>Update: output of ifconfig eno1</p><pre><code class="shell">─➤ ifconfig eno1
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.38.223 netmask 255.255.255.0 broadcast 192.168.38.255
inet6 fe80::928d:6eff:fec2:5d24 prefixlen 64 scopeid 0x20<link>
ether 90:8d:6e:c2:5d:24 txqueuelen 1000 (Ethernet)
RX packets 1912389 bytes 541910038 (541.9 MB)
RX errors 0 dropped 48496 overruns 0 frame 0
TX packets 1097342 bytes 510909874 (510.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 35 </code></pre><hr><p>Update: output of ethtool -S eno1</p><pre><code class="shell">╰─➤ ethtool -S eno1
NIC statistics:
rx_octets: 541948234
rx_fragments: 0
rx_ucast_packets: 995624
rx_mcast_packets: 677808
rx_bcast_packets: 239396
rx_fcs_errors: 0
rx_align_errors: 0
rx_xon_pause_rcvd: 0
rx_xoff_pause_rcvd: 0
rx_mac_ctrl_rcvd: 0
rx_xoff_entered: 0
rx_frame_too_long_errors: 0
rx_jabbers: 0
rx_undersize_packets: 0
rx_in_length_errors: 0
rx_out_length_errors: 0
rx_64_or_less_octet_packets: 0
rx_65_to_127_octet_packets: 0
rx_128_to_255_octet_packets: 0
rx_256_to_511_octet_packets: 0
rx_512_to_1023_octet_packets: 0
rx_1024_to_1522_octet_packets: 0
rx_1523_to_2047_octet_packets: 0
rx_2048_to_4095_octet_packets: 0
rx_4096_to_8191_octet_packets: 0
rx_8192_to_9022_octet_packets: 0
tx_octets: 511129734
tx_collisions: 0
tx_xon_sent: 0
tx_xoff_sent: 0
tx_flow_control: 0
tx_mac_errors: 0
tx_single_collisions: 0
tx_mult_collisions: 0
tx_deferred: 0
tx_excessive_collisions: 0
tx_late_collisions: 0
tx_collide_2times: 0
tx_collide_3times: 0
tx_collide_4times: 0
tx_collide_5times: 0
tx_collide_6times: 0
tx_collide_7times: 0
tx_collide_8times: 0
tx_collide_9times: 0
tx_collide_10times: 0
tx_collide_11times: 0
tx_collide_12times: 0
tx_collide_13times: 0
tx_collide_14times: 0
tx_collide_15times: 0
tx_ucast_packets: 1097937
tx_mcast_packets: 83
tx_bcast_packets: 9
tx_carrier_sense_errors: 0
tx_discards: 0
tx_errors: 0
dma_writeq_full: 0
dma_write_prioq_full: 0
rxbds_empty: 0
rx_discards: 0
rx_errors: 0
rx_threshold_hit: 0
dma_readq_full: 0
dma_read_prioq_full: 0
tx_comp_queue_full: 0
ring_set_send_prod_index: 0
ring_status_update: 0
nic_irqs: 0
nic_avoided_irqs: 0
nic_tx_threshold_hit: 0
mbuf_lwm_thresh_hit: 0</code></pre><hr><p>This is my cpu information</p><pre><code class="shell">─➤ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
Stepping: 7
CPU max MHz: 3900.0000
CPU min MHz: 1000.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor
ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shad
ow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local
dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization features:
Virtualization: VT-x
Caches (sum of all):
L1d: 1 MiB (32 instances)
L1i: 1 MiB (32 instances)
L2: 32 MiB (32 instances)
L3: 44 MiB (2 instances)
NUMA:
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
Vulnerabilities:
Gather data sampling: Mitigation; Microcode
Itlb multihit: KVM: Mitigation: VMX disabled
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Retbleed: Mitigation; Enhanced IBRS
Spec rstack overflow: Not affected
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Srbds: Not affected
Tsx async abort: Mitigation; TSX disabled</code></pre><p>Memory should be sufficient; more than 100 GB is available</p><pre><code class="shell">(poster_keyword_search-vs4TvrqN) ╭─pon@T4GPU ~/code/work/vobile/vt/poster_keyword_search ‹master›
╰─➤ free -h 2 ↵
total used free shared buff/cache available
Mem: 125Gi 17Gi 102Gi 130Mi 5.9Gi 107Gi
Swap: 8.0Gi 0B 8.0Gi</code></pre>
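<p>A diagnostic sketch rather than a confirmed fix: two details above point at the host's iptables/NAT state, namely the "iptables: No chain/target/match by that name" error when restarting the container (which typically appears after Docker's own chains have been flushed, e.g. by the iptables -F shown earlier) and the 503 responses carrying a Proxy-Connection header (which usually come from some proxy in the path rather than from uvicorn). Inspecting the NAT table and letting Docker rebuild its chains is a reasonable next step.</p><pre><code class="shell"># look for REDIRECT/DNAT rules that could be intercepting inbound traffic
sudo iptables -t nat -L -n -v --line-numbers
# confirm Docker's own chains still exist
sudo iptables -L DOCKER -n -v
# restarting the daemon recreates the DOCKER chains after a flush
sudo systemctl restart docker</code></pre>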
Why can't MySQL 8.2 be logged into remotely?
https://segmentfault.com/q/1010000044608752
2024-02-01T13:25:58+08:00
2024-02-01T13:25:58+08:00
MonkeyDLaDen
https://segmentfault.com/u/monkeydladen
0
<p>mysql 8.2 cannot be logged into remotely. MySQL was started via docker and starts up normally:<br><img width="711" height="129" src="/img/bVdbkVX" alt="image.png" title="image.png"><br>I tried the solutions found online:<br>1. After logging in on the server I saw two root entries in the user table, one with host % and one with localhost; I deleted the localhost root and restarted MySQL, with no effect<br>2. Tried adding skip-name-resolve to the configuration file, only to find the option was already there; restarted MySQL, no effect<br>3. telnet to port 3306 from my local machine succeeds</p>
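<p>A hedged checklist sketch (the container name and password are placeholders, not taken from the question): with the port reachable via telnet, the usual remaining suspects are the account's host entry and its authentication plugin, both of which can be inspected from inside the container while watching the server log during a remote attempt.</p><pre><code class="shell"># list accounts, allowed hosts and auth plugins
docker exec -it mysql mysql -uroot -p -e "SELECT user, host, plugin FROM mysql.user;"
# ensure a root@'%' account exists with a known password
docker exec -it mysql mysql -uroot -p -e "CREATE USER IF NOT EXISTS 'root'@'%' IDENTIFIED BY 'your-password'; GRANT ALL ON *.* TO 'root'@'%'; FLUSH PRIVILEGES;"
# watch the server log while a remote client tries to connect
docker logs -f mysql</code></pre>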
Why does deploying a nodejs project with docker fail?
https://segmentfault.com/q/1010000044604062
2024-01-30T23:31:56+08:00
2024-01-30T23:31:56+08:00
MonkeyDLaDen
https://segmentfault.com/u/monkeydladen
0
<p>Deploying a nodejs project built with thinkjs via docker-compose fails</p><p>I cannot get a node.js project deployed with docker-compose. The project used to run fine when deployed directly on the host with pm2, and it uses the thinkjs framework. Below is my docker-compose file:</p><pre><code>services:
node:
image: node:18.17.1
volumes:
- /data/jxxw/node/h5-api/:/data/jxxw/node
working_dir: /data/jxxw/node
command: npm start
ports:
- "8362:8362"</code></pre><p>h5-api is the folder I got after unpacking the node project. After running docker-compose up -d the container is not running, and the logs show the following:</p><pre><code>> jxxw-api@1.0.0 start
> node production.js
node:internal/modules/cjs/loader:1080
throw err;
^
Error: Cannot find module 'thinkjs'
Require stack:
- /data/jxxw/node/production.js
at Module._resolveFilename (node:internal/modules/cjs/loader:1077:15)
at Module._load (node:internal/modules/cjs/loader:922:27)
at Module.require (node:internal/modules/cjs/loader:1143:19)
at require (node:internal/modules/cjs/helpers:121:18)
at Object.<anonymous> (/data/jxxw/node/production.js:2:21)
at Module._compile (node:internal/modules/cjs/loader:1256:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
at Module.load (node:internal/modules/cjs/loader:1119:32)
at Module._load (node:internal/modules/cjs/loader:960:12)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12) {
code: 'MODULE_NOT_FOUND',
requireStack: [ '/data/jxxw/node/production.js' ]
}
Node.js v18.17.1
}</code></pre><p>From the message it looks like the thinkjs dependency was not picked up. According to the node developer, his packaging flow is:<br>1. npm install<br>2. npm pack<br>after which he hands me the generated tgz file.<br>I also tried to solve it with a dockerfile, as follows:</p><pre><code># use the official Node.js image
FROM node:18.17.1
# set the working directory
WORKDIR /data/jxxw/node
# copy everything in the current directory into the container
COPY . /data/jxxw/node
# install project dependencies
RUN npm install
# expose the port for external access
EXPOSE 8362</code></pre><p>npm install fails:</p><pre><code> => ERROR [4/4] RUN npm install                                                                1542.3s
------
> [4/4] RUN npm install:
1542.3 npm ERR! code ECONNREFUSED
1542.3 npm ERR! syscall connect
1542.3 npm ERR! errno ECONNREFUSED
1542.3 npm ERR! FetchError: request to https://registry.npmjs.org/ava failed, reason: connect ECONNREFUSED 104.16.28.34:443
1542.3 npm ERR! at ClientRequest.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/minipass-fetch/lib/index.js:130:14)
1542.3 npm ERR! at ClientRequest.emit (node:events:514:28)
1542.3 npm ERR! at TLSSocket.socketErrorListener (node:_http_client:501:9)
1542.3 npm ERR! at TLSSocket.emit (node:events:526:35)
1542.3 npm ERR! at emitErrorNT (node:internal/streams/destroy:151:8)
1542.3 npm ERR! at emitErrorCloseNT (node:internal/streams/destroy:116:3)
1542.3 npm ERR! at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
1542.3 npm ERR! FetchError: request to https://registry.npmjs.org/ava failed, reason: connect ECONNREFUSED 104.16.28.34:443
1542.3 npm ERR! at ClientRequest.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/minipass-fetch/lib/index.js:130:14)
1542.3 npm ERR! at ClientRequest.emit (node:events:514:28)
1542.3 npm ERR! at TLSSocket.socketErrorListener (node:_http_client:501:9)
1542.3 npm ERR! at TLSSocket.emit (node:events:526:35)
1542.3 npm ERR! at emitErrorNT (node:internal/streams/destroy:151:8)
1542.3 npm ERR! at emitErrorCloseNT (node:internal/streams/destroy:116:3)
1542.3 npm ERR! at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
1542.3 npm ERR! code: 'ECONNREFUSED',
1542.3 npm ERR! errno: 'ECONNREFUSED',
1542.3 npm ERR! syscall: 'connect',
1542.3 npm ERR! address: '104.16.28.34',
1542.3 npm ERR! port: 443,
1542.3 npm ERR! type: 'system',
1542.3 npm ERR! requiredBy: '.'
1542.3 npm ERR! }
1542.3 npm ERR!
1542.3 npm ERR! If you are behind a proxy, please make sure that the
1542.3 npm ERR! 'proxy' config is set properly. See: 'npm help config'
1542.3
1542.3 npm ERR! A complete log of this run can be found in: /root/.npm/_logs/2024-01-30T14_55_34_130Z-debug-0.log
------
Dockerfile:8
--------------------
6 | COPY . /data/jxxw/node
7 | # install project dependencies
8 | >>> RUN npm install
9 | # expose the port for external access
10 | EXPOSE 8362
--------------------
ERROR: failed to solve: process "/bin/sh -c npm install" did not complete successfully: exit code: 1
ERROR: Service 'node' failed to build : Build failed
</code></pre><p>Contents of the folder:<br><img width="623" height="46" src="/img/bVdbkVI" alt="image.png" title="image.png"><br><img width="337" height="21" src="/img/bVdbkVK" alt="image.png" title="image.png"></p><p>Could someone point out why? My understanding is that thinkjs was never properly installed. And if I want to write only a docker-compose file with no dockerfile, how should I go about it?</p>
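<p>A sketch of the docker-compose-only route (the registry URL is an assumption; any reachable npm mirror works): since the unpacked project arrives without working dependencies for the target platform, install them inside the container at start-up instead of baking an image, which also sidesteps the ECONNREFUSED to registry.npmjs.org seen during the image build.</p><pre><code class="shell">cat <<'EOF' > docker-compose.yml
services:
  node:
    image: node:18.17.1
    volumes:
      - /data/jxxw/node/h5-api/:/data/jxxw/node
    working_dir: /data/jxxw/node
    # set a reachable registry, install deps, then start the app
    command: sh -c "npm config set registry https://registry.npmmirror.com && npm install --omit=dev && npm start"
    ports:
      - "8362:8362"
EOF
docker compose up -d</code></pre>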
Can docker container logs be stored in a different path?
https://segmentfault.com/q/1010000044592398
2024-01-26T17:01:47+08:00
2024-01-26T17:01:47+08:00
练练的da
https://segmentfault.com/u/mr_q__4_6
0
<p>I run php and nginx containers with docker.<br>Logs are usually viewed with docker logs php or docker logs nginx,<br>and by default these logs are stored under the container directories,<br>e.g. "/var/lib/docker/containers/php/php.json" and "/var/lib/docker/containers/nginx/nginx.json".<br>Is there a way to store the logs in a different location,<br>such as under the "/home/logs" directory?</p>
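<p>A sketch of two common approaches (the /home/logs path comes from the question; image names and tags are placeholders): docker logs always reads the json-file stored under /var/lib/docker/containers/ for that container id, and that location is not configurable per container, so either bind-mount a host directory for the application's own log files, or at least cap and rotate the engine-side json logs with logging-driver options.</p><pre><code class="shell"># write the application's own log files straight to the host
docker run -d --name nginx -v /home/logs/nginx:/var/log/nginx nginx
# keep using `docker logs`, but bounded in size
docker run -d --name php --log-opt max-size=10m --log-opt max-file=3 php:7.4-fpm</code></pre>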
How to fix the failed backup restore when migrating GitLab into docker?
https://segmentfault.com/q/1010000044592846
2024-01-26T21:48:42+08:00
2024-01-26T21:48:42+08:00
MonkeyDLaDen
https://segmentfault.com/u/monkeydladen
0
<p>The company recently needs to migrate the code repository from a cloud server to the intranet.<br>GitLab on the cloud server runs directly on the host; on the intranet it has to be deployed with docker. Both sides run gitlab ce 16.6.2.<br>First, I generated the backup archive on the cloud server with:</p><pre><code>gitlab-rake gitlab:backup:create
</code></pre><p>Then I downloaded the archive to the intranet server and ran gitlab with docker-compose.<br>docker-compose.yml</p><pre><code>version: '3'
services:
gitlab:
container_name: gitlab
image: gitlab/gitlab-ce:16.6.2-ce.0
restart: always
ports:
- "80:80"
environment:
- TZ=Asia/Shanghai
volumes:
- /data/gitlab/config:/etc/gitlab
- /data/gitlab/logs:/var/log/gitlab
- /data/gitlab/data:/var/opt/gitlab
networks:
- gitlab_network
networks:
gitlab_network:
driver: bridge
</code></pre><p>The container starts successfully and can be accessed normally.<br>I copied gitlab-secrets.json and gitlab.rb into /data/gitlab/config and restarted the container so they take effect.<br>Then I copied the backup archive 1706197160_2024_01_25_16.6.2_gitlab_backup.tar to /data/gitlab/data/backups,<br>checked the git user's uid inside the container and gave the archive ownership to the git user,<br>and then ran</p><pre><code>gitlab-rake gitlab:backup:restore BACKUP=1706197160_2024_01_25_16.6.2
</code></pre><p>which reports:</p><pre><code>2024-01-26 11:58:25 UTC -- Unpacking backup ...
tar: Skipping to next header
tar: Skipping to next header
tar: Skipping to next header
tar: A lone zero block at 6596142
tar: Exiting with failure status due to previous errors
2024-01-26 11:58:29 UTC -- Unpacking backup failed
2024-01-26 11:58:29 UTC -- Deleting backup and restore PID file ... done
</code></pre><p>backup_json.log under /data/gitlab/logs/gitlab-rails shows:</p><pre><code>{"severity":"INFO","time":"2024-01-26T11:58:25.470Z","correlation_id":null,"message":"Unpacking backup ... "}
{"severity":"INFO","time":"2024-01-26T11:58:29.994Z","correlation_id":null,"message":"Unpacking backup failed"}
</code></pre><p>I do not know where to get more detailed error information; please point me to it and I will collect it. If anyone has run into the same problem, an answer would be much appreciated.</p>
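<p>A diagnostic sketch (not a confirmed cause): "tar: Skipping to next header" almost always means the archive itself is corrupted or was truncated during transfer, so before digging for more logs it is worth comparing checksums on both ends and test-reading the tar.</p><pre><code class="shell"># on the cloud server
md5sum /var/opt/gitlab/backups/1706197160_2024_01_25_16.6.2_gitlab_backup.tar
# on the intranet host, after copying
md5sum /data/gitlab/data/backups/1706197160_2024_01_25_16.6.2_gitlab_backup.tar
# a healthy archive lists without errors
tar -tf /data/gitlab/data/backups/1706197160_2024_01_25_16.6.2_gitlab_backup.tar > /dev/null && echo "archive OK"</code></pre>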
API automation test project reports [Errno 99] Cannot assign requested address?
https://segmentfault.com/q/1010000044589957
2024-01-25T18:58:56+08:00
2024-01-25T18:58:56+08:00
lhl02531
https://segmentfault.com/u/lvhaoliang
0
<p><strong>Background:</strong><br>I am building an API automation test project using docker + jenkins + pytest. There are three containers in docker: 1. jenkins, 2. a mariadb container, 3. a python container created by the jenkins container after pulling the code.</p><p><strong>The intended flow is:</strong><br>code is pushed to github ==> the docker jenkins container pulls it, jenkins runs the project's build.sh script, which starts the python container that reads data from the db container, runs the tests, generates a report and uploads it.</p><p><strong>The current problem:</strong><br>it gets stuck at the step of reading the db container and fails with</p><pre><code>/usr/local/lib/python3.12/site-packages/pymysql/connections.py:352: in __init__
self.connect()
/usr/local/lib/python3.12/site-packages/pymysql/connections.py:668: in connect
raise exc
E pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'localhost' ([Errno 99] Cannot assign requested address)")</code></pre><p><a href="https://link.segmentfault.com/?enc=mRyrmNoH3%2B1SVCKUnkxVjg%3D%3D.vkPVarlsk5VWwb9OcxKMxqn5IYder99U1yOGsHMP7SoHjeOIhCaEKXhfBikvORp1" rel="nofollow">https://github.com/lhl02531/python-auto-test</a> test code repository<br><a href="https://link.segmentfault.com/?enc=Azmd89%2FZTaYv96lU5%2FRF8Q%3D%3D.9D1pNKx3LvC7q6j1Zm3f3vJFqFVhAMznCARvWA%2FtlD%2By5QKFhH30P3Al%2B0B0sgBa" rel="nofollow">http://testingpai.com/article/1644570535388</a> the docker + jenkins automation tutorial I followed<br><strong>This is my first question; if anything is missing please tell me, thank you</strong></p><p>What I have tried (a configuration sketch follows after this list):</p><ol><li>Suspected that the python container I create cannot reach the db container, so I created another python container to test my code; that container can connect to the db container and run SQL statements normally</li><li><p>Suspected the docker network mode,</p><pre><code> docker network ls
NETWORK ID NAME DRIVER SCOPE
1b86530159c4 bridge bridge local
97c9da1a063a host host local
9bda0cd1c26c none null local
docker network inspect 1b86530159c4
[
{
"Name": "bridge",
"Id": "1b86530159c426c2dcc9f0b85ea032950e50935039378a8040c6bfc474761afa",
"Created": "2024-01-24T11:36:19.039842097+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"2a653a1372fa4d2cfabde0d6f60d074d061cb5f512a90981be44c5bd3fb322e9": {
"Name": "p2",
"EndpointID": "c7733bbd68a089db6f6ed19d01255c75ab7a2374cb55a2f5dc0b52f658238239",
"MacAddress": "02:42:ac:11:00:05",
"IPv4Address": "172.17.0.5/16",
"IPv6Address": ""
},
"35c927b9ce8aa00b5f42b15c56d812bfef0608ed0f0ac339da7fc6dc499c20ef": {
"Name": "jenkins-4",
"EndpointID": "e46f91261633e52cf0c07996be2f40a1a1535a58cbebdcdc044290580392d6a0",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"97b8d1d6db32cb7beb001063e73c34b45b014c87071131e931bfcda76124259b": {
"Name": "mariadb",
"EndpointID": "9cd7e596d784b718163198ada61da12c0ff5fc1d3247860ddad3e54d724fc88d",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"d509d6c92cd9847e1c3b69a8814fcf5291b673736dc5c63ec649b6c616b84c3b": {
"Name": "p1",
"EndpointID": "d60b193c09f7b951ccd434f12ae731c4363ea1305569244b5a9268805c340916",
"MacAddress": "02:42:ac:11:00:04",
"IPv4Address": "172.17.0.4/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
</code></pre></li><li>Suspected a database configuration problem in the code; changed the database host setting many times, trying <code>host:mariadb</code>, <code>host:localhost</code> and <code>host:172.17.0.2</code>,</li><li><p>Suspected too many open connections, so I wrote a database singleton,</p><pre><code class="python">import os
import pymysql
import yaml
from utils.singleton import singleton

# db connection, db/db.py
@singleton
class DB:
    def __init__(self):
        print('初始化')
        # read the configuration file
        current_path = os.getcwd()
        with open(current_path + "/config/db.yaml", "r") as f:
            data = yaml.safe_load(f)
        # open the database connection
        self.db = pymysql.connect(**data)

    # run a SQL statement against the database
    def executeSql(self, SQL):
        # use execute() to run the SQL query
        cursor = self.db.cursor()
        cursor.execute(SQL)
        # fetch all rows
        results = cursor.fetchall()
        return results

    # close the database connection
    def close(self):
        print('关闭数据库')
        self.db.close()

db = DB()

# utils/singleton.py
def singleton(cls):
    instance = {}
    def _singleton(*args, **kwargs):
        if cls not in instance:
            instance[cls] = cls(*args, **kwargs)
        return instance[cls]
    return _singleton</code></pre></li><li>Suspected the hostname could not be resolved, so I edited <code>/etc/hosts</code> inside the container and added <code>172.17.0.2 mariadb 97b8d1d6db32</code>; the jenkins container can <code>ping 172.17.0.2</code> as well as <code>ping mariadb</code></li></ol>
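<p>A hedged sketch of the connection config (the file name config/db.yaml comes from the code above; the values are assumptions): inside a container, "localhost" refers to the container itself, which is a common cause of exactly this error, so the host must be the mariadb container's bridge IP or its name on a user-defined network.</p><pre><code class="shell">cat <<'EOF' > config/db.yaml
host: 172.17.0.2   # or "mariadb" if both containers join a user-defined network
port: 3306
user: root
password: example
database: test
EOF</code></pre>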
When deploying an LNMP environment with docker, how should the log file permission problem be handled?
https://segmentfault.com/q/1010000044586616
2024-01-24T18:11:37+08:00
2024-01-24T18:11:37+08:00
练练的da
https://segmentfault.com/u/mr_q__4_6
0
<p>I am preparing to deploy a php and redis environment with containers.<br>php image: php-7.4.3-fpm<br>redis image: 6.2.14</p><p>In the php container the php processes run as the www-data user, uid 33, gid 33.<br>In the redis container the process runs as the redis user, uid 999, gid 999.</p><p>I want to persist php's error_log and slowlog to the host directory /home/logs, i.e. php-fpm.log and slow.log,<br>and also persist redis's log to /home/logs, i.e. redis.log.</p><p>Neither the www-data nor the redis user exists on the host. How should the file read/write permission problem be handled?</p>
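<p>A sketch of the usual approach (the in-container log paths are assumptions; php-fpm and redis still need to be configured to write there): bind mounts keep the numeric uid/gid from inside the container, so it is enough to pre-create the host directories and chown them to those ids, even though no matching users exist on the host.</p><pre><code class="shell">sudo mkdir -p /home/logs/php /home/logs/redis
sudo chown -R 33:33 /home/logs/php      # www-data inside the php image
sudo chown -R 999:999 /home/logs/redis  # redis inside the redis image
docker run -d --name php -v /home/logs/php:/var/log/php php:7.4-fpm
docker run -d --name redis -v /home/logs/redis:/var/log/redis redis:6.2</code></pre>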
How to configure Jenkins running in a container to call docker commands when the host is Windows?
https://segmentfault.com/q/1010000044583320
2024-01-23T21:17:02+08:00
2024-01-23T21:17:02+08:00
陆秋之
https://segmentfault.com/u/huayue_5c1215d6e5cd7
0
<h3>How can Jenkins running inside a container be configured to call docker commands? The host is Windows.</h3><p>First, let's rule out building a Jenkins image that bundles docker; that approach feels dated.</p><p>I tried docker in docker, but that scheme seems to be Linux-based.</p><p>On Windows, docker itself already runs inside a WSL-style virtual machine, so how would the volume be mapped?</p><p>I tried docker in docker, but a -v /var/run/docker.sock:/var/run/docker.sock style command cannot be used with docker on Windows.</p>
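<p>A sketch of one alternative, assuming Docker Desktop's "Expose daemon on tcp://localhost:2375 without TLS" setting is enabled and a docker CLI is available inside the Jenkins container (installed into the image or provided by a Jenkins docker plugin): instead of mounting the Unix socket, point the CLI at the host daemon over TCP.</p><pre><code class="shell">docker run -d --name jenkins \
  -e DOCKER_HOST=tcp://host.docker.internal:2375 \
  -p 8080:8080 -p 50000:50000 \
  jenkins/jenkins:lts
# only works once a docker CLI exists inside the container
docker exec jenkins docker version</code></pre>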
Error when deploying MySQL with a docker-compose.yml file?
https://segmentfault.com/q/1010000044575387
2024-01-22T00:00:32+08:00
2024-01-22T00:00:32+08:00
MonkeyDLaDen
https://segmentfault.com/u/monkeydladen
0
<p>MySQL cannot be deployed through docker-compose on Ubuntu; the docker-compose.yml file is as follows:</p><pre><code>version: '3'
services:
mysql:
restart: always
image: mysql:8.2
network_mode: bridge
container_name: mysql
environment:
MYSQL_ROOT_PASSWORD: root123
command:
--character-set-server=utf8mb4
--collation-server=utf8mb4_general_ci
--explicit_defaults_for_timestamp=true
--lower_case_table_names=1
--max_allowed_packet=128M
volumes:
- /etc/localtime:/etc/localtime:ro
- /data/docker/mysql/mysql:/var/lib/mysql
- /data/docker/mysql/etc:/etc/mysql:ro
ports:
- 3306:3306</code></pre><p>My steps were:<br>1. Comment out the mounts and start mysql directly: success<br>2. Copy /var/lib/mysql from the container to the host location I want to mount<br>3. Copy /etc/mysql, then write a my.cnf and put it under /data/docker/mysql/etc/mysql/conf.d on the host<br>4. Delete the image<br>5. Uncomment the mounts<br>The error is:</p><pre><code>2024-01-21 12:00:35+08:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.2.0-1.el8 started.
2024-01-21 12:00:35+08:00 [ERROR] [Entrypoint]: mysqld failed while attempting to check config
command was: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_general_ci --explicit_defaults_for_timestamp=true --lower_case_table_names=1 --max_allowed_packet=128M --verbose --help --log-bin-index=/tmp/tmp.5xbHMoySK8
mysqld: Can't read dir of '/etc/mysql/conf.d/' (OS errno 2 - No such file or directory)
mysqld: [ERROR] Stopped processing the 'includedir' directive in file /etc/my.cnf at line 36.
mysqld: [ERROR] Fatal error in defaults handling. Program aborted!</code></pre>
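<p>A sketch of the likely fix (paths come from the compose file above): mounting /data/docker/mysql/etc over /etc/mysql hides the conf.d directory the image normally provides, while /etc/my.cnf still contains an includedir pointing at /etc/mysql/conf.d/, hence the "Can't read dir" error. Creating that directory inside the mounted host path should satisfy it.</p><pre><code class="shell">sudo mkdir -p /data/docker/mysql/etc/conf.d
# place the custom my.cnf where the includedir expects drop-in files
sudo cp my.cnf /data/docker/mysql/etc/conf.d/my.cnf
docker compose up -d mysql</code></pre>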
Changing Docker Desktop to a domestic registry mirror has no effect on an M1 Pro?
https://segmentfault.com/q/1010000044569831
2024-01-19T11:36:20+08:00
2024-01-19T11:36:20+08:00
momo
https://segmentfault.com/u/xiaolinjolly
0
<h2>Changing the registry mirror in Docker Desktop has no effect; how do I troubleshoot and fix it?</h2><p>Operating system: Mac Sonoma 14.2<br>Processor: Apple M1 Pro<br>Docker Desktop: 24.0.7<br>Architecture: Arm</p><h3>Problem summary</h3><p><strong>On the Mac version the domestic mirror change does not take effect and pulling images fails.</strong></p><h4>Environment details</h4><p>The registry mirror configured in the Docker Engine settings:<br><img width="723" height="414" src="/img/bVdbaLB" alt="image.png" title="image.png"></p><p>Basic docker information</p><pre><code>➜ docker info
Client:
Version: 24.0.7
Context: desktop-linux
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.12.0-desktop.2
Path: /Users/xlxing/.docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.23.3-desktop.2
Path: /Users/xlxing/.docker/cli-plugins/docker-compose
dev: Docker Dev Environments (Docker Inc.)
Version: v0.1.0
Path: /Users/xlxing/.docker/cli-plugins/docker-dev
extension: Manages Docker extensions (Docker Inc.)
Version: v0.2.21
Path: /Users/xlxing/.docker/cli-plugins/docker-extension
feedback: Provide feedback, right in your terminal! (Docker Inc.)
Version: 0.1
Path: /Users/xlxing/.docker/cli-plugins/docker-feedback
init: Creates Docker-related starter files for your project (Docker Inc.)
Version: v0.1.0-beta.10
Path: /Users/xlxing/.docker/cli-plugins/docker-init
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
Version: 0.6.0
Path: /Users/xlxing/.docker/cli-plugins/docker-sbom
scan: Docker Scan (Docker Inc.)
Version: v0.26.0
Path: /Users/xlxing/.docker/cli-plugins/docker-scan
scout: Docker Scout (Docker Inc.)
Version: v1.2.0
Path: /Users/xlxing/.docker/cli-plugins/docker-scout
Server:
Containers: 5
Running: 0
Paused: 0
Stopped: 5
Images: 20
Server Version: 24.0.7
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc io.containerd.runc.v2
Default Runtime: runc
Init Binary: docker-init
containerd version: d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f
runc version: v1.1.10-0-g18a0cb0
init version: de40ad0
Security Options:
seccomp
Profile: unconfined
cgroupns
Kernel Version: 6.5.11-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: aarch64
CPUs: 10
Total Memory: 7.76GiB
Name: docker-desktop
ID: cc090978-516d-411f-9df5-6b61b0ded897
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 40
Goroutines: 60
System Time: 2024-01-19T03:13:43.620773759Z
EventsListeners: 9
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
No Proxy: hubproxy.docker.internal
Experimental: false
Insecure Registries:
hubproxy.docker.internal:5555
127.0.0.0/8
Registry Mirrors:
https://docker.mirrors.ustc.edu.cn/
Live Restore Enabled: false
WARNING: daemon is not using the default seccomp profile</code></pre><p>Example of a failed image pull:<br><img width="723" height="72" src="/img/bVdbaLs" alt="image.png" title="image.png"></p><h4>Some guesses</h4><ol><li>Docker Desktop's configuration does not live in daemon.json; that file does not exist.<br>I noticed that when checking the environment with <code>docker info</code>, the last line shows:<br><strong>WARNING: daemon is not using the default seccomp profile</strong><br>Docker installed on my Tencent Cloud server works fine, with this configuration file:<br><img width="723" height="197" src="/img/bVdbaMp" alt="image.png" title="image.png"></li><li>The M1 Pro is Arm architecture and the domestic mirrors may not support it.</li></ol><p><a href="https://link.segmentfault.com/?enc=%2F8vgCA2qPOX53kLdS%2Bpdng%3D%3D.0RTAlKlSzfV5HoHgnfaolaWOnyMfHoAfr05UZbPuWe2d%2F9%2FCBoSqlXviglJdGz8QH6vbO6i7ndk4eVJp6P3DMQ%3D%3D" rel="nofollow">Fixing the problem of docker mirror changes having no effect</a><br>The basic steps for switching the docker registry mirror:</p><ol><li>Switch to the USTC or Aliyun mirror</li><li>Restart docker</li><li>Check whether the updated configuration has taken effect.</li></ol>
How to restrict a website deployed on a cloud server so that only the company can access it?
https://segmentfault.com/q/1010000044555649
2024-01-15T16:47:43+08:00
2024-01-15T16:47:43+08:00
一个凹
https://segmentfault.com/u/yigeao
0
<p>I deployed a back-office management system on a cloud server and now want it to be accessible only from inside the company; to anyone outside, the site should behave as if it did not exist. How do I set this up? If the process is long, a few keywords for me to search would already help. Searching Baidu for things like "how to make a website deployed on a cloud server accessible only to the company" only brings up articles about how to deploy websites on cloud servers, which is quite frustrating.</p>
A GitHub Action gets stuck in the terminal and fails after the timeout; how do I exit?
https://segmentfault.com/q/1010000044553401
2024-01-14T23:04:52+08:00
2024-01-14T23:04:52+08:00
清风伴酒
https://segmentfault.com/u/aipaobudeyaling
0
<p>The github action fails</p><pre><code class="yml"> name: Load and run Docker image on the server
uses: appleboy/ssh-action@master # community action that connects to the server over SSH and runs commands
with:
host: ${{ secrets.DEPLOY_HOST }}
username: ${{ secrets.DEPLOY_USER }}
key: ${{ secrets.DEPLOY_KEY }}
script: |
docker stop blog-server #停止老镜像
docker rm blog-server
docker load < /tmp/lxy-blog-server.tar.gz # 加载 Docker 镜像
docker run --name blog-server --net=host server-prod # 使用 Docker Compose 启动服务
exit</code></pre><p>就是我通过github action把最新的代码生成的docker镜像推送到服务器上<br>再通过script部分的命令让新镜像运行起来<br>这里会出现github action 无法结束的问题<br>一直到超时直接失败<br>虽然失败并没有影响我的功能,但是每次action失败也不好看</p><blockquote>每次会得到这样一个结果<br>2024/01/07 10:58:57 Run Command Timeout</blockquote><p>应该是我的退出方式不对,但不知道怎么修改</p><p>我尝试过的方式就是最后一行代码加上了exit<br>这个并没有效果</p>
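<p>一个可能的原因(推测):script 里的 docker run 没有加 -d,容器在前台一直运行,SSH 会话因此无法结束,最后一行的 exit 根本执行不到;把容器放到后台运行后命令就会立刻返回。下面是按这个思路改写的 script 片段(镜像名、路径沿用题目):</p><pre><code class="shell">docker stop blog-server || true    # 容器不存在时也不让步骤失败
docker rm blog-server || true
docker load < /tmp/lxy-blog-server.tar.gz
docker run -d --name blog-server --net=host server-prod   # -d 后台运行,命令立即返回,SSH 会话可以正常退出</code></pre><p>另外 appleboy/ssh-action 一般还提供 command_timeout 之类的超时参数可以作为兜底,具体以其文档为准。</p>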
docker 构建镜像出现 INTERNAL_ERROR 失败?
https://segmentfault.com/q/1010000044551449
2024-01-13T18:08:49+08:00
2024-01-13T18:08:49+08:00
changli
https://segmentfault.com/u/changli
0
<p><img width="723" height="477" src="/img/bVda51L" alt="" title=""></p><p>功能是使用一个基础的操作系统,然后执行一个 shell 脚本。.dockerignore 是空的,删除了或者加内容依然会报错,应该对构建是否成功没有影响。</p><pre><code>FROM ubuntu
WORKDIR /app
COPY . .
CMD ["/app/helloworld.sh"]
</code></pre><p>shell 脚本</p><pre><code class="shell">#!/bin/bash
echo 'hello world'</code></pre><p>报了以下的错误</p><pre><code>ERROR: failed to solve: Internal: Internal: Internal: stream terminated by RST_STREAM with error code: INTERNAL_ERROR</code></pre><p>如果是使用官方文档提供的应用例子,按流程执行是没有问题的。例如</p><pre><code>FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000</code></pre><p>这是没有问题的</p><p>这个简单例子为什么会报错?</p><p>好像解决了问题,但是问题的原因更加摸不着头脑,就是换一个目录就没有问题了<br><img width="723" height="392" src="/img/bVda52I" alt="" title=""></p>
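<p>补充一个排查思路(仅作示意,不代表根因):这类 RST_STREAM / INTERNAL_ERROR 多与 BuildKit 传输构建上下文有关,可以先打开详细日志、再临时切回旧构建器对比,并检查目录里有没有异常文件:</p><pre><code class="shell">docker build --progress=plain --no-cache -t hello-sh .   # 输出每一步的详细日志,定位失败环节
DOCKER_BUILDKIT=0 docker build -t hello-sh .              # 临时用旧版构建器对比,能通过则基本指向 BuildKit/上下文问题
du -ah . | sort -rh | head -n 20                          # 看看构建上下文里有没有异常的大文件或特殊路径</code></pre>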
docker,pm2部署nuxt项目启动报错?
https://segmentfault.com/q/1010000044541478
2024-01-10T11:01:55+08:00
2024-01-10T11:01:55+08:00
练练的da
https://segmentfault.com/u/mr_q__4_6
0
<p>项目使用 <strong>nuxt2</strong> 写的, 现在进行了 npm run build 打包, 想在生产环境中使用docker部署,部署方式如下</p><h3>上传项目</h3><p>将打包后的 <strong>.nuxt</strong>, <strong>static</strong>, <strong>nuxt.config.js</strong>, <strong>package.json</strong>, <strong>node_modules</strong>上传至服务器 /home/wwwroot/default 下</p><p>(nginx等的配置略)</p><h3>在项目目录创建启动文件</h3><p>ecosystem.config.json</p><pre><code class="json">{
"apps": {
"name": "web",
"cwd": "/home/node",
"script": "./node_modules/nuxt/bin/nuxt.js",
"args": "-c /home/node/nuxt.config.js",
"exec_mode": "cluster",
"instances": 4,
"watch":false,
"error_file": "/home/logs/web-err.log",
"out_file": "/home/logs/web-out.log",
"log_date_format": "YYYY-MM-DD HH:mm:ss",
"autorestart": true,
"max_memory_restart": "500M"
}
}
</code></pre><h3>一. 使用node镜像构建</h3><p><strong>直接基于拉取的node:16.14.0镜像运行一个容器</strong></p><pre><code>$ docker pull node:16.14.0
$ docker run -itd --name nuxtProject01 \
-p 3000:3000 \
-v /home/wwwroot/default:/home/node \
-v /home/logs:/home/logs node:16.14.0</code></pre><p>然后进入容器内</p><pre><code>$ docker exec -it nuxtProject01 bash</code></pre><p>安装pm2</p><pre><code>$ npm install pm2 -g</code></pre><p>进入容器项目目录并执行ecosystem.config.json</p><pre><code>$ cd /home/node && pm2 start ecosystem.config.json</code></pre><p><strong>项目可以正常运行</strong></p><h3>二. 现在使用Dockerfile的方式</h3><p><strong>Dockerfile文件:</strong></p><pre><code class="bash">FROM node:16.14.0
RUN npm config set registry https://registry.npm.taobao.org
RUN npm install pm2@5.2.0 -g
WORKDIR /home/node
ENV NODE_ENV=production
ENV HOST 0.0.0.0
EXPOSE 3000
CMD ["pm2-runtime", "start", "ecosystem.config.json"]</code></pre><p><strong>使用该 Dockerfile 构建自定义镜像:</strong></p><pre><code class="bash">$ docker build --no-cache -t nuxtImage:v1.0 .</code></pre><p><strong>基于镜像运行一个容器:</strong></p><pre><code class="bash">$ docker run -itd --name nuxtProject01 \
-p 3000:3000 \
-v /home/wwwroot/default:/home/node \
-v /home/logs:/home/logs nuxtImage:v1.0</code></pre><p>现在访问服务,有以下报错:</p><pre><code>No build files found in /home/node/ecosystem.config.json/.nuxt/dist/server. 02:09:50
Use either `nuxt build` or `builder.build()` or start nuxt in development mode.
Use either `nuxt build` or `builder.build()` or start nuxt in development mode.
at VueRenderer._ready (node_modules/@nuxt/vue-renderer/dist/vue-renderer.js:4219:13)
at async Server.ready (node_modules/@nuxt/server/dist/server.js:637:5)
at async Nuxt._init (node_modules/@nuxt/core/dist/core.js:720:7)</code></pre><pre><code>✖ Nuxt Fatal Error │
│ │
│ Error: No build files found in /home/node/ecosystem.config.json/.nuxt/dist/server. │
│ Use either `nuxt build` or `builder.build()` or start nuxt in development mode.</code></pre><p><strong>找不到原因头疼</strong></p>
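<p>从错误里的路径 /home/node/ecosystem.config.json/.nuxt/dist/server 看,Nuxt 把配置文件路径当成了项目根目录(rootDir),问题更可能出在 pm2-runtime 解析启动参数这一步,而不是构建产物本身。可以先在容器里绕开 pm2 直接用 nuxt 启动确认产物没问题,再单独排查 pm2-runtime(以下命令沿用题目中的挂载,仅作排查示意;注意 docker 镜像名要求全小写):</p><pre><code class="shell">docker run -it --rm \
  -p 3000:3000 \
  -v /home/wwwroot/default:/home/node \
  -v /home/logs:/home/logs \
  nuxtimage:v1.0 bash            # 进入容器手工排查(镜像名需小写)

# 容器内执行:
cd /home/node
./node_modules/.bin/nuxt start -c /home/node/nuxt.config.js   # 绕开 pm2,确认 .nuxt 产物能被正常找到
pm2-runtime start /home/node/ecosystem.config.json            # 再用绝对路径交给 pm2-runtime,对比 rootDir 是否仍被解析错</code></pre>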
k8s集群内的Pod一般是只有一个IP地址端口是吗?
https://segmentfault.com/q/1010000044515863
2023-12-29T16:41:18+08:00
2023-12-29T16:41:18+08:00
letier
https://segmentfault.com/u/mark04
0
<p>k8s集群内的Pod一般是只有一个IP地址端口是吗?</p>
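<p>通常是的:一个 Pod 只有一个 IP(双栈集群会有 IPv4/IPv6 各一个),Pod 内所有容器共享同一个网络命名空间,彼此只能靠端口区分。可以这样确认(示意,pod 名为占位符):</p><pre><code class="shell">kubectl get pods -o wide                                   # IP 列即 Pod IP,每个 Pod 一个
kubectl get pod <pod-name> -o jsonpath='{.status.podIP}'   # 单独取某个 Pod 的 IP,<pod-name> 请替换为实际名称</code></pre>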
windows ubuntu 20.04安装docker后要如何启动?
https://segmentfault.com/q/1010000044525936
2024-01-04T10:43:55+08:00
2024-01-04T10:43:55+08:00
rain
https://segmentfault.com/u/rainlucky
0
<p>如题,百度到的方法用的都是 systemctl、service,但都不行,到底该用哪个命令启动?<br><img width="596" height="157" src="/img/bVdaZof" alt="image.png" title="image.png"><br><img width="723" height="81" src="/img/bVdaZpX" alt="image.png" title="image.png"><br><img width="432" height="124" src="/img/bVdaZrb" alt="image.png" title="image.png"></p>
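<p>一个常见的情况(示意):WSL 发行版默认没有启用 systemd,所以 systemctl 会报错;可以改用 service 启动,或者在较新版本的 WSL 上启用 systemd 之后再用 systemctl:</p><pre><code class="shell">sudo service docker start      # 不依赖 systemd 的启动方式
sudo service docker status

# 较新版本的 WSL 支持 systemd:写入 /etc/wsl.conf(若已有该文件请手动合并),
# 然后在 Windows 侧执行 wsl --shutdown 重启发行版
printf '[boot]\nsystemd=true\n' | sudo tee /etc/wsl.conf</code></pre>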
Windows 系统下 docker volume 的实际存放位置在哪里?
https://segmentfault.com/q/1010000044525087
2024-01-04T01:33:30+08:00
2024-01-04T01:33:30+08:00
changli
https://segmentfault.com/u/changli
0
<p>通过 docker 命令创建了一个 volume 卷,然后执行</p><p><code>docker volume inspect todo-db</code></p><pre><code>[
{
"CreatedAt": "2024-01-03T17:09:01Z",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/todo-db/_data",
"Name": "todo-db",
"Options": null,
"Scope": "local"
}
]</code></pre><p>文档说 Mountpoint 是 volume 在磁盘上的实际位置。</p><p>上面的输出是在 Windows 系统下得到的,那么这个实际位置究竟在哪里?</p>
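<p>在 Windows + WSL2 后端下,/var/lib/docker 位于 Docker Desktop 自带的 WSL 发行版的虚拟磁盘(ext4.vhdx)里,一般在 %LOCALAPPDATA%\Docker\wsl 目录下(发行版名称和具体路径随版本有差异,属经验推测),并不是宿主机上能直接打开的普通目录。想查看卷里的内容,更稳妥的办法是挂一个临时容器进去看(示意):</p><pre><code class="shell">docker run --rm -v todo-db:/data alpine ls -la /data   # 用一次性容器列出 todo-db 卷中的内容</code></pre>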
如何在php容器中编译mysqli扩展?
https://segmentfault.com/q/1010000044521574
2024-01-02T21:40:20+08:00
2024-01-02T21:40:20+08:00
助人等于助己
https://segmentfault.com/u/dagang007
0
<p>进入php源码目录中,配置需要提供mysql_config文件的位置,而且mysql_config文件又要去找mysql其他文件。</p><pre><code>./configure --with-php-config=/usr/local/bin/php-config --with-mysqli=/usr/bin/mysql_config
</code></pre><p>由于我的 php 和 mysql 是 2 个容器,不在同一个文件系统里,这可怎么办呀?</p>
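<p>官方 php 镜像自带 docker-php-ext-install 脚本,mysqli 扩展基于 mysqlnd 编译,并不需要 mysql 容器里的 mysql_config,所以两个容器不在同一个文件系统里也没有关系(示意,容器名为示例):</p><pre><code class="shell">docker exec -it my-php docker-php-ext-install mysqli   # my-php 为示例容器名,按实际替换;安装完成后扩展会自动启用
docker restart my-php                                   # 重启容器让 php-fpm 加载新扩展
# 注意:容器被删除重建后扩展会丢失,长期使用建议写进 Dockerfile:
#   FROM php:7.4-fpm
#   RUN docker-php-ext-install mysqli pdo_mysql</code></pre>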
docker+nginx部署的前端问题?
https://segmentfault.com/q/1010000044494440
2023-12-22T14:03:53+08:00
2023-12-22T14:03:53+08:00
庭
https://segmentfault.com/u/ting_6583ed85a018b
0
<p>docker+nginx 部署的前后端分离项目:如果 nginx 没有配置代理,就能返回 index.html 的内容,但页面是空白的;一旦配置了 nginx 代理,就报系统异常,是怎么回事呀?<br>没有启用nginx代理的情况<br><img width="723" height="403" src="/img/bVdaRcc" alt="image.png" title="image.png"><br><img width="723" height="269" src="/img/bVdaRb5" alt="image.png" title="image.png"><br>启用nginx代理的情况<br><img width="606" height="749" src="/img/bVdaRcf" alt="image.png" title="image.png"><br><img width="723" height="294" src="/img/bVdaRci" alt="image.png" title="image.png"></p><p>既然能部署到 docker 上,前端页面也不至于是空白页面呀;后端已经部署到 docker 上,并用 postman 测试过能调到数据。</p>
neo4j调试日志中大量Inbound message queue has exceeded high watermark是什么原因?有哪些影响?如何解决?
https://segmentfault.com/q/1010000044503307
2023-12-26T10:36:03+08:00
2023-12-26T10:36:03+08:00
sswhsz
https://segmentfault.com/u/sswhsz
0
<p>项目中使用了neo4j,最近迁移了一个新环境,结果发现服务运行不稳定,有时候报错:在neo4j事务提交时,等待某个节点的锁超时(30秒),有时候驱动程序报错:客户端连接被终止。</p><p>因为之前对neo4j仅仅限于简单应用。所以对neo4j配置、日志等并不了解。<br>重新花时间了解了下neo4j的基本配置(主要是内存配置),尝试查看下neo4j的调试日志。</p><p>目前neo4j是放在docker容器运行的。配置方法如下:</p><pre><code class="yml">services:
neo4j:
image: bitnami/neo4j:5.3.0
environment:
NEO4J_PASSWORD: 12345678
ports:
# 客户端页面访问端口
- "7474:7474"
# bolt协议访问端口
- "7687:7687"
volumes:
- "./neo4j_data:/bitnami/neo4j/data"
# 内存及其它配置(详见neo4j.conf文件内中文说明)
- "./conf:/bitnami/neo4j/conf"
# 日志
- "./logs:/opt/bitnami/neo4j/logs"
restart: always</code></pre><p>在neo4j.conf配置中,主要设置了最大内存4G(因为neo4j中节点的数量为 61万个,感觉4G应该足够用了):</p><pre><code class="conf"># 修改内存设置
server.memory.heap.initial_size=256m
server.memory.heap.max_size=4096m</code></pre><p>目前查看neo4j的调试日志,发现有大量的 <code>Inbound message queue has exceeded high watermark - Disabling message processing</code>,那么这些错误会带来什么影响呢?会导致事务执行失败?还是仅仅是响应速度慢?应该如何解决呢?</p><hr><p>补充:2023-12-27<br>今天取了neo4j源码,查看了下出问题的代码;另外,启用了neo4j服务端调试功能,挂上断点初步查看了下爆出警告日志的地方,对bolt协议的处理入口有了简单的第一印象,网上初步搜了下neo4j源码赏析、bolt协议解析等内容,可惜没有多少有价值的东西。neo4j的服务端代码还是挺优雅的,后面有时间做一下跟踪分析。</p><pre><code class="xml"><!--neo4j服务端-->
<dependency>
<groupId>org.neo4j</groupId>
<artifactId>neo4j-community</artifactId>
<version>5.3.0</version>
</dependency> </code></pre><p>在neo4j.conf文件中开放调试端口</p><pre><code class="conf"># 开放调试端口
server.jvm.additional=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005</code></pre>
Docker中容器网络不通,宿主机跟容器,容器跟docker0,容器跟容器网络都不通?
https://segmentfault.com/q/1010000044492989
2023-12-22T09:27:56+08:00
2023-12-22T09:27:56+08:00
mahy
https://segmentfault.com/u/itmahy
0
<p>Docker中容器网络不通,宿主机跟容器,容器跟docker0,容器跟容器网络都不通?</p><p>使用 <br><code> tcpdump -i docker0</code> 抓包时容器的网络就通了,但是关闭<code> tcpdump -i docker0</code>容器的网络又不通了。</p>
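<p>一个排查方向(推测,非定论):tcpdump 抓包时会把 docker0 置为混杂模式,"一抓包就通"往往说明网桥转发或 iptables/内核参数有问题,可以按下面的思路验证:</p><pre><code class="shell">ip link set docker0 promisc on                 # 手动开启混杂模式,若网络随之恢复,与 tcpdump 的现象一致,可继续往网桥/内核参数排查
cat /proc/sys/net/ipv4/ip_forward              # 应为 1
sysctl net.bridge.bridge-nf-call-iptables      # 查看桥接流量是否交给 iptables 处理(报错则说明 br_netfilter 未加载)
iptables -L FORWARD -n --line-numbers | head   # 检查 FORWARD 链是否默认 DROP 或缺少 docker 相关规则</code></pre>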
neo4j docker镜像 bitnami/neo4j 怎样设置内存大小呢?
https://segmentfault.com/q/1010000044499596
2023-12-25T10:24:04+08:00
2023-12-25T10:24:04+08:00
sswhsz
https://segmentfault.com/u/sswhsz
0
<p>neo4j镜像使用的是 bitnami/neo4j:5.3.0 ,在环境变量中并没有提供设置内存的选项,所以我将 镜像 /opt/bitnami/neo4j/conf/neo4j.conf 拷贝出来,指定内存后,按镜像说明,将配置映射到/bitnami/neo4j/conf目录, 启动没有报错,但是7474端口无法访问了。</p><p>docker-compose文件配置如下:</p><pre><code class="yml">version: '3.3'
services:
neo4j:
image: bitnami/neo4j:5.3.0
environment:
NEO4J_PASSWORD: 12345678
ports:
# 客户端页面访问端口
- "7474:7474"
# bolt协议访问端口
- "7687:7687"
volumes:
- "neo4j_data:/bitnami/neo4j/data"
- "./conf:/bitnami/neo4j/conf"
restart: always
volumes:
neo4j_data:</code></pre><p>neo4j.conf文件是从 镜像中拷贝出来的,只是修改了:server.memory.heap.max_size=2048m</p><p>内存大小要怎样设置呢?</p><hr><p>2023-12-25 补充:<br>实际测试、对比了不同修改方式,最终生成的 neo4j启动java命令、以及neo4j /opt/bitnami/neo4j/conf/neo4j.conf 的内容,配置原则应该是:</p><ol><li>bitnami提供了一些环境变量,如果仅仅是更改端口,使用环境变量就可以,比如:NEO4J_BIND_ADDRESS (缺省为 0.0.0.0, 这指示在所有网络接口上监听相应端口)、NEO4J_BOLT_PORT_NUMBER(缺省为 7687,声明bolt协议端口)</li><li>此外,bitnami也支持用户提供原始的 neo4j.conf文件,但如果提供了这个文件,就会忽略和neo4j.conf文件相关的环境变量的设置。</li><li><p>内存设置,目前没有对应的环境变量支持,只能修改 neo4j.conf文件,而缺省的 neo4j.conf文件中,监听地址缺省为 localhost,也就是说,此时只能在本机访问7474端口。在neo4j的启动日志中也能看到:访问地址为 <a href="https://link.segmentfault.com/?enc=D2CRaajcb6ijNYkfFXF6Zg%3D%3D.wPl2WxpX86BBM4wMvNT93uI1alrQMG0BK%2B%2BLotJHYWo%3D" rel="nofollow">http://localhost:7474</a> ,正常情况为:<a href="https://link.segmentfault.com/?enc=Pg5E4ugJNSIlw1mqhxfXpQ%3D%3D.OdAUNhPemo7a0Tc6ucWZz0tpnNXNCDC%2F%2FIe1Dy38f4E%3D" rel="nofollow">http://0.0.0.0:7474</a> ,所以修改内存配置,需要同时放开 监听地址配置,在neo4j.conf文件中有如下说明:</p><pre><code class="conf"># With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
#server.default_listen_address=0.0.0.0</code></pre></li><li>设置内存后,neo4j的java启动命令后将自动附加内存配置:类似<code>-Xms2097152k -Xmx4194304k </code></li></ol>
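<p>把上面的结论合并成一个最小的检查清单(示意):自定义 neo4j.conf 时,除了堆内存参数,还要把监听地址放开,否则容器外访问不到 7474。</p><pre><code class="shell"># 在拷贝出来的 ./conf/neo4j.conf 中确认以下三行处于未注释状态(同一配置重复声明可能报错,建议修改而不是追加):
#   server.default_listen_address=0.0.0.0
#   server.memory.heap.initial_size=512m
#   server.memory.heap.max_size=2048m
grep -E 'default_listen_address|memory.heap' ./conf/neo4j.conf   # 快速检查当前写法
docker compose up -d --force-recreate neo4j                       # 重建容器使配置生效</code></pre>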
解决Nginx + Docker 部署前后端分离项目访问空白问题?
https://segmentfault.com/q/1010000044491074
2023-12-21T15:58:04+08:00
2023-12-21T15:58:04+08:00
庭
https://segmentfault.com/u/ting_6583ed85a018b
0
<p>很奇怪:使用 nginx+docker 部署的前后端分离项目,部署上去之后访问时页面空白,也没有报错。然后按照网上的教程改了 vue 的配置文件,也没有效果,路径前面不管加不加点都出不来页面,全是空白,不知道是什么导致的,很迷茫,有懂的吗?帮忙解决一下呗!</p><p>改了 vue 的打包配置,不管加不加点都显示不出页面。<br><img width="540" height="422" src="/img/bVdaQj7" alt="image.png" title="image.png"><br><img width="726" height="331" src="/img/bVdaQj3" alt="1703145316544.jpg" title="1703145316544.jpg"> <br><img width="527" height="170" src="/img/bVdaQj5" alt="image.png" title="image.png"></p>
docker部署问题,COPY 命令没有成功?
https://segmentfault.com/q/1010000044490734
2023-12-21T15:06:42+08:00
2023-12-21T15:06:42+08:00
leyioliu
https://segmentfault.com/u/leyioliu
0
<p>docker部署项目,目录结构如下<br><img width="534" height="654" src="/img/bVdaQd0" alt="image.png" title="image.png"></p><p>web下是三个vue打包后的文件,分别是admin,blog,datascreen<br>web下的Dockerfile配置如下</p><pre><code># 使用 Nginx 作为基础镜像
FROM nginx:latest
RUN mkdir -p /usr/share/nginx/html/admin
RUN mkdir -p /usr/share/nginx/html/blog
RUN mkdir -p /usr/share/nginx/html/datascreen
# 将文件复制到 Nginx 默认的静态文件目录中
COPY admin /usr/share/nginx/html/admin/
COPY blog /usr/share/nginx/html/blog/
COPY datascreen /usr/share/nginx/html/datascreen/</code></pre><p>docker-compose配置如下</p><pre><code>version: '3'
services:
web:
build: ./web
container_name: web-blog-container
todo-nodejs-api:
build: ./todo-nodejs-api
container_name: todo-nodejs-api-container
restart: unless-stopped
ports:
- '8888:8888'
depends_on:
- mysql
mysql:
image: mysql:8.0.35
container_name: mysql-container
restart: unless-stopped
environment:
MYSQL_ROOT_PASSWORD: xxxxxx
MYSQL_DATABASE: xxxxxx
ports:
- '3307:3306'
volumes:
- /var/lib/mysql:/var/lib/mysql
nginx:
image: nginx:latest
container_name: nginx-container
ports:
- '80:80'
- '443:443'
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
- /etc/nginx/cert:/etc/nginx/cert:ro
depends_on:
- web
- todo-nodejs-api
</code></pre><p>docker compose up 运行后,在nginx服务容器中(nginx-container),<br>docker exec -it nginx-container ls /usr/share/nginx/html</p><h2>发现并没有copy过来的 admin,blog,datascreen 目录,问题出在哪?该怎么办,这样配置合理吗?</h2><p>我现在直接在nginx容器上挂了一下 volumes,把这三个目录映射过来,这样做合适吗?想知道上面为什么copy不过去呢????</p><pre><code> nginx:
image: nginx:latest
container_name: nginx-container
ports:
- '80:80'
- '443:443'
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
- /etc/nginx/cert:/etc/nginx/cert:ro
- /home/ftpuser/my-blog/web/admin:/usr/share/nginx/html/admin:ro
- /home/ftpuser/my-blog/web/blog:/usr/share/nginx/html/blog:ro
- /home/ftpuser/my-blog/web/datascreen:/usr/share/nginx/html/datascreen:ro
depends_on:
- web
- todo-nodejs-api</code></pre>
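<p>一个比较可能的解释:COPY 的产物在 web 服务用 ./web/Dockerfile 构建出来的镜像里,也就是 web-blog-container 的文件系统,而 nginx-container 用的是官方 nginx:latest 镜像,两个容器的文件系统互不相通,所以在 nginx-container 里自然看不到那三个目录。可以这样验证(示意):</p><pre><code class="shell">docker exec -it web-blog-container ls /usr/share/nginx/html   # COPY 进去的 admin/blog/datascreen 应该在这里
docker exec -it nginx-container ls /usr/share/nginx/html      # 这是官方镜像的默认内容,没有 COPY 的文件</code></pre><p>要让 nginx-container 提供这些静态文件,要么像现在这样用 volumes 把目录挂进去,要么把对外的 nginx 服务直接改成 build: ./web,用自己构建的镜像替换 image: nginx:latest,两种都是常见做法。</p>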
如何在无网络情况下离线安装Docker和Node.js?
https://segmentfault.com/q/1010000044486387
2023-12-20T11:03:28+08:00
2023-12-20T11:03:28+08:00
rain
https://segmentfault.com/u/rainlucky
0
<p>docker 在没有网络的情况下要怎么安装 nodejs、mysql 等?</p><p>期望的结果:离线使用 docker 安装各个需要的软件。</p>
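<p>常见做法(示意):在一台能联网的机器上把需要的镜像拉下来并导出成 tar,再拷到离线机器上导入;Docker 引擎本身也可以用官方提供的静态二进制包离线安装。</p><pre><code class="shell"># 联网机器上:
docker pull node:18
docker pull mysql:8.0
docker save -o images.tar node:18 mysql:8.0   # 多个镜像可以打进同一个 tar

# 拷贝 images.tar 到离线机器后:
docker load -i images.tar
docker images                                  # 确认镜像已导入</code></pre>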
拆分成微服务疑问,按 controller 还是按照 project 拆?
https://segmentfault.com/q/1010000044470799
2023-12-14T11:22:24+08:00
2023-12-14T11:22:24+08:00
小MIS
https://segmentfault.com/u/ithinkilive
0
<p>拆分成微服务疑问,按 controller 还是按照 project 拆?<br>先说个人不懂微服务,也没搞懂过<br>按照我的经验,通常就是拆分 controller 跟 service 由不同同事负责<br>不会刻意拆分不同 project,除非像是统一账号验证才会额外拆</p><p>但现在遇到一个顾问说,微服务要尽量拆分到不同 project 维护,各自有自己的 docker<br>这样才不会有严重依赖耦合<br>我不太能理解这样概念,工程复杂度直线上升</p>
在 wsl 中启动容器实例,相关资源默认存储在什么地方?
https://segmentfault.com/q/1010000044455468
2023-12-08T14:08:42+08:00
2023-12-08T14:08:42+08:00
changli
https://segmentfault.com/u/changli
0
<p>目前在 Windows 系统下使用了 wsl2,用的是默认的发行版 Ubuntu。然后安装了 docker desktop,并集成了 Ubuntu。之后在 Ubuntu 中启动了多个容器实例,这个时候发现 C 盘的磁盘容量几乎没有了。</p><p>请问发行版 Ubuntu 系统中的内容保存在什么地方?容器使用的镜像和容器实例内使用的资源又保存在什么地方?</p>
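<p>WSL2 的发行版和 Docker Desktop 的数据都存放在各自的虚拟磁盘(ext4.vhdx)里,通常位于 %LOCALAPPDATA% 下(具体路径随版本不同,属经验推测);镜像、容器数据在 docker-desktop 相关的发行版里,所以都会占用 C 盘。可以先确认发行版和空间占用,再做清理或迁移(示意):</p><pre><code class="shell">wsl --list --verbose        # 在 Windows 终端查看各发行版(包括 docker-desktop 相关发行版)
docker system df            # 镜像/容器/卷各占多少空间
docker system prune         # 清理无用的镜像、容器和网络
# 空间仍然紧张时,可在 Docker Desktop 设置里把 Disk image location 迁到非 C 盘,或用 wsl --export / --import 迁移发行版</code></pre>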
Jenkins容器agent是如何在没有docker的情况下依然可以运行docker命令的?
https://segmentfault.com/q/1010000044430159
2023-11-29T18:08:32+08:00
2023-11-29T18:08:32+08:00
Pkiuper
https://segmentfault.com/u/user_kx6h3scu
0
<p>为什么我的Jenkins容器agent可以在容器内并没有docker的情况下,在流水线内依然可以运行docker? jenkins是如何做到的?</p><p>本来是我一直把docker以及docker.sock挂载到容器中去的,今天突发奇想去掉了挂载,发现依然可以执行,真的很神奇。</p><p>这AI的回答没太看懂,什么叫容器里即便没有docker,但jenkins代理安装了docker, jenkins agent不是运行在这个临时run起来的容器中的么。</p><p>我使用docker agent templet来执行job。<br><img width="723" height="360" src="/img/bVdaAtt" alt="image.png" title="image.png"></p><p>这里是我在一个流水线的执行过程中,进入到容器里,可以看到容器里是没有docker的。<br><img width="723" height="165" src="/img/bVdaAtn" alt="image.png" title="image.png"><br>但神奇的是,流水线中代码的 sh'docker xxxx' ,是可以执行的。<br><img width="723" height="378" src="/img/bVdaAts" alt="image.png" title="image.png"></p><p>同时还有一点神奇的,在同一个jenkins集群内,我为多个docker host都配置了同样的agent templet,区别是lable不同,当我调度docker host为其他主机的label, 流水线里执行sh'docker info' ,可以观察到,无论容器最终在哪个docker host上执行,流水线内通过docker info可以看到,docker.sock都是指向同一台主机的。</p>
linux宿主机只有一块物理网卡,如何使用`macvlan`的方式,让docker容器和宿主机在同一网段下?
https://segmentfault.com/q/1010000044428117
2023-11-29T09:50:38+08:00
2023-11-29T09:50:38+08:00
643104191
https://segmentfault.com/u/643104191
0
<p>看了很多docker使用<code>macvlan</code>创建网络的教程,<br>都是指定另一个子网和网关,<br>似乎没办法访问宿主机和宿主机所在网段下的其他设备.</p><p>不知道怎么让docker容器和宿主机在同一网段下</p>
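<p>macvlan 是可以直接复用宿主机所在网段的,创建网络时把 --subnet/--gateway 写成宿主机的网段即可;真正的坑是 macvlan 默认宿主机和容器互相不通,需要在宿主机上再建一个 macvlan 子接口来中转。下面是一个示意(网卡名 eth0、网段 192.168.1.0/24 及各 IP 均为假设,按实际环境替换):</p><pre><code class="shell"># 1. 用宿主机所在网段创建 macvlan 网络
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 macnet

# 2. 给容器指定同网段的空闲 IP,局域网内其他设备即可直接访问容器
docker run -d --name web --network macnet --ip 192.168.1.200 nginx

# 3. 宿主机访问容器:再建一个 macvlan 子接口当"桥"
ip link add macvlan-host link eth0 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev macvlan-host
ip link set macvlan-host up
ip route add 192.168.1.200/32 dev macvlan-host</code></pre>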
如何在Windows 10下安装WSL?
https://segmentfault.com/q/1010000044425603
2023-11-28T12:59:49+08:00
2023-11-28T12:59:49+08:00
乔治的春天
https://segmentfault.com/u/qiaozhidechuntian
0
<p>windows10 下面用 wsl --install 安装发行版,我平常用惯了 centos,但是这里好像没有,应该怎么处理呢?<br><img width="450" height="195" src="/img/bVdazh5" alt="19fe989f40d52e23d1f56780466cef4.png" title="19fe989f40d52e23d1f56780466cef4.png"><br>打开docker desktop<br><img width="550" height="274" src="/img/bVdazh6" alt="image.png" title="image.png"></p><p>我想使用 docker desktop,查看过我的 win10 版本是符合要求的。</p>
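<p>wsl --install 默认安装的是 Ubuntu,官方在线列表里没有 CentOS;可以先看一下可选发行版,日常用 Ubuntu/Debian 就能满足 Docker Desktop 对 WSL2 后端的要求(示意):</p><pre><code class="shell">wsl --list --online              # 查看可在线安装的发行版(没有 CentOS)
wsl --install -d Ubuntu-22.04    # 安装指定发行版
wsl --set-default-version 2      # Docker Desktop 需要 WSL2
# 一定要用 CentOS 系的话,可以自行准备 rootfs 后用 wsl --import 导入,但对 Docker Desktop 来说不是必须的</code></pre>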
docker命令安装redis,报错?
https://segmentfault.com/q/1010000044419607
2023-11-25T21:25:08+08:00
2023-11-25T21:25:08+08:00
唯见长江天际流
https://segmentfault.com/u/changkong
0
<pre><code>docker volume create data_redis
docker run -v data_redis/redis.conf:/etc/redis/redis.conf \
-v data_redis/data:/data \
-d --name some-redis \
-p 6379:6379 \
redis:latest redis-server /etc/redis/redis.conf</code></pre><p>我先在宿主机创建了一个数据卷data_redis,然后映射/etc/redis/redis.conf到宿主机的data_redis目录,映射/data到宿主机的data_redis/data目录,执行命令后,data_redis卷倒是创建了,但是后面的语句报错<br>docker: Error response from daemon: create data_redis/redis.conf: "data_redis/redis.conf" includes invalid characters for a local volume name, only "a-zA-Z0-9" are allowed. If you intended to pass a host directory, use absolute path.<br>See 'docker run --help'.</p><p>为什么呢,我要怎么改?</p>
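<p>报错的直接原因:-v 左边写的 data_redis/redis.conf 既不是合法的卷名(带了 /),也不是绝对路径,所以被拒绝。具名卷只能整卷挂载,不能再接子路径;配置文件这类单个文件建议用宿主机绝对路径做 bind mount。下面是按这个思路改写的命令(宿主机路径为示例):</p><pre><code class="shell"># 单个配置文件用宿主机绝对路径挂载,数据目录直接挂之前创建的具名卷 data_redis
mkdir -p /opt/redis && cp redis.conf /opt/redis/redis.conf   # 宿主机路径为示例,按实际调整

docker run -d --name some-redis \
  -p 6379:6379 \
  -v /opt/redis/redis.conf:/etc/redis/redis.conf \
  -v data_redis:/data \
  redis:latest redis-server /etc/redis/redis.conf</code></pre>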
华为云的EulerOS 2.0系统怎么安装docker-ce?
https://segmentfault.com/q/1010000044408225
2023-11-21T18:50:41+08:00
2023-11-21T18:50:41+08:00
Jinyun
https://segmentfault.com/u/jinyun0927
0
<p>初次使用华为云,安装docker时发现很多问题。<br>网上的方案大多分为两种:<br>1、下载包,之后解压<br>2、使用阿里云的源下载<br>但我既然用了华为云,还用阿里云的源就感觉怪怪的,就提了工单问了华为云的客服,最后总结了一下,在这来个自问自答~</p>
docker 拉取最新版本,发现并不是最新的?
https://segmentfault.com/q/1010000044394227
2023-11-16T14:42:36+08:00
2023-11-16T14:42:36+08:00
大白兔
https://segmentfault.com/u/974908457
0
<p>docker pull 这个命令,我当时想的是拉取最新版本的镜像。<br><img width="560" height="118" src="/img/bVdaq7E" alt="image.png" title="image.png"></p><p>结果发现 tag 是 latest,但版本并不是最新的,Docker Hub 仓库里有更新的版本。这是不是说明我的 docker 拉取的仓库不对?该怎么修改或配置?我配置了阿里云镜像加速。<br>下面是配置的阿里云加速地址:<br><img width="502" height="35" src="/img/bVdaq8J" alt="image.png" title="image.png"></p>
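<p>一个可能的原因(推测):镜像加速器同步官方仓库会有滞后,latest 拉到的是加速器缓存里的旧版本。排查与规避思路(示意,镜像名和版本号为示例):</p><pre><code class="shell">docker images --digests nginx      # 对比本地镜像的 digest 与 Docker Hub 页面上的是否一致
docker pull nginx:1.25.3           # 不依赖 latest,显式指定需要的版本号
# 若确认是加速器滞后,可临时把 daemon.json 里的 registry-mirrors 去掉(或换一个源),重启 docker 后再 pull 对比</code></pre>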
若依项目怎么通过docker容器打包镜像并部署到微信云托管?
https://segmentfault.com/q/1010000044357181
2023-11-02T12:15:55+08:00
2023-11-02T12:15:55+08:00
Fick
https://segmentfault.com/u/fick
0
<p>我是前端开发,被部署的问题折腾了很久,在网上也没有找到有用的信息,所以发帖求助。</p><p>背景:我用若依前后端分离版(<a href="https://link.segmentfault.com/?enc=za4CppxBHdlcvivFxIHCjw%3D%3D.ZLrBom7eGbDChmW1uIrFkX3TQd9Fw9LGszypr40z14LnPt4%2Bf65%2BdQV%2BqodrOOu1" rel="nofollow">若依前后端分离-Vue</a>)做了一个简单后台,因为没有后端基础,所以想借助微信云托管来进行服务部署,目前这个云托管服务也是免费的。</p><p>微信云托管是自动部署的,我用若依部署老是失败,问了他们客服,客服说云托管是基于docker的,所以项目必须是以容器的方式部署,建议我先尝试本地镜像构建。然后我安装了相关工具在本地调试。</p><p>因为若依后台本身没有Dockerfile,所以我把微信云托管的示例(<a href="https://link.segmentfault.com/?enc=mdCWhDX6UrB0eTAdp9HwAw%3D%3D.Wq89ZX1XbX8m8r2zf6XPNUpjGsWQ5E4Z4MTxwbgSu%2FW%2BSOHBB2MQ26jDpMdxKh5rtfi3nUUG6xZon2HgthF0CZDOfYX33nv7DIl2kDyQlmw%3D" rel="nofollow">部署模板</a>)项目中的Dockerfile和setting.xml直接拷贝到了若依的根目录,然后按照微信提供的调试文档(<a href="https://link.segmentfault.com/?enc=2SZRNQrXbziZHyVYWbwKMQ%3D%3D.VYnY4OO%2BUwfOPYO6UAsnMiaTVkU0zIHNfjdEC05AniHTrRJuUE13sPUpgvBiPrcRDyrHRpHtpMpaPFeObgC0dsFi%2F1hpf78mrNZsty5Q46I%3D" rel="nofollow">调试文档</a>)来调试若依的这个项目,但是在构建的时候总是报这个错,<img width="723" height="325" src="/img/bVdahur" alt="image.png" title="image.png">目前卡着这里,不知道怎么处理了,希望有大佬能够提供帮助,非常感谢!</p><p>这是项目根目录:<br><img width="295" height="789" src="/img/bVdahus" alt="image.png" title="image.png"><br>这是Dockerfile,从部署模板copy来:</p><pre><code># 二开推荐阅读[如何提高项目构建效率](https://developers.weixin.qq.com/miniprogram/dev/wxcloudrun/src/scene/build/speed.html)
# 选择构建用基础镜像。如需更换,请到[dockerhub官方仓库](https://hub.docker.com/_/java?tab=tags)自行选择后替换。
FROM maven:3.6.0-jdk-8-slim as build
# 指定构建过程中的工作目录
WORKDIR /
# 将src(原本是src,这里改为了ruoyi-admin)目录下所有文件,拷贝到工作目录中src目录下(.gitignore/.dockerignore中文件除外)
COPY ruoyi-admin /app/src
# 将pom.xml文件,拷贝到工作目录下
COPY settings.xml pom.xml /app/
# 执行代码编译命令
# 自定义settings.xml, 选用国内镜像源以提高下载速度
RUN mvn -s settings.xml -f /app/pom.xml clean package
# 选择运行时基础镜像
FROM alpine:3.13
# 安装依赖包,如需其他依赖包,请到alpine依赖包管理(https://pkgs.alpinelinux.org/packages?name=php8*imagick*&branch=v3.13)查找。
# 选用国内镜像源以提高下载速度
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tencent.com/g' /etc/apk/repositories \
&& apk add --update --no-cache openjdk8-jre-base \
&& rm -f /var/cache/apk/*
# 容器默认时区为UTC,如需使用上海时间请启用以下时区设置命令
# RUN apk add tzdata && cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo Asia/Shanghai > /etc/timezone
# 使用 HTTPS 协议访问容器云调用证书安装
RUN apk add ca-certificates
# 指定运行时的工作目录
WORKDIR /app
# 将构建产物jar包拷贝到运行时目录中
COPY --from=build /app/target/*.jar .
# 暴露端口
# 此处端口必须与「服务设置」-「流水线」以及「手动上传代码包」部署时填写的端口一致,否则会部署失败。
EXPOSE 8080
# 执行启动命令.
# 写多行独立的CMD命令是错误写法!只有最后一行CMD命令会被执行,之前的都会被忽略,导致业务报错。
# 请参考[Docker官方文档之CMD命令](https://docs.docker.com/engine/reference/builder/#cmd)
CMD ["java", "-jar", "/app/springboot-wxcloudrun-1.0.jar"]
</code></pre><p>setting.xml没有任何改动。</p><p>按照@fefe提供的方法有了进展,不过后面还有新的问题,截图补充:<br><img width="723" height="477" src="/img/bVdahv2" alt="image.png" title="image.png"><br><img width="723" height="405" src="/img/bVdahv3" alt="image.png" title="image.png"><br><img width="723" height="427" src="/img/bVdahwb" alt="image.png" title="image.png"><br>新的错误好像是说找不到pom.xml,不过源码的每个木块是有对应文件的,是否是Dockerfile里的配置不够呢?</p><p>补充:@汝何不上九霄,镜像已经启动成功:<br><img width="723" height="439" src="/img/bVdah7d" alt="image.png" title="image.png"><br>不过我试着接口连接27082,27081,8080,都无法访问接口。</p><p>我直接通过这里启动,可以通过8080端口访问接口,但是这里启动好像和在不在容器没有关系,因为,日志产生新的日志,请问我应该怎么正确访问接口。<br><img width="723" height="398" src="/img/bVdah7f" alt="image.png" title="image.png"></p><p>配置文件截图:<br>Dockerfile.development截图:<br><img width="723" height="204" src="/img/bVdaicp" alt="image.png" title="image.png"><br>docker-compose.yml截图:<br><img width="723" height="435" src="/img/bVdaicr" alt="image.png" title="image.png"></p><p>修改Dockerfile.developent后构建镜像:<br><img width="723" height="442" src="/img/bVdairH" alt="image.png" title="image.png"><br>启动镜像:<br><img width="723" height="439" src="/img/bVdairT" alt="image.png" title="image.png"><br>docker客户端:<br><img width="723" height="419" src="/img/bVdairW" alt="image.png" title="image.png"></p><p>接口请求用8080端口,请求超时;请求用端口51747返回404</p><p>补充信息:<br><img width="723" height="383" src="/img/bVdaitV" alt="image.png" title="image.png"></p><p>start之前是这样的:<br><img width="723" height="408" src="/img/bVdaiwv" alt="image.png" title="image.png"><br><img width="723" height="255" src="/img/bVdaiwx" alt="image.png" title="image.png"><br><img width="723" height="267" src="/img/bVdaiwy" alt="image.png" title="image.png"><br><img width="723" height="398" src="/img/bVdaiwJ" alt="image.png" title="image.png"></p><p>:80的端口没有搜索到,全局搜索只有这个文件有8080:<br><img width="440" height="605" src="/img/bVdaiwO" alt="image.png" title="image.png"></p><p>这是我start的过程。然后有若依有一个对应的前端vue项目,我启动了,访问地址是用8080的端口:<br><img width="677" height="349" src="/img/bVdaiwY" alt="image.png" title="image.png"></p><p>访问前端项目,接口会报超时的错误。大概是这样一个过程。</p>
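<p>一个推测的方向:若依是多模块 Maven 工程,示例 Dockerfile 只 COPY 了 ruoyi-admin 和根 pom.xml,子模块互相依赖时就会出现找不到 pom/模块的错误;而且最终产物也不叫 springboot-wxcloudrun-1.0.jar。下面是按"整个工程一起构建"的思路改写的示意(jar 名、端口以实际构建产物和 application.yml 为准):</p><pre><code class="shell">cat > Dockerfile <<'EOF'
FROM maven:3.6.0-jdk-8-slim AS build
WORKDIR /app
# 多模块工程:把整个仓库拷进来一起构建,而不是只拷 ruoyi-admin
COPY . .
RUN mvn -s settings.xml clean package -DskipTests

FROM alpine:3.13
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tencent.com/g' /etc/apk/repositories \
    && apk add --update --no-cache openjdk8-jre-base ca-certificates \
    && rm -f /var/cache/apk/*
WORKDIR /app
# jar 名以 ruoyi-admin/target 下的实际产物为准
COPY --from=build /app/ruoyi-admin/target/*.jar ./app.jar
EXPOSE 8080
CMD ["java", "-jar", "/app/app.jar"]
EOF</code></pre>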
统信UOS上 docker无法启动?
https://segmentfault.com/q/1010000044344004
2023-10-28T15:18:06+08:00
2023-10-28T15:18:06+08:00
sswhsz
https://segmentfault.com/u/sswhsz
0
<p>我是按以下方式操作的:</p><p>1、查看操作系统版本<br>hostnamectl,查看到信息:</p><pre><code> Operating System: UnionTech OS Desktop 20 Pro
Kernel: Linux 4.19.71-arm64-desktop
Architecture: arm64</code></pre><p>2、查看UOS底层debian版本:<br>cat /etc/debian_version<br><code>看到 基于debian 10.5</code><br>debian 10.x版本 ,代号为:buster (相关开源软件下载时,如果有对应系统和版本,可以选 debian buster 最为接近)</p><p>3、编辑 /etc/apt/source.list,添加docker的apt源<br>添加下面一行:</p><pre><code>deb [arch=arm64] https://download.docker.com/linux/debian buster stable</code></pre><p>4、添加docker官方证书 (解决此问题:apt update 会失败----由于没有公钥,无法验证docker相关的签名)</p><pre><code>curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -</code></pre><p>5、更新apt源,然后安装docker</p><pre><code>apt update
apt install docker-ce docker-ce-cli containerd.io</code></pre><p>6、检查下 docker版本:<br>docker version命令提示错误:“Cannot connect to the Docker daemon”</p><p>7、执行systemctl restart docker报错:</p><pre><code>Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.</code></pre><p>执行“systemctl status docker.service”,似乎没有什么有价值的错误。</p><pre><code>● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2023-10-28 15:12:26 CST; 42s ago
Docs: https://docs.docker.com
Process: 14988 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 14988 (code=exited, status=1/FAILURE)
10月 28 15:12:26 uos-ZJ0063 systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
10月 28 15:12:26 uos-ZJ0063 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
10月 28 15:12:26 uos-ZJ0063 systemd[1]: Stopped Docker Application Container Engine.
10月 28 15:12:26 uos-ZJ0063 systemd[1]: docker.service: Start request repeated too quickly.
10月 28 15:12:26 uos-ZJ0063 systemd[1]: docker.service: Failed with result 'exit-code'.
10月 28 15:12:26 uos-ZJ0063 systemd[1]: Failed to start Docker Application Container Engine.</code></pre><p>继续执行:“journalctl -xe”<br>发现有错误:</p><pre><code>10月 28 14:53:06 uos-ZJ0063 dockerd[10548]: time="2023-10-28T14:53:06.784647548+08:00" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.71-arm64-desktop/modules.dep.bin'\nmodprobe: WARNING: Module bridge not found in directory /lib/modules/4.19.71-arm64-desktop\nmodprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.19.71-arm64-desktop/modules.dep.bin'\nmodprobe: WARNING: Module br_netfilter not found in directory /lib/modules/4.19.71-arm64-desktop\n, error: exit status 1"
10月 28 14:53:06 uos-ZJ0063 dockerd[10548]: time="2023-10-28T14:53:06.826205844+08:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
10月 28 14:53:06 uos-ZJ0063 dockerd[10548]: time="2023-10-28T14:53:06.842479493+08:00" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
10月 28 14:53:06 uos-ZJ0063 dockerd[10548]: time="2023-10-28T14:53:06.842976863+08:00" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
10月 28 14:53:06 uos-ZJ0063 dockerd[10548]: failed to start daemon: Error initializing network controller: error creating default "bridge" network: Failed to program NAT chain: Failed to inject DOCKER in PREROUTING chain: iptables failed: iptables --wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER: iptables v1.8.2 (legacy): Couldn't load match `addrtype':No such file or directory
10月 28 14:53:06 uos-ZJ0063 dockerd[10548]: Try `iptables -h' or 'iptables --help' for more information.</code></pre><p>网上查了一下,说是安装 “apt install bridge-utils”<br>但执行完,重启docker,还是一样的错误。</p><p>有热心的思友知道是什么问题吗?多谢解答!</p>
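<p>从日志看,更像是当前运行的内核(4.19.71-arm64-desktop)缺少配套的内核模块目录,导致 bridge / br_netfilter 和 iptables 的 addrtype 匹配都加载不了,docker0 网桥自然建不起来。可以先做如下确认(仅排查示意):</p><pre><code class="shell">uname -r                                  # 当前内核版本
ls /lib/modules/                          # 看是否存在与 uname -r 同名的目录(日志提示该目录缺失)
lsmod | grep -E 'bridge|br_netfilter'     # 相关内核模块是否已加载
# 若目录确实不存在,一般需要安装与当前内核版本匹配的内核模块包,或重启到带完整模块的内核后再启动 docker</code></pre>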
在Kubernetes大家是如何对requests进行优化的?
https://segmentfault.com/q/1010000044341647
2023-10-27T15:11:52+08:00
2023-10-27T15:11:52+08:00
inight
https://segmentfault.com/u/inight
0
<p>总是说Kubernetes能够提高资源利用率,在我实际使用过程中,实际的usage/request/limit三个值与node的总资源比例差距很多;大概是</p><table><thead><tr><th> </th><th>使用率</th><th>request</th><th>limit</th></tr></thead><tbody><tr><td>Node-1</td><td>cpu: 20%, mem 30%</td><td>cpu: 83%, mem 90%</td><td>cpu: 210%, mem 260%</td></tr><tr><td>Node-2</td><td>cpu: 18%, mem 32%</td><td>cpu: 76%, mem 87%</td><td>cpu: 310%, mem 290%</td></tr><tr><td>Node-3</td><td>cpu: 34%, mem 26%</td><td>cpu: 85%, mem 80%</td><td>cpu: 400%, mem 320%</td></tr></tbody></table><p>即出现实际上资源占用并无多少,但是node资源已经被分配完毕导致node无法继续分配资源。</p><p>request设计的太小会太过频繁触发横向伸缩,导致客户端响应有时候会丢失,例如缩容时候有几个请求较慢的被强制中断。</p><p>request设计的太大会导致低峰期会导致资源浪费,并且扩容起来也容易导致NODE资源因为空间不足分配不了。</p><ol><li>request该如何设计找到 预设计的资源与我们实际中的使用率的平衡呢?</li><li>可以看到表格中limit的资源超卖现象非常严重,高峰期很容易导致服务群的雪崩,limit又该超卖多少合适呢?</li></ol>
Docker同样的镜像,为什么容器占有内存不一样?
https://segmentfault.com/q/1010000044222007
2023-09-15T15:29:28+08:00
2023-09-15T15:29:28+08:00
可乐.
https://segmentfault.com/u/kele_6065335c4c816
-2
<p>有个问题,我在本地虚拟机和云服务器 使用docker 启动同一个镜像,但本地和远程容器的内存占用差别很大</p><p>我确保使用的都是同一个镜像</p><p>这个是本地的linux中容器内存占用情况 1.4G<br><img width="723" height="190" src="/img/bVc9Ijk" alt="d5edc57f9304277dddee01232700013.png" title="d5edc57f9304277dddee01232700013.png"></p><p>这是远程的服务器容器内存占用情况,远程的这个容器应用并没有其他操作,跟本地相比整整多出快1个G,表示有点不理解 这是为什么 有没有大佬解释下 <br><img width="723" height="217" src="/img/bVc9Ijt" alt="8b1d139369aee15caced7bf5e5a3cd5.png" title="8b1d139369aee15caced7bf5e5a3cd5.png"></p>
k8s 如何设置「尽量反亲和性」?
https://segmentfault.com/q/1010000044337888
2023-10-26T15:32:46+08:00
2023-10-26T15:32:46+08:00
ponponon
https://segmentfault.com/u/ponponon
0
<p>k8s 的反亲和性非常的愚蠢</p><pre><code class="yaml">affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution: #设置调度策略。
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- imdb-match-api
topologyKey: kubernetes.io/hostname</code></pre><p>这是强制(required)反亲和的。</p><p>比如我有 10 个 node,需要部署 12 个一样的 pod。</p><p>我希望其中 8 个节点各部署一个 pod,另外 2 个节点各部署两个 pod。</p><p>但是 k8s 设置了这种反亲和性,就最多只能部署 10 个 pod 了,另外两个 pod 就永远是 pending 状态。</p><p>我只是希望尽量部署在不同机器上,避免一台机器挂了导致这些 pod 被一锅端,但不强求一个 node 只能部署一个 pod。</p><p>我该怎么做呢?</p>
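<p>"尽量打散、放不下也要能调度"对应的是 preferred(软)反亲和,或者 topologySpreadConstraints 配合 whenUnsatisfiable: ScheduleAnyway;把 required 换成带权重的 preferred 即可,12 个 pod 都能调度,只是调度器会尽量把它们分散。示意如下(label 沿用题目,片段位于 Pod 模板的 spec 下):</p><pre><code class="shell">cat > soft-anti-affinity.yaml <<'EOF'
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:   # 软约束:优先打散,不满足时仍可调度
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - imdb-match-api
        topologyKey: kubernetes.io/hostname
EOF</code></pre>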
请教下,nginx 能运行,但却不工作?
https://segmentfault.com/q/1010000044219463
2023-09-14T21:52:41+08:00
2023-09-14T21:52:41+08:00
茫然的绿豆
https://segmentfault.com/u/mangrandelvdou
0
<p>我在 centos8 stream 上通过 <code>yum install nginx</code> 安装了 nginx,通过命令 <code>sudo netstat -plutn | grep nginx</code> 检查,nginx 是正常运行的</p><pre><code>tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 9962/nginx: master</code></pre><p>配置文件 <code>nginx.conf</code></p><pre><code>user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
}</code></pre><p>而 <code>conf.d</code> 中的唯一配置文件 <code>xc.conf</code></p><pre><code>server {
listen 80;
return 403;
}</code></pre><p>即通过 ip 访问,则得到 403 的返回。</p><p>现在遇到的问题是,通过 ip 访问,并不能得到 403 的返回。而访问 ip:8080 能正常访问到一个通过 docker 拉起来的应用。 </p><p>感觉 nginx 虽然是运行了,但并没有正常工作。</p><p>PS, 所有的端口都已经打开</p><p>请问这个该怎么排查呢?多谢。</p>
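<p>可以先确认 nginx 是否真的加载了这份配置,再区分问题出在本机还是外部链路(云安全组、其他服务占用 80 等);排查命令示意:</p><pre><code class="shell">nginx -t                          # 语法检查,确认 conf.d/xc.conf 被包含进来
nginx -s reload                   # 改完配置必须 reload 才会生效
curl -v http://127.0.0.1/         # 在服务器本机访问 80:若此处已返回 403,说明 nginx 正常,问题在外部链路(如安全组)
ss -plnt | grep -E ':80 |:8080'   # 确认 80/8080 分别由哪个进程监听</code></pre>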
为什么刚刚拉取的 docker 镜像的 digest 镜像和 docker hub 上的不一致?
https://segmentfault.com/q/1010000044296562
2023-10-12T10:41:37+08:00
2023-10-12T10:41:37+08:00
ponponon
https://segmentfault.com/u/ponponon
0
<p><a href="https://link.segmentfault.com/?enc=GUVYvxM2lLpYgrAhLGSRmQ%3D%3D.4opjRkZSddMv6zGXEIu%2FYO8zdIl18qDqJKpQvVWXu8yy7gzrMM4y58I4NrHKEjdI%2FMnuBQlEHomssMtc5L5DvA%3D%3D" rel="nofollow">https://hub.docker.com/_/python/tags?page=1&name=3.10-bullseye</a></p><p><img width="723" height="245" src="/img/bVc91Iv" alt="图片.png" title="图片.png"></p><p>上面是 docker hub 中,python:3.10-bullseye 的 digest 信息</p><p>下面是刚刚执行 docker pull python:3.10-bullseye 输出的 digest 信息</p><pre><code class="shell">╰─➤ docker pull python:3.10-bullseye
3.10-bullseye: Pulling from library/python
ddf874abf16c: Pull complete
5c1459d3ab8b: Pull complete
29ab00e7798a: Pull complete
7883a473306c: Pull complete
c3ab175b762c: Pull complete
9f11cb399571: Pull complete
cfb416449119: Pull complete
1365a3c2b71e: Pull complete
Digest: sha256:e917e3e93525f97fc343135efc2b3295a11b02450ff8d4eeaff25323c73caf6b
Status: Downloaded newer image for python:3.10-bullseye
docker.io/library/python:3.10-bullseye
</code></pre><p>可以看到,本地拉取的 Digest 是 e917e3e93525f97fc343135efc2b3295a11b02450ff8d4eeaff25323c73caf6b</p>
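<p>一个可能的解释(推测):Docker Hub 页面上对某个 tag 展示的多是各架构子 manifest 的 digest(或多架构 index 的 digest),而 docker pull 打印的是本次从 registry(包括你配置的镜像加速器)解析到的 manifest 摘要,两者未必是同一个对象;加速器缓存滞后也会造成差异。可以这样对比(示意):</p><pre><code class="shell">docker manifest inspect python:3.10-bullseye   # 查看该 tag 对应的 manifest list 以及各架构的 digest(若提示需要 experimental,可改用 docker buildx imagetools inspect)
docker images --digests python                  # 查看本地记录的 digest
# 若两边仍对不上,可临时移除 daemon.json 中的 registry-mirrors,直接从 Docker Hub 拉取后再对比</code></pre>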