A binary produced by go build was accidentally committed to the git repository and needs to be purged from the history.

Use the BFG tool to purge it.

Download the tool: https://rtyley.github.io/bfg-repo-cleaner/#download

# clone the repo; note the --mirror flag
git clone https://gogs.***.cn/sg/bs.git --mirror
# purge files named rmi from the history
java -jar bfg-1.13.0.jar --delete-files rmi ../work/sg/bs.git

# run inside the mirrored repo to expire old refs and drop the unreferenced objects
git reflog expire --expire=now --all && git gc --prune=now --aggressive

git push

Scenario

Under high concurrency with many writes and few reads, every write to a map must take a lock, which drastically hurts performance.

Solutions

Solution 1 | Split the single map into multiple shards.

package main

import (
    "sync"
)

type driverInfo struct {
    id  int
    age int
}

// company shards drivers across count buckets, each guarded by its own
// RWMutex, so writes to different shards never contend on a single lock.
type company struct {
    lock   []sync.RWMutex
    driver map[int]map[int]*driverInfo
    count  int
}

func newCompany(count int) *company {
    lock := make([]sync.RWMutex, count)
    driver := make(map[int]map[int]*driverInfo, count)
    for i := 0; i < count; i++ {
        driver[i] = make(map[int]*driverInfo)
    }

    return &company{
        lock:   lock,
        count:  count,
        driver: driver,
    }
}

// getDriver takes only the read lock of the shard the id maps to.
func (c *company) getDriver(id int) *driverInfo {
    i := id % c.count
    c.lock[i].RLock()
    defer c.lock[i].RUnlock()
    return c.driver[i][id]
}

// setDriver takes only the write lock of the shard the id maps to.
func (c *company) setDriver(driver *driverInfo) {
    i := driver.id % c.count
    c.lock[i].Lock()
    defer c.lock[i].Unlock()
    c.driver[i][driver.id] = driver
}

Solution 2 | From shards down to fields

Similar to Solution 1, but the lock granularity shrinks further, down to individual records or fields; see the sketch below. The potential problem is that with a very large data set, the locks themselves consume a lot of memory.
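
A minimal sketch of this idea, assuming each record embeds its own sync.RWMutex (driverRecord and its methods are hypothetical names), so the lock granularity drops to a single entry:

package main

import (
    "fmt"
    "sync"
)

// driverRecord embeds its own lock: one RWMutex per record instead of
// one per shard, so writers to different records never contend. With
// many records, the per-record locks cost real memory.
type driverRecord struct {
    mu  sync.RWMutex
    id  int
    age int
}

func (d *driverRecord) setAge(age int) {
    d.mu.Lock()
    defer d.mu.Unlock()
    d.age = age
}

func (d *driverRecord) getAge() int {
    d.mu.RLock()
    defer d.mu.RUnlock()
    return d.age
}

func main() {
    d := &driverRecord{id: 1}
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(age int) {
            defer wg.Done()
            d.setAge(age)
        }(20 + i)
    }
    wg.Wait()
    fmt.Println(d.getAge())
}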

Solution 3 | No locks; validate data with a checksum signature

Under concurrency, unsynchronized writes to the same value cannot guarantee integrity: when val1 and val2 are written for the same key, each write may land only partially, leaving half of val1 plus half of val2 in the store. To guarantee correctness, append a crc32 checksum to the end of the data as its signature. When reading the data back, verify the signature; if it does not match, discard the value and re-fetch it from the database. A sketch follows.
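
A minimal sketch of the checksum approach, assuming values are stored as raw byte slices (appendChecksum and verifyChecksum are hypothetical helpers) and using the standard library's hash/crc32:

package main

import (
    "encoding/binary"
    "fmt"
    "hash/crc32"
)

// appendChecksum returns val with its crc32 (IEEE) appended in the
// trailing 4 bytes.
func appendChecksum(val []byte) []byte {
    buf := make([]byte, len(val)+4)
    copy(buf, val)
    binary.LittleEndian.PutUint32(buf[len(val):], crc32.ChecksumIEEE(val))
    return buf
}

// verifyChecksum splits off the trailing crc32 and reports whether it
// matches; on a mismatch the caller should discard the value and
// re-fetch it from the database.
func verifyChecksum(buf []byte) ([]byte, bool) {
    if len(buf) < 4 {
        return nil, false
    }
    val := buf[:len(buf)-4]
    sum := binary.LittleEndian.Uint32(buf[len(buf)-4:])
    return val, crc32.ChecksumIEEE(val) == sum
}

func main() {
    stored := appendChecksum([]byte("val1val2"))
    if val, ok := verifyChecksum(stored); ok {
        fmt.Printf("ok: %s\n", val)
    }
}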


I recently set up a server. Spec list: CPU: e2660, memory: 16*2 ECC.
Using Proxmox I installed several virtual machines and used three of them (4c/8g each) to build a 3-node k8s cluster. There is no public IP (more precisely, I cannot get into the telecom modem to change its configuration), so the intranet cannot be reached from the public internet directly; frp is used for NAT traversal to provide public access.
The goal: expose services running on the intranet k8s cluster to the public internet.

NAT traversal configuration

Traffic path

  • Point the domain at the server that has a public IP.
  • Forward traffic on ports 80 and 443 to port 7080, where frps listens for HTTP; traffic then flows frps -> frpc into the intranet host.
  • On the intranet host, route all traffic for the configured domains to local port 80.
  • Reverse-proxy that traffic to the k8s cluster.

Configuration

nginx configuration on the public server


map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
server {
    listen 80;
    server_name *.ltinyho.top ltinyho.top;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/ltinyho.top/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/ltinyho.top/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    location / {
        proxy_pass http://localhost:7080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # WebSocket support
        proxy_set_header Connection $connection_upgrade; # WebSocket support
    }
}

frps configuration

bind_addr = 0.0.0.0
bind_port = 7000
bind_udp_port = 7001
kcp_bind_port = 7000
vhost_http_port = 7080

nginx configuration on the intranet host

upstream k8s {
    server 192.168.199.111:80;
    server 192.168.199.112:80;
    server 192.168.199.113:80;
}
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
server {
    listen 80;
    server_name *.ltinyho.top ltinyho.top;
    location / {
        proxy_pass http://k8s;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

frpc configuration on the intranet host

[common]
server_addr = 47.98.137.255
server_port = 7000

[lt-http]
type = http
local_ip = 127.0.0.1
local_port = 80
custom_domains = *.ltinyho.top,ltinyho.top
remote_port = 7080

The HTTPS certificate was obtained from Let's Encrypt (note that a wildcard certificate such as *.ltinyho.top requires the DNS-01 challenge).

nginx WebSocket configuration

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
proxy_http_version 1.1; # WebSocket proxying requires HTTP/1.1
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;

nginx load-balancing configuration

upstream k8s {
    server 192.168.199.111:80;
    server 192.168.199.112:80;
    server 192.168.199.113:80;
}
server {
    listen 80;
    server_name *.ltinyho.top ltinyho.top;
    location / {
        proxy_pass http://k8s;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}


HTTP/1.1 enables persistent connections (keep-alive) by default, so TCP connections are reused. How do you take advantage of keep-alive when making HTTP requests in Go?

package main

import (
    "net"
    "net/http"
    "time"
)

var (
    QHTTPTransport http.RoundTripper = &http.Transport{
        Proxy: http.ProxyFromEnvironment,
        DialContext: (&net.Dialer{
            Timeout:   30 * time.Second,
            KeepAlive: 30 * time.Second, // TCP keepalive period for new connections
            DualStack: true,
        }).DialContext,
        MaxIdleConns:          100,
        MaxIdleConnsPerHost:   5, // idle connections kept per host for reuse
        IdleConnTimeout:       90 * time.Second,
        TLSHandshakeTimeout:   10 * time.Second,
        ExpectContinueTimeout: 1 * time.Second,
    }

    QHTTPClient = &http.Client{
        Transport: QHTTPTransport,
    }
)

When issuing HTTP requests, use QHTTPClient (see the sketch below).
On the server side, Go's http.ListenAndServe(":8080", nil) enables keep-alive by default.
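
A minimal usage sketch (fetch is a hypothetical helper added to the file above). Note that the response body must be read to completion and closed; otherwise the connection is not returned to the transport's idle pool for reuse:

// fetch issues a GET through QHTTPClient; draining and closing the
// response body lets the underlying TCP connection be reused.
func fetch(url string) ([]byte, error) {
    resp, err := QHTTPClient.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return io.ReadAll(resp.Body) // requires the "io" import (Go 1.16+)
}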

man 7 tcp shows the kernel defaults: tcp_keepalive_time is 7200 seconds and tcp_keepalive_intvl is 75 seconds.
To customize (for example, disabling keep-alive on the server entirely):

srv := http.Server{
    Handler: mux,
}
srv.SetKeepAlivesEnabled(false)

Or customize the listener:

package main

import (
    "net"
    "net/http"
    "time"
)

type tcpKeepAliveListener struct {
    *net.TCPListener
}

// Accept enables TCP keepalive with a custom period on every accepted connection.
func (ln tcpKeepAliveListener) Accept() (net.Conn, error) {
    tc, err := ln.AcceptTCP()
    if err != nil {
        return nil, err
    }
    tc.SetKeepAlive(true)
    tc.SetKeepAlivePeriod(time.Second * 10) // set the keepalive period
    return tc, nil
}

func main() {
    srv := &http.Server{Addr: ":8080"}
    ln, err := net.Listen("tcp", srv.Addr)
    if err != nil {
        panic(err)
    }
    srv.Serve(tcpKeepAliveListener{ln.(*net.TCPListener)})
}

Customizing keep-alive with the gin framework:

router := gin.Default()

s := &http.Server{
    Addr:    ":8080",
    Handler: router, // < here Gin is attached to the HTTP server
}
s.SetKeepAlivesEnabled(false)
s.ListenAndServe()