4.8.0 single-table list query: performance before vs. after optimization #315
The gain comes mainly from two changes: removing the `for` loop at line 878 of `AbstractObjectParser.onSQLExecute`, which put 100 rows one at a time and cost 3ms–8ms with logging enabled (measured on a MacBook Pro, 13-inch, early 2015: 2.9 GHz Intel Core i5, 16 GB 1867 MHz DDR3, macOS 10.11.6); and replacing 100 individual `add` calls with a single `addAll` in `AbstractParser.onArrayParse`, which avoids the unnecessary repeated bounds checks and capacity growth of a frequently resized `ArrayList`.
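The `addAll` change can be illustrated with a minimal sketch (the class and method names below are illustrative, not the actual APIJSON code): copying a known-size batch with one pre-sized `addAll` avoids incremental growth, whereas 100 individual `add` calls each go through a capacity check and may trigger several internal array copies.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchAddSketch {
    // Slower pattern: one add() per row; the backing array may be
    // reallocated and copied several times as the list grows.
    static List<Object> addOneByOne(List<Object> rows) {
        List<Object> target = new ArrayList<>(); // default capacity
        for (Object row : rows) {
            target.add(row); // capacity check on every call
        }
        return target;
    }

    // Faster pattern: pre-size once, then copy the whole batch in one call.
    static List<Object> addInOneBatch(List<Object> rows) {
        List<Object> target = new ArrayList<>(rows.size()); // sized up front
        target.addAll(rows); // one bulk copy, no incremental growth
        return target;
    }

    public static void main(String[] args) {
        List<Object> rows = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            rows.add(i);
        }
        // Both produce the same contents; only the allocation pattern differs.
        System.out.println(addOneByOne(rows).equals(addInOneBatch(rows))); // true
    }
}
```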
Performance test results from a Tencent CSIG project

1. Test environment

1.1 Application machine: Tencent Cloud TKE Docker pod, 4 cores / 16 GB.

1.2 DB machine: Tencent Cloud, 8 cores, 32,000 MB RAM, 805 GB storage.

1.3 Test table DDL and data volume (MySQL 5.7):

```sql
CREATE TABLE `t_xxxx_xxxx` (
  `x_id` bigint(11) unsigned NOT NULL AUTO_INCREMENT,
  `x_xxxx_id` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'xID',
  `x_xid` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'xxID',
  `x_xx_id` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'xID',
  `x_xxxx_id` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'xxxID',
  `x_xxxxx_id` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'xxID',
  `x_xxxx_id` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'xxID',
  `x_uin` bigint(20) unsigned NOT NULL DEFAULT '0' COMMENT 'xxuin',
  `x_send_time` datetime DEFAULT NULL COMMENT 'message push time',
  `x_xxxx_result` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'xx result',
  `x_xxx_xxxx_result` varchar(255) DEFAULT '' COMMENT 'xx result',
  `x_result` tinyint(4) unsigned NOT NULL DEFAULT '0' COMMENT '0 wrong, 1 correct, 2 not set',
  `x_create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time / landing time',
  `x_credit` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'xx count',
  `x_xxxxxx_xxx_id` varchar(32) NOT NULL COMMENT 'common param: reporting application',
  `x_xxxxxx_source` varchar(32) NOT NULL COMMENT 'common param: reporting service name',
  `x_xxxxxx_server` varchar(32) NOT NULL COMMENT 'common param: reporting server IP',
  `x_xxxxxx_event_time` datetime NOT NULL COMMENT 'common param: report time',
  `x_xxxxxx_client` varchar(32) NOT NULL COMMENT 'common param: client IP',
  `x_xxxxxx_trace_id` varchar(64) NOT NULL COMMENT 'common param',
  `x_xxxxxx_sdk` varchar(16) NOT NULL COMMENT 'common param: SDK version',
  PRIMARY KEY (`x_id`, `x_uin`),
  UNIQUE KEY `udx_uid_xxxxid` (`x_uin`, `x_xxxx_id`),
  KEY `idx_xid` (`x_xid`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4 COMMENT = 'xx event table';
```
1.4 Log settings:

```java
Log.DEBUG = false;
AbstractParser.IS_PRINT_REQUEST_STRING_LOG = false;
AbstractParser.IS_PRINT_REQUEST_ENDTIME_LOG = false;
AbstractParser.IS_PRINT_BIG_LOG = false;
```

2. Test script (using the `Table[]: {Table: {}}` request format)

Timing method: each request is wrapped in `time` inside the script `apitest.sh`:

```shell
#!/bin/bash
printf -- '--------------------------\nStarting tests without a WHERE clause\n'
time curl -X POST -H 'Content-Type:application/json' 'http://x.xxx.xx.xxx:xxxx/get' -d '{"T_xxxx_xxxx[]":{"count":100000, "T_xxxx_xxxx":{"@column":"x_uin,x_send_time,x_xxxx_id,x_xid,x_xx_id,x_xxxxx_id,x_xxxx_result,x_result,x_credit"}}}' > 10w_no_where.log
time curl -X POST -H 'Content-Type:application/json' 'http://x.xxx.xx.xxx:xxxx/get' -d '{"T_xxxx_xxxx[]":{"count":200000, "T_xxxx_xxxx":{"@column":"x_uin,x_send_time,x_xxxx_id,x_xid,x_xx_id,x_xxxxx_id,x_xxxx_result,x_result,x_credit"}}}' > 20w_no_where.log
time curl -X POST -H 'Content-Type:application/json' 'http://x.xxx.xx.xxx:xxxx/get' -d '{"T_xxxx_xxxx[]":{"count":500000, "T_xxxx_xxxx":{"@column":"x_uin,x_send_time,x_xxxx_id,x_xid,x_xx_id,x_xxxxx_id,x_xxxx_result,x_result,x_credit"}}}' > 50w_no_where.log
time curl -X POST -H 'Content-Type:application/json' 'http://x.xxx.xx.xxx:xxxx/get' -d '{"T_xxxx_xxxx[]":{"count":800000, "T_xxxx_xxxx":{"@column":"x_uin,x_send_time,x_xxxx_id,x_xid,x_xx_id,x_xxxxx_id,x_xxxx_result,x_result,x_credit"}}}' > 80w_no_where.log
time curl -X POST -H 'Content-Type:application/json' 'http://x.xxx.xx.xxx:xxxx/get' -d '{"T_xxxx_xxxx[]":{"count":1000000, "T_xxxx_xxxx":{"@column":"x_uin,x_send_time,x_xxxx_id,x_xid,x_xx_id,x_xxxxx_id,x_xxxx_result,x_result,x_credit"}}}' > 100w_no_where.log
printf -- '--------------------------\nStarting tests with a WHERE clause\n'
time curl -X POST -H 'Content-Type:application/json' 'http://x.xxx.xx.xxx:xxxx/get' -d '{"T_xxxx_xxxx[]":{"count":100000, "T_xxxx_xxxx":{"x_xid{}":[xxxx36,xxxx38],"@column":"x_uin,x_send_time,x_xxxx_id,x_xid,x_xx_id,x_xxxxx_id,x_xxxx_result,x_result,x_credit"}}}' > 10w_with_where.log
time curl -X POST -H 'Content-Type:application/json' 'http://x.xxx.xx.xxx:xxxx/get' -d '{"T_xxxx_xxxx[]":{"count":200000, "T_xxxx_xxxx":{"x_xid{}":[xxxx36,xxxx38],"@column":"x_uin,x_send_time,x_xxxx_id,x_xid,x_xx_id,x_xxxxx_id,x_xxxx_result,x_result,x_credit"}}}' > 20w_with_where.log
time curl -X POST -H 'Content-Type:application/json' 'http://x.xxx.xx.xxx:xxxx/get' -d '{"T_xxxx_xxxx[]":{"count":500000, "T_xxxx_xxxx":{"x_xid{}":[xxxx36,xxxx38],"@column":"x_uin,x_send_time,x_xxxx_id,x_xid,x_xx_id,x_xxxxx_id,x_xxxx_result,x_result,x_credit"}}}' > 50w_with_where.log
time curl -X POST -H 'Content-Type:application/json' 'http://x.xxx.xx.xxx:xxxx/get' -d '{"T_xxxx_xxxx[]":{"count":800000, "T_xxxx_xxxx":{"x_xid{}":[xxxx36,xxxx38],"@column":"x_uin,x_send_time,x_xxxx_id,x_xid,x_xx_id,x_xxxxx_id,x_xxxx_result,x_result,x_credit"}}}' > 80w_with_where.log
time curl -X POST -H 'Content-Type:application/json' 'http://x.xxx.xx.xxx:xxxx/get' -d '{"T_xxxx_xxxx[]":{"count":1000000, "T_xxxx_xxxx":{"x_xid{}":[xxxx36,xxxx38],"@column":"x_uin,x_send_time,x_xxxx_id,x_xid,x_xx_id,x_xxxxx_id,x_xxxx_result,x_result,x_credit"}}}' > 100w_with_where.log
```

In other words, against a large MySQL 5.7 table holding about 19 million rows in total, the script measures, via curl over a 10–20 Mb/s network, the total time from issuing each request to receiving the complete response.
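For reference, the request bodies above all follow APIJSON's array format, `"Table[]": {"count": N, "Table": {...}}`. A minimal sketch of assembling such a body in Java (the helper name and plain string concatenation are illustrative, not part of the APIJSON API; sending it is just an HTTP POST to `/get`):

```java
public class ApiJsonRequestSketch {
    // Build an APIJSON array-query body:
    // {"<table>[]": {"count": <count>, "<table>": {"@column": "<columns>"}}}
    static String buildBody(String table, int count, String columns) {
        return "{\"" + table + "[]\":{\"count\":" + count
                + ",\"" + table + "\":{\"@column\":\"" + columns + "\"}}}";
    }

    public static void main(String[] args) {
        String body = buildBody("T_xxxx_xxxx", 100000, "x_uin,x_send_time");
        // Prints:
        // {"T_xxxx_xxxx[]":{"count":100000,"T_xxxx_xxxx":{"@column":"x_uin,x_send_time"}}}
        System.out.println(body);
        // To send it, POST the string to the /get endpoint,
        // e.g. with java.net.http.HttpClient or curl as in the script above.
    }
}
```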
Estimating from the earlier run where APIJSON 4.6.0 queried 120K+ rows, with conditions, from a 23M-row table (log line: `/get >> http请求结束:5624`, i.e. the HTTP request finished in 5624 ms): versions 4.6.0 through 4.7.2 had no significant performance optimization, so the fact that 4.7.0 took only about 2s is most likely because the table was switched and the average row size dropped by about 65%, to 35% of the original. The two estimation methods agree, so we can also use this 35% new-to-old average-row-size ratio to estimate the server-side time with network transfer excluded: for APIJSON 4.6.0 querying 120K+ rows from the original 23M-row table, server time = total time 5.624s − data 72.5 Mb / download speed 20.0 Mbps = 2.00s.
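The subtraction above can be checked directly: at 20.0 Mbps, transferring roughly 72.5 Mb accounts for about 3.6s of the 5.624s total, leaving about 2.0s of server-side time.

```java
public class ServerTimeEstimate {
    public static void main(String[] args) {
        double totalSeconds = 5.624;   // measured end-to-end request time
        double payloadMegabits = 72.5; // approximate response size
        double downloadMbps = 20.0;    // approximate network speed

        double transferSeconds = payloadMegabits / downloadMbps; // 3.625 s
        double serverSeconds = totalSeconds - transferSeconds;   // ~2.00 s

        System.out.printf("transfer=%.3fs server=%.3fs%n",
                transferSeconds, serverSeconds);
        // Prints: transfer=3.625s server=1.999s
    }
}
```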
Related article: "A Tencent business returns 1,000,000 rows in a 6s response — the story behind APIJSON's performance optimization"
After the optimization in commit ed036ef,
comparing 4.8.0 against 4.7.2 with Log.DEBUG = true (logging enabled):
TestRecord[] time dropped to 73% of the original: a 27% improvement, about 1.3× as fast;
Moment[] time dropped to 80% of the original: a 20% improvement, about 1.2× as fast;
the Moments (friend-feed) list time dropped to 81% of the original: a 19% improvement, about 1.2× as fast.
Each array was tested with 100 items per page; with a larger page size, or more data per item, the improvement becomes even more pronounced.
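The time-ratio-to-speedup conversion used above can be checked as arithmetic: reducing elapsed time to a fraction r of the original raises throughput to 1/r, so 73% gives ≈1.37× (the issue rounds this to about 1.3×) and 80% gives 1.25×.

```java
public class SpeedupCheck {
    // Elapsed time reduced to fraction r of the original => speedup of 1/r.
    static double speedup(double timeRatio) {
        return 1.0 / timeRatio;
    }

    public static void main(String[] args) {
        System.out.printf("%.2f%n", speedup(0.73)); // 1.37 (issue rounds to ~1.3x)
        System.out.printf("%.2f%n", speedup(0.80)); // 1.25 (~1.2x)
        System.out.printf("%.2f%n", speedup(0.81)); // 1.23 (~1.2x)
    }
}
```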