update #5

Merged on May 25, 2021 (117 commits)

Commits

db5eac2
add timeout for queue.get (#32747)
chenwhql May 7, 2021
7468253
model_benchmark (#32600)
tianshuo78520a May 7, 2021
b2160e7
add other 15 activation ops (#32622)
ZzSean May 7, 2021
816afb9
fix distro (#32771)
tianshuo78520a May 7, 2021
8ce6b39
Fix compile error on jetson platform (#32748)
LielinJiang May 7, 2021
3753416
fix distro in manylinux (#32784)
tianshuo78520a May 7, 2021
a77ade0
remove packages in __all__ (#32759)
zhiboniu May 7, 2021
c1c18b0
Add raw program meta optimizer (#32597)
May 8, 2021
62d848d
[Paddle-TRT]fix trt-converter-fc_op (#32671)
Wangzheee May 8, 2021
e8e4a9c
bugfix: parallel_executor for xpu should use BindThreadedSSAGraphExec…
houj04 May 8, 2021
8a42b1f
Remove np Deprecation Warning since `np.bool` is alias of `bool` (#32…
zhhsplendid May 8, 2021
4628b6f
[NPU] refine update_loss_scaling npu kernel (#32580)
pangyoki May 8, 2021
c8affff
add c_identity op npu (#32787)
Baibaifan May 8, 2021
beab956
【heterps】support cuda11 for heterps; add profiler in oneps (#32640)
danleifeng May 8, 2021
23ab01e
Dynamic amp support sync_batch_norm op (#32770)
sljlp May 10, 2021
92adece
make check_op_desc.py support python3 (#32807)
tianshuo78520a May 10, 2021
fd9a236
fix paddle_build bug (#32813)
pangyoki May 10, 2021
5fc734c
Revert "make check_op_desc.py support python3 (#32807)" (#32818)
tianshuo78520a May 10, 2021
e357cfd
update unittest for uint8 problem (#32790)
lyuwenyu May 10, 2021
5aa8faa
fix npu compile error (#32820)
zhiqiu May 10, 2021
fbbc339
[pslib] pslib with cmake (#32800)
Thunderbrook May 10, 2021
3419de5
Support different data type between input and output (#32823)
ZzSean May 10, 2021
1eb59ef
modify en_doco of spectral norm test=document_fix (#32812)
wangna11BD May 10, 2021
400eb9d
fix ce bug in label value, test=develop
huangjun12 Apr 26, 2021
e2c293f
fix ci bug
huangjun12 Apr 27, 2021
6cd96c1
fix ci coverage
huangjun12 Apr 27, 2021
ef7e5fc
imporve efficiency
huangjun12 Apr 27, 2021
e1ea895
fix ci coverage bug
huangjun12 Apr 27, 2021
9495211
add weigth data to unit test
huangjun12 Apr 27, 2021
9cdf6bd
add ignore_index for test case
huangjun12 Apr 28, 2021
a9e5305
fix ci bug
huangjun12 Apr 29, 2021
298f210
Support control flow in DataParallel (#32826)
ForFishes May 11, 2021
93fce18
fix rccl bug (#32808)
xymyeah May 11, 2021
84eca16
fix cmake expressions error, test=develop (#32815)
Avin0323 May 11, 2021
1ef2327
Fix retry error in download when exception occurs (#32816)
lyuwenyu May 11, 2021
2611ed2
Polish unittest test_decoupled_py_reader (#32832)
chenwhql May 12, 2021
067f558
add varbasecopy func to fix the ParamBase type bug in layers.to API (…
MingMingShangTian May 12, 2021
cefc063
change check_op_desc.py to py3 (#32825)
tianshuo78520a May 12, 2021
e1a4c83
Simple authors change (#32763)
andreazanetti May 12, 2021
890f626
Optimize/fleet save (#32817)
seiriosPlus May 12, 2021
6b3bb79
[NPU] Support npu pinned allocator and manage Tensor on NPUPinnedPlac…
liym27 May 12, 2021
f1d6302
fix dataloader exit hang when join re-enter (#32827)
heavengate May 12, 2021
85512d6
[NPU] Support async copy for TensorFromVector with event (#32563)
liym27 May 12, 2021
e676360
Fix comments in framework (#32861)
lyuwenyu May 12, 2021
eff84a5
Fix error in device `__all__` (#32860)
lyuwenyu May 12, 2021
24ffcd0
fix the error of fake_quant_dequant op name (#32866)
juncaipeng May 12, 2021
eeca963
Polish Windows CI and open the normal GPU unittest on CI (#32794)
zhwesky2010 May 12, 2021
b60ab6b
change unique op VisitaDataType from small to tiny (#32872)
MingMingShangTian May 13, 2021
3e47eee
fix stack grad gpu (#32781) (#32877)
bjjwwang May 13, 2021
c3ae0d4
solved some npu bugs (#32793)
Baibaifan May 13, 2021
48fc16f
add varbase_copy support CUDAPinnedPlace (#32883)
MingMingShangTian May 13, 2021
dace3fd
[NPU] support global accumulator for adam (#32780)
zhiqiu May 13, 2021
a8625aa
fix wait server ready (#32889)
wangxicoding May 13, 2021
096b2f5
Polish code for _getitem_impl_ (#32868)
liym27 May 14, 2021
b035c8b
Optimization the broadcast performance of elementwise_add (#32512)
JamesLim-sy May 14, 2021
e48091d
[Dy2Static]Add param_guard in ParameterList to support @to_static
Aurelius84 May 14, 2021
2d9d8f5
solove_matmulv2_npu_bugs (#32896)
Baibaifan May 14, 2021
6473e27
fix launch port already in use (#32892)
wangxicoding May 14, 2021
ed9e772
test=document_fix (#32906)
tianshuo78520a May 14, 2021
42aad30
use the `required` instruction to determine if the environment fits t…
wadefelix May 14, 2021
479689f
[oneDNN] Refactoring of softmax grad onednn kernel to match common AP…
jczaja May 14, 2021
c4787d7
Fix four error messages (#32899)
Kqnonrime May 14, 2021
15b05c7
[Custom Op]Remove PADDLE_WITH_MKLDNN in custom_op (#32903)
Aurelius84 May 14, 2021
62f6550
Doc of paddle.save/load (#32900)
hbwx24 May 14, 2021
e89fb25
Set the default value of protocol to 4. (#32904)
hbwx24 May 14, 2021
a3debea
change ut file map (#32841)
lelelelelez May 14, 2021
585564d
solove_device_guard_bugs (#32915)
Baibaifan May 17, 2021
5f1c07d
BugFix with ParseInputDataType from LodTensorArray (#32918)
Aurelius84 May 17, 2021
906db71
remove redundant graph files (#32924)
seemingwang May 17, 2021
c809530
[HybridParallel]Fix precision problem of model parallel (#32897)
ForFishes May 17, 2021
c72ed82
fix the paddle compare op for the broadcast when the element equal (#…
wawltor May 18, 2021
59997d5
notest;test=zcltest (#32821)
lelelelelez May 18, 2021
29bbeb0
unit double (#32902)
Thunderbrook May 18, 2021
59b74ee
update kunlun bkcl to support multi-machine (#32577)
QingshuChen May 18, 2021
b8d493d
[Dy2Static] Refactor param_guard logic of @to_static (#32867)
Aurelius84 May 18, 2021
b5882c6
[UnitTest]Enhance grep syntax to avoid random failed of test_dist_mni…
Aurelius84 May 18, 2021
bcd40f2
relu supports bfloat16 data type (#32542)
Avin0323 May 18, 2021
53580bb
add unit8 for concat (#32850)
vslyu May 18, 2021
5d62748
Update paths to Quant models (#32870)
wozna May 18, 2021
c66586b
[NPU] fix accuracy npu op bug and change top_k's output to int64 (#32…
pangyoki May 18, 2021
d7d7fae
Fix Link unittest exe random fail (#32891)
zhwesky2010 May 19, 2021
af89a94
add enforce check for set_value (#32972)
chenwhql May 19, 2021
56008aa
[oneDNN] Pool softmax and LRN access to cache optimized (#32922)
jczaja May 19, 2021
c285261
[Dy2Stat]BugFix StaticAanlysis with gast.Subscript (#32969)
Aurelius84 May 19, 2021
f0b2f59
remove ut from parallel_ut list (#32788)
XieYunshen May 19, 2021
67c2700
Optimize 102Flowers dataset reading speed (#31408)
GT-ZhangAcer May 19, 2021
aa4a56f
[Rocm] fix test of random_crop_op & logsumexp (#32824)
Ray2020BD May 19, 2021
1e1600e
fix the jetson allocator strategy, test=develop (#32932)
Shixiaowei02 May 19, 2021
7896b51
CI skip inference test if only python files modified (#32962)
Avin0323 May 19, 2021
6f8de31
fix test_paddle_save_load and test_paddle_save_load_binary (#32949)
hbwx24 May 19, 2021
1494952
Binary functor envoking of elementwise broadcast (#32928)
JamesLim-sy May 20, 2021
7e27b5a
handle remove files in pr (#32940)
lelelelelez May 20, 2021
8854786
remove unused shell (#32954)
seemingwang May 20, 2021
738bf20
Add complex template type (#32857)
MingMingShangTian May 20, 2021
848cabf
Polish code for setitem and getitem (#32911)
liym27 May 20, 2021
e409c7c
[Dy2Stat]Support convert sublayers in Sequential Container (#32978)
Aurelius84 May 20, 2021
be8e94a
revert_matmulv2_npu (#33014)
Baibaifan May 20, 2021
a96e8bc
fix gather op and add logsumexp op on kunlun (#32931)
tangzhiyi11 May 20, 2021
70dc5f4
[NPU] cast indices and label if their type is not consistent in accur…
pangyoki May 21, 2021
44668a7
update conda build script for cuda11 (#29594)
XieYunshen May 21, 2021
a85eddd
paddle.to_tensor supports LoDTensor (#33027)
zhiqiu May 21, 2021
0e5d832
fix model_benchmark ci (#33035)
tianshuo78520a May 21, 2021
7be6191
optimize softmax with cross entropy hard label (#32290)
xingfeng01 May 21, 2021
79ed717
add method for enhance pass,test=develop (#33004)
winter-wang May 21, 2021
79d918d
replace complex64/128 with complex template in cast Op (#33019)
MingMingShangTian May 21, 2021
e2a3a6f
Added oneDNN matmul grad BF16/FP32 kernel (#32968)
jakpiase May 22, 2021
a6dc68b
refine conv2d doc (#33045)
jerrywgz May 22, 2021
d6aea4a
Support OutType tmeplate argument in elementwise_broadcast branch (#…
JamesLim-sy May 24, 2021
60ac160
fix potential overflow problem & node add & node remove & node clear …
seemingwang May 24, 2021
d0d5586
open launch ps test=develop (#33044)
gongweibao May 24, 2021
b8e4ec7
[oneDNN] bump up oneDNN to 2.2.2 (#32685)
jczaja May 24, 2021
99a11e3
enhance unittest for yolo_box (#33070)
jerrywgz May 24, 2021
6ad5ece
Revert "fix model_benchmark ci (#33035)" (#33080)
tianshuo78520a May 24, 2021
4920c47
[HybridParallel]Fix pipeline in dygraph (#33007)
ForFishes May 24, 2021
88b43b5
Add a new high performance framework for reduce ops (#32697)
AnnaTrainingG May 25, 2021
86ea8dc
Added scale op FP32/BF16 FWD/BWD kernels (#32975)
jakpiase May 25, 2021
5fa44c3
modify Ops to complex template (#33041)
MingMingShangTian May 25, 2021

Files changed

AUTHORS.md (6 additions, 0 deletions)
@@ -1,13 +1,17 @@
| Github account | name |
|---|---|
| abhinavarora | Abhinav Arora |
| andreazanetti | Andrea Zanetti |
| arlesniak | Artur Lesniak |
| arogowie-intel | Adam Osewski |
| backyes | Yan-Fei Wang |
| baiyfbupt | Yi-Fan Bai |
| beckett1124 | Bin Qi |
| ChengduoZH | Cheng-Duo Zhao|
| chengxiaohua1105 | Xiao-Hua Cheng |
| cxwangyi, yiwangbaidu, wangkuiyi | Yi Wang |
| cxysteven | Xing-Yi Cheng |
| ddokupil | Dariusz Dokupil |
| dzhwinter | Zhi-Hong Dong |
| dragonwarrior | Long Wang |
| dyning | Yuning Du |
@@ -21,6 +25,7 @@
| hedaoyuan | Dao-Yuan He |
| helinwang | He-Lin Wang |
| jacquesqiao | Long-Fei Qiao |
| jakpiase | Jakub Piasecki |
| [jczaja](https://raw.githubusercontent.com/jczaja/Paddle/paddle-poland-team/doc/images/paddle_poland_team.jpg) | Jacek Czaja |
| JiayiFeng | Jia-Yi Feng |
| kbinias | Krzysztof Binias |
@@ -42,6 +47,7 @@
| pakchoi | Chuan-Jiang Song |
| panyx0718 | Xin Pan |
| pengli09 | Peng Li |
| pmajchrzak |Piotr Majchrzak |
| pkuyym | Ya-Ming Yang |
| pzelazko-intel | Pawel Zelazko |
| [pawelpiotrowicz](https://raw.githubusercontent.com/jczaja/Paddle/paddle-poland-team/doc/images/paddle_poland_team.jpg) | Pawel Piotrowicz |

README.md (1 addition, 1 deletion)
@@ -86,7 +86,7 @@ We provide [English](https://www.paddlepaddle.org.cn/documentation/docs/en/guide
## Communication

- [Github Issues](https://github.com/PaddlePaddle/Paddle/issues): bug reports, feature requests, install issues, usage issues, etc.
- QQ discussion group: 778260830 (PaddlePaddle).
- QQ discussion group: 793866180 (PaddlePaddle).
- [Forums](https://ai.baidu.com/forum/topic/list/168?pageNo=1): discuss implementations, research, etc.

## Copyright and License

README_cn.md (1 addition, 1 deletion)
@@ -83,7 +83,7 @@ PaddlePaddle用户可领取**免费Tesla V100在线算力资源**,训练模型
## 交流与反馈

- 欢迎您通过[Github Issues](https://github.com/PaddlePaddle/Paddle/issues)来提交问题、报告与建议
- QQ群: 778260830 (PaddlePaddle)
- QQ群: 793866180 (PaddlePaddle)
- [论坛](https://ai.baidu.com/forum/topic/list/168): 欢迎大家在PaddlePaddle论坛分享在使用PaddlePaddle中遇到的问题和经验, 营造良好的论坛氛围

## 版权和许可证

cmake/external/mkldnn.cmake (2 additions, 1 deletion)
@@ -20,7 +20,8 @@ SET(MKLDNN_SOURCE_DIR ${THIRD_PARTY_PATH}/mkldnn/src/extern_mkldnn)
SET(MKLDNN_INSTALL_DIR ${THIRD_PARTY_PATH}/install/mkldnn)
SET(MKLDNN_INC_DIR "${MKLDNN_INSTALL_DIR}/include" CACHE PATH "mkldnn include directory." FORCE)
SET(MKLDNN_REPOSITORY ${GIT_URL}/oneapi-src/oneDNN.git)
SET(MKLDNN_TAG f58682cd8bd0615f41d879f8afc8f1511ab42d24)
SET(MKLDNN_TAG f3999b71d8e4415c1985a0dfb812a3ed77ee21fa)


# Introduce variables:
# * CMAKE_INSTALL_LIBDIR

cmake/external/warpctc.cmake (21 additions, 6 deletions)
@@ -78,6 +78,21 @@ if(WITH_ASCEND OR WITH_ASCEND_CL)
-DCMAKE_INSTALL_PREFIX:PATH=${WARPCTC_INSTALL_DIR}
)
else()
if(WIN32)
set(WARPCTC_C_FLAGS $<FILTER:${CMAKE_C_FLAGS},EXCLUDE,/Zc:inline>)
set(WARPCTC_C_FLAGS_DEBUG $<FILTER:${CMAKE_C_FLAGS_DEBUG},EXCLUDE,/Zc:inline>)
set(WARPCTC_C_FLAGS_RELEASE $<FILTER:${CMAKE_C_FLAGS_RELEASE},EXCLUDE,/Zc:inline>)
set(WARPCTC_CXX_FLAGS $<FILTER:${CMAKE_CXX_FLAGS},EXCLUDE,/Zc:inline>)
set(WARPCTC_CXX_FLAGS_RELEASE $<FILTER:${CMAKE_CXX_FLAGS_RELEASE},EXCLUDE,/Zc:inline>)
set(WARPCTC_CXX_FLAGS_DEBUG $<FILTER:${CMAKE_CXX_FLAGS_DEBUG},EXCLUDE,/Zc:inline>)
else()
set(WARPCTC_C_FLAGS ${CMAKE_C_FLAGS})
set(WARPCTC_C_FLAGS_DEBUG ${CMAKE_C_FLAGS_DEBUG})
set(WARPCTC_C_FLAGS_RELEASE ${CMAKE_C_FLAGS_RELEASE})
set(WARPCTC_CXX_FLAGS ${CMAKE_CXX_FLAGS})
set(WARPCTC_CXX_FLAGS_RELEASE ${CMAKE_CXX_FLAGS_RELEASE})
set(WARPCTC_CXX_FLAGS_DEBUG ${CMAKE_CXX_FLAGS_DEBUG})
endif()
ExternalProject_Add(
extern_warpctc
${EXTERNAL_PROJECT_LOG_ARGS}
@@ -90,12 +105,12 @@
BUILD_ALWAYS 1
CMAKE_ARGS -DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DCMAKE_C_FLAGS=$<FILTER:${CMAKE_C_FLAGS},EXCLUDE,/Zc:inline>
-DCMAKE_C_FLAGS_DEBUG=$<FILTER:${CMAKE_C_FLAGS_DEBUG},EXCLUDE,/Zc:inline>
-DCMAKE_C_FLAGS_RELEASE=$<FILTER:${CMAKE_C_FLAGS_RELEASE},EXCLUDE,/Zc:inline>
-DCMAKE_CXX_FLAGS=$<FILTER:${CMAKE_CXX_FLAGS},EXCLUDE,/Zc:inline>
-DCMAKE_CXX_FLAGS_RELEASE=$<FILTER:${CMAKE_CXX_FLAGS_RELEASE},EXCLUDE,/Zc:inline>
-DCMAKE_CXX_FLAGS_DEBUG=$<FILTER:${CMAKE_CXX_FLAGS_DEBUG},EXCLUDE,/Zc:inline>
-DCMAKE_C_FLAGS=${WARPCTC_C_FLAGS}
-DCMAKE_C_FLAGS_DEBUG=${WARPCTC_C_FLAGS_DEBUG}
-DCMAKE_C_FLAGS_RELEASE=${WARPCTC_C_FLAGS_RELEASE}
-DCMAKE_CXX_FLAGS=${WARPCTC_CXX_FLAGS}
-DCMAKE_CXX_FLAGS_RELEASE=${WARPCTC_CXX_FLAGS_RELEASE}
-DCMAKE_CXX_FLAGS_DEBUG=${WARPCTC_CXX_FLAGS_DEBUG}
-DCMAKE_INSTALL_PREFIX=${WARPCTC_INSTALL_DIR}
-DWITH_GPU=${WITH_GPU}
-DWITH_ROCM=${WITH_ROCM}
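
The hunk above moves the /Zc:inline filtering out of the CMAKE_ARGS list and into WARPCTC_* helper variables, applying the $<FILTER:...> generator expression only on Windows; on other platforms the compiler flags are forwarded to warpctc unchanged. A minimal standalone sketch of the same filtering technique (hypothetical file and variable names, not part of this PR):

    # CMakeLists.txt: demo of $<FILTER:list,EXCLUDE,regex> (CMake 3.15+).
    cmake_minimum_required(VERSION 3.15)
    project(filter_demo NONE)

    # A ;-list of MSVC-style flags; /Zc:inline is the one to drop.
    set(DEMO_FLAGS "/W3;/Zc:inline;/O2")

    # Generator expressions are evaluated at generate time, so write the
    # result to a file to inspect it; the file contains "/W3;/O2".
    file(GENERATE OUTPUT ${CMAKE_BINARY_DIR}/filtered_flags.txt
         CONTENT "$<FILTER:${DEMO_FLAGS},EXCLUDE,/Zc:inline>")

Pre-computing the filtered value into a variable, as the PR does, keeps the ExternalProject_Add arguments readable and makes the non-Windows path a plain pass-through.
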

cmake/external/xpu.cmake (1 addition, 1 deletion)
@@ -13,7 +13,7 @@ if(NOT XPU_SDK_ROOT)
elseif(WITH_SUNWAY)
SET(XPU_URL "https://baidu-kunlun-public.su.bcebos.com/paddle_depence/sunway/xpu_2021_01_13.tar.gz" CACHE STRING "" FORCE)
else()
SET(XPU_URL "https://baidu-kunlun-public.su.bcebos.com/paddle_depence/xpu_2021_04_09.tar.gz" CACHE STRING "" FORCE)
SET(XPU_URL "https://baidu-kunlun-public.su.bcebos.com/paddle_depence/xpu_2021_05_19.tar.gz" CACHE STRING "" FORCE)
endif()

SET(XPU_SOURCE_DIR "${THIRD_PARTY_PATH}/xpu")

cmake/generic.cmake (3 additions, 5 deletions)
@@ -391,7 +391,7 @@ function(cc_binary TARGET_NAME)
endfunction(cc_binary)

function(cc_test_build TARGET_NAME)
if(WITH_TESTING)
if(WITH_TESTING AND NOT "$ENV{CI_SKIP_CPP_TEST}" STREQUAL "ON")
set(oneValueArgs "")
set(multiValueArgs SRCS DEPS)
cmake_parse_arguments(cc_test "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
@@ -409,14 +409,12 @@
if(WITH_ROCM)
target_link_libraries(${TARGET_NAME} ${ROCM_HIPRTC_LIB})
endif()
check_coverage_opt(${TARGET_NAME} ${cc_test_SRCS})
endif()

check_coverage_opt(${TARGET_NAME} ${cc_test_SRCS})

endfunction()

function(cc_test_run TARGET_NAME)
if(WITH_TESTING)
if(WITH_TESTING AND NOT "$ENV{CI_SKIP_CPP_TEST}" STREQUAL "ON")
set(oneValueArgs "")
set(multiValueArgs COMMAND ARGS)
cmake_parse_arguments(cc_test "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
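
With this change, cc_test_build and cc_test_run register C++ unit tests only when testing is enabled and the environment variable CI_SKIP_CPP_TEST is not set to ON at configure time, which lets a CI job skip C++ test targets, for example when only Python files changed. A minimal standalone sketch of the same guard (hypothetical project and target names, not Paddle code):

    # CMakeLists.txt: skip registering C++ tests when CI_SKIP_CPP_TEST=ON.
    cmake_minimum_required(VERSION 3.15)
    project(skip_cpp_test_demo CXX)

    option(WITH_TESTING "Build and register C++ unit tests" ON)
    enable_testing()

    if(WITH_TESTING AND NOT "$ENV{CI_SKIP_CPP_TEST}" STREQUAL "ON")
      # demo_test.cc is assumed to exist next to this CMakeLists.txt.
      add_executable(demo_test demo_test.cc)
      add_test(NAME demo_test COMMAND demo_test)
    endif()

Configuring with CI_SKIP_CPP_TEST=ON in the environment produces no test targets, while a normal configure still builds and registers demo_test.
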

cmake/init.cmake (10 additions, 0 deletions)
@@ -18,6 +18,16 @@ if(NOT WIN32)
set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "-O2 -g -DNDEBUG")
set(CMAKE_CXX_FLAGS_MINSIZEREL "-Os -DNDEBUG")
else()
set(CMAKE_C_FLAGS_DEBUG "/Zi /DEBUG")
set(CMAKE_C_FLAGS_RELEASE "/O2 /DNDEBUG")
set(CMAKE_C_FLAGS_RELWITHDEBINFO "/O2 /DNDEBUG")
set(CMAKE_C_FLAGS_MINSIZEREL "/Os /DNDEBUG")

set(CMAKE_CXX_FLAGS_DEBUG "/Zi /DEBUG")
set(CMAKE_CXX_FLAGS_RELEASE "/O2 /DNDEBUG")
set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "/O2 /DNDEBUG")
set(CMAKE_CXX_FLAGS_MINSIZEREL "/Os /DNDEBUG")

# It can specify CUDA compile flag manualy,
# its use is to remvoe /Zi to reduce GPU static library size. But it's dangerous
# because CUDA will update by nvidia, then error will occur.

cmake/operators.cmake (3 additions, 0 deletions)
@@ -44,6 +44,9 @@ function(op_library TARGET)
if (EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/${TARGET}.cu)
list(APPEND cu_srcs ${TARGET}.cu)
endif()
if (WITH_NV_JETSON)
list(REMOVE_ITEM cu_srcs "decode_jpeg_op.cu")
endif()
if (EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/${TARGET}.part.cu)
set(PART_CUDA_KERNEL_FILES ${CMAKE_CURRENT_SOURCE_DIR}/${TARGET}.part.cu
${PART_CUDA_KERNEL_FILES} PARENT_SCOPE)

cmake/third_party.cmake (8 additions, 0 deletions)
@@ -261,6 +261,14 @@ if(WITH_PSLIB)
if(WITH_PSLIB_BRPC)
include(external/pslib_brpc) # download, build, install pslib_brpc
list(APPEND third_party_deps extern_pslib_brpc)
else()
include(external/snappy)
list(APPEND third_party_deps extern_snappy)

include(external/leveldb)
list(APPEND third_party_deps extern_leveldb)
include(external/brpc)
list(APPEND third_party_deps extern_brpc)
endif()
endif(WITH_PSLIB)
