Triggered via push: December 22, 2023 16:49
Status: Failure
Total duration: 6h 0m 27s

build.yml

on: push
Matrix: Build Arm Mac
Matrix: Build Intel Mac
Matrix: Build Windows

Annotations

1 error and 13 warnings
Build Intel Mac (macos-latest)
The operation was canceled.
Build Arm Mac (macos-latest)
The following actions uses node12 which is deprecated and will be forced to run on node16: actions/checkout@v2, actions/setup-node@v1. For more info: https://github.blog/changelog/2023-06-13-github-actions-all-actions-will-run-on-node16-instead-of-node12-by-default/
Build Windows (windows-latest)
The following actions uses node12 which is deprecated and will be forced to run on node16: actions/checkout@v2, actions/setup-node@v1. For more info: https://github.blog/changelog/2023-06-13-github-actions-all-actions-will-run-on-node16-instead-of-node12-by-default/
Build Windows (windows-latest): llama.cpp/ggml-backend.c#L1
'initializing': conversion from 'size_t' to 'int', possible loss of data [D:\a\FreedomGPT\FreedomGPT\llama.cpp\build\ggml.vcxproj]
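This MSVC warning (C4267 on x64, where size_t is 64-bit and int is 32-bit) fires when a size_t expression initializes an int. A minimal sketch of the pattern and the usual remedies; the names below are illustrative, not taken from ggml-backend.c:

    #include <stddef.h>

    void count_example(size_t count) {
        int n = count;               /* warns: size_t (64-bit) may not fit in int (32-bit) */

        size_t n_wide = count;       /* fix 1: keep the wider type */
        int    n_cast = (int) count; /* fix 2: explicit cast when the range is known to be safe */

        (void) n; (void) n_wide; (void) n_cast;
    }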
Build Windows (windows-latest): llama.cpp/ggml-quants.c#L1
'=': conversion from 'float' to 'int8_t', possible loss of data [D:\a\FreedomGPT\FreedomGPT\llama.cpp\build\ggml.vcxproj]
Build Windows (windows-latest): llama.cpp/ggml-quants.c#L1
'=': conversion from 'float' to 'int8_t', possible loss of data [D:\a\FreedomGPT\FreedomGPT\llama.cpp\build\ggml.vcxproj]
Build Windows (windows-latest): llama.cpp/ggml-quants.c#L1
'=': conversion from 'float' to 'int8_t', possible loss of data [D:\a\FreedomGPT\FreedomGPT\llama.cpp\build\ggml.vcxproj]
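The three annotations above flag implicit float-to-int8_t assignments, a pattern common in quantization loops. A hedged sketch of how such a warning arises and the usual explicit form (round, clamp, cast); quantize_row_sketch and its parameters are hypothetical, not the actual ggml-quants.c code:

    #include <stdint.h>
    #include <math.h>

    void quantize_row_sketch(const float *x, int8_t *q, int n, float scale) {
        for (int i = 0; i < n; ++i) {
            q[i] = x[i] * scale;            /* warns: implicit float -> int8_t conversion */
        }
        for (int i = 0; i < n; ++i) {
            float v = roundf(x[i] * scale); /* explicit form: round ... */
            if (v >  127.0f) v =  127.0f;   /* ... clamp to the int8_t range ... */
            if (v < -128.0f) v = -128.0f;
            q[i] = (int8_t) v;              /* ... then cast, which silences the warning */
        }
    }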
Build Windows (windows-latest): llama.cpp/ggml-backend.c#L1
'get_allocr_backend': not all control paths return a value [D:\a\FreedomGPT\FreedomGPT\llama.cpp\build\ggml.vcxproj]
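MSVC raises this warning (C4715) when a function can reach its closing brace without returning a value, typically a lookup that only returns from inside a loop. The body below is illustrative only, not the real get_allocr_backend:

    #include <stddef.h>

    struct backend;
    struct entry { int id; struct backend *be; };

    static struct backend * find_backend(const struct entry *items, int n, int id) {
        for (int i = 0; i < n; ++i) {
            if (items[i].id == id) {
                return items[i].be;   /* only return path in the warned-about shape */
            }
        }
        return NULL;                  /* fix: cover the fall-through path explicitly (or abort) */
    }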
Build Windows (windows-latest): llama.cpp/llama.cpp#L1
'initializing': truncation from 'double' to 'float' [D:\a\FreedomGPT\FreedomGPT\llama.cpp\build\llama.vcxproj]
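In C and C++ an unsuffixed literal such as 1e-5 is a double, so initializing a float from it triggers this truncation warning; the llava.cpp annotation further down is the same class. A small sketch with hypothetical values:

    float eps_warns = 1e-5;          /* warns: double literal truncated to float */
    float eps_fixed = 1e-5f;         /* fix: float literal via the f suffix */
    float eps_cast  = (float) 1e-5;  /* or an explicit cast */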
Build Windows (windows-latest): llama.cpp/llama.cpp#L1
character represented by universal-character-name '\u010A' cannot be represented in the current code page (1252) [D:\a\FreedomGPT\FreedomGPT\llama.cpp\build\llama.vcxproj]
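MSVC's execution character set on this runner is Windows-1252 (as the message states), which cannot represent U+010A, so a narrow string literal containing it draws this warning (C4566). An illustrative sketch; the actual literal in llama.cpp may differ:

    const char *s_warns = "\u010A";    /* warns when the execution charset is code page 1252 */
    const char *s_utf8  = u8"\u010A";  /* one remedy: a UTF-8 literal (C11), or building with /utf-8 */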
Build Windows (windows-latest): llama.cpp/llama.cpp#L1
unary minus operator applied to unsigned type, result still unsigned [D:\a\FreedomGPT\FreedomGPT\llama.cpp\build\llama.vcxproj]
Build Windows (windows-latest): llama.cpp/llama.cpp#L1
unary minus operator applied to unsigned type, result still unsigned [D:\a\FreedomGPT\FreedomGPT\llama.cpp\build\llama.vcxproj]
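Negating an unsigned operand yields an unsigned result, which MSVC flags (C4146) even when the wraparound is intentional, as in bit-manipulation tricks. A hedged sketch; whether llama.cpp wants the unsigned or the signed behaviour is not visible from the annotation:

    #include <stdint.h>

    uint32_t neg_mask(uint32_t x) {
        return -x;                 /* warns: result is still unsigned (often intended) */
    }

    int32_t negate_signed(uint32_t x) {
        return -(int32_t) x;       /* cast first if a signed result is wanted (value must fit) */
    }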
Build Windows (windows-latest): llama.cpp/examples/llava/llava.cpp#L1
'initializing': conversion from 'double' to 'float', possible loss of data [D:\a\FreedomGPT\FreedomGPT\llama.cpp\build\examples\llava\llava.vcxproj]
Build Intel Mac (macos-latest)
The following actions uses node12 which is deprecated and will be forced to run on node16: actions/checkout@v2, actions/setup-node@v1. For more info: https://github.blog/changelog/2023-06-13-github-actions-all-actions-will-run-on-node16-instead-of-node12-by-default/