Version
23.07

Which installation method(s) does this occur on?
Source

Describe the bug
We originally pinned camouflage to 0.9 due to testinggospels/camouflage#203, but those fixes were rolled into 0.13.0, so the pin can now be lifted.
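The pin itself is just an npm version constraint. As a hedged illustration (the exact CI script holding the pin is not shown in this issue), lifting it amounts to changing the installed version:

```bash
# Hypothetical commands illustrating the pin described above
npm install -g camouflage-server@0.9   # the long-standing pin
npm install -g camouflage-server@0.15  # the release line ultimately adopted in PR #1195
```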
Minimum reproducible example

```bash
npm install -g camouflage-server
pytest -x --run_slow
```
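Before running the tests it can help to confirm that the camouflage-backed Triton mock is actually serving. A minimal sketch, assuming the mock exposes Triton's standard KServe readiness route on the default HTTP port 8000 (neither detail is stated in this issue):

```python
# Hypothetical readiness probe for the mock Triton server.
import requests

resp = requests.get("http://localhost:8000/v2/health/ready", timeout=5)
print(resp.status_code)  # 200 means the mock is up and answering HTTP requests
```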
Relevant log output

```
tests/test_abp.py .F

====================================== FAILURES ======================================
_______________________________ test_abp_cpp[use_cpp] ________________________________

config = Config(debug=False, log_level=30, log_config_file=None, plugins=None, mode=<PipelineModes.FIL: 'FIL'>, feature_length=...m_clock', 'nvidia_smi_log.gpu.max_clocks.video_clock', 'nvidia_smi_log.gpu.max_customer_boost_clocks.graphics_clock']))
tmp_path = PosixPath('/tmp/pytest-of-dagardner/pytest-15/test_abp_cpp_use_cpp_0')

    @pytest.mark.slow
    @pytest.mark.use_cpp
    @pytest.mark.usefixtures("launch_mock_triton")
    def test_abp_cpp(config, tmp_path):
        config.mode = PipelineModes.FIL
        config.class_labels = ["mining"]
        config.model_max_batch_size = MODEL_MAX_BATCH_SIZE
        config.pipeline_batch_size = 1024
        config.feature_length = FEATURE_LENGTH
        config.edge_buffer_size = 128
        config.num_threads = 1
        config.fil = ConfigFIL()
        config.fil.feature_columns = load_labels_file(os.path.join(TEST_DIRS.data_dir, 'columns_fil.txt'))

        val_file_name = os.path.join(TEST_DIRS.validation_data_dir, 'abp-validation-data.jsonlines')
        out_file = os.path.join(tmp_path, 'results.csv')
        results_file_name = os.path.join(tmp_path, 'results.json')

        pipe = LinearPipeline(config)
        pipe.set_source(FileSourceStage(config, filename=val_file_name, iterative=False))
        pipe.add_stage(DeserializeStage(config))
        pipe.add_stage(PreprocessFILStage(config))

        # We are feeding TritonInferenceStage the port to the grpc server because that is what the validation tests do
        # but the code under-the-hood replaces this with the port number of the http server
        pipe.add_stage(
            TritonInferenceStage(config, model_name='abp-nvsmi-xgb', server_url='localhost:8001',
                                 force_convert_inputs=True))
        pipe.add_stage(MonitorStage(config, description="Inference Rate", smoothing=0.001, unit="inf"))
        pipe.add_stage(AddClassificationsStage(config))
        pipe.add_stage(AddScoresStage(config, prefix="score_"))
        pipe.add_stage(
            ValidationStage(config, val_file_name=val_file_name, results_file_name=results_file_name, rel_tol=0.05))
        pipe.add_stage(SerializeStage(config))
        pipe.add_stage(WriteToFileStage(config, filename=out_file, overwrite=False))
>       pipe.run()

tests/test_abp.py:155:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

morpheus/pipeline/pipeline.py:598: in run
    asyncio.run(self.run_async())
../conda/envs/m2/lib/python3.10/asyncio/runners.py:44: in run
    return loop.run_until_complete(main)
../conda/envs/m2/lib/python3.10/asyncio/base_events.py:649: in run_until_complete
    return future.result()
morpheus/pipeline/pipeline.py:576: in run_async
    await self.join()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <morpheus.pipeline.linear_pipeline.LinearPipeline object at 0x7f8f9a7cb610>

    async def join(self):
        """
        Suspend execution all currently running stages and the MRC pipeline.

        Typically called after `stop`.
        """
        try:
>           await self._mrc_executor.join_async()
E           RuntimeError: Triton Error while executing 'client->Infer(&results, m_options, inputs, outputs)'. Error: failed to parse the request JSON buffer: Invalid value. at 0
E           ../morpheus/_lib/src/stages/triton_inference.cpp(306)

morpheus/pipeline/pipeline.py:327: RuntimeError
```
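The `failed to parse the request JSON buffer: Invalid value. at 0` portion of the error suggests the client received a body that is empty or malformed from the very first byte, which points at the mock server's response rather than the pipeline configuration. A minimal Python analogy of that failure mode (the real parse happens in the C++ code at triton_inference.cpp line 306):

```python
# Illustration only: strict JSON parsers reject an empty or truncated
# buffer at offset 0, analogous to "Invalid value. at 0" above.
import json

for body in ("", "not-json", '{"outputs": ['):
    try:
        json.loads(body)
    except json.JSONDecodeError as exc:
        print(f"{body!r}: {exc}")
```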
Full env printout
No response

Other/Misc.
No response
Adopt updated camouflage-server & fix test_dfp_mlflow_model_writer (#1195)
Commit: aab0d96

* Adopt camouflage-server 0.15; previously we've been locked on v0.9 due to outstanding bugs introduced in versions 0.10 - 0.14.1:
  - testinggospels/camouflage#203
  - testinggospels/camouflage#223
  - testinggospels/camouflage#227
  - testinggospels/camouflage#229
* Includes unrelated fix to running CI locally
* Restrict mlflow to versions prior to 2.7 (closes #967)

Fixes #1192

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/nv-morpheus/Morpheus/blob/main/docs/source/developer_guide/contributing.md).
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.

Authors:
- David Gardner (https://github.com/dagardner-nv)

Approvers:
- Christopher Harris (https://github.com/cwharris)

URL: #1195
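The mlflow restriction mentioned in the PR is an ordinary upper-bound version constraint; where exactly Morpheus declares it is not shown in this thread, but it is equivalent to installing with:

```bash
pip install "mlflow<2.7"
```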