
AttributeError: 'DualStreamProcessor' object has no attribute 'flush' #207

Closed

pseudotensor (Collaborator) opened this issue May 30, 2023 · 0 comments
GPT4All does some horrible wrapping of all prints for streaming (via its DualStreamProcessor), so we have to avoid prints once the model is loaded, in case of a race with a print from the main gradio block:

Running on local URL:  http://0.0.0.0:7860
Distance: min: 1.2514867782592773 max: 1.388384461402893 mean: 1.315113604068756 median: 1.310291588306427
Running on public URL: https://24eb03829b71ef3927.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
Started GUI
Killing tunnel 0.0.0.0:7860 <> https://24eb03829b71ef3927.gradio.live
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /home/jon/h2o-llm/generate.py:1504 in <module>                               │
│                                                                              │
│   1501 │                                                                     │
│   1502 │   python generate.py --base_model=h2oai/h2ogpt-oig-oasst1-512-6_9b  │
│   1503 │   """                                                               │
│ ❱ 1504 │   fire.Fire(main)                                                   │
│   1505                                                                       │
│                                                                              │
│ /home/jon/miniconda3/envs/h2ollm/lib/python3.10/site-packages/fire/core.py:1 │
│ 41 in Fire                                                                   │
│                                                                              │
│   138 │   context.update(caller_globals)                                     │
│   139 │   context.update(caller_locals)                                      │
│   140                                                                        │
│ ❱ 141   component_trace = _Fire(component, args, parsed_flag_args, context,  │
│   142                                                                        │
│   143   if component_trace.HasError():                                       │
│   144 │   _DisplayError(component_trace)                                     │
│                                                                              │
│ /home/jon/miniconda3/envs/h2ollm/lib/python3.10/site-packages/fire/core.py:4 │
│ 75 in _Fire                                                                  │
│                                                                              │
│   472 │     is_class = inspect.isclass(component)                            │
│   473 │                                                                      │
│   474 │     try:                                                             │
│ ❱ 475 │   │   component, remaining_args = _CallAndUpdateTrace(               │
│   476 │   │   │   component,                                                 │
│   477 │   │   │   remaining_args,                                            │
│   478 │   │   │   component_trace,                                           │
│                                                                              │
│ /home/jon/miniconda3/envs/h2ollm/lib/python3.10/site-packages/fire/core.py:6 │
│ 91 in _CallAndUpdateTrace                                                    │
│                                                                              │
│   688 │   loop = asyncio.get_event_loop()                                    │
│   689 │   component = loop.run_until_complete(fn(*varargs, **kwargs))        │
│   690   else:                                                                │
│ ❱ 691 │   component = fn(*varargs, **kwargs)                                 │
│   692                                                                        │
│   693   if treatment == 'class':                                             │
│   694 │   action = trace.INSTANTIATED_CLASS                                  │
│                                                                              │
│ /home/jon/h2o-llm/generate.py:430 in main                                    │
│                                                                              │
│    427 │   │   │   caption_loader = False                                    │
│    428 │   │                                                                 │
│    429 │   │   # assume gradio needs everything                              │
│ ❱  430 │   │   go_gradio(**locals())                                         │
│    431                                                                       │
│    432                                                                       │
│    433 def get_non_lora_model(base_model, model_loader, load_half, model_kwa │
│                                                                              │
│ /home/jon/h2o-llm/gradio_runner.py:1429 in go_gradio                         │
│                                                                              │
│   1426 │   demo.launch(share=kwargs['share'], server_name="0.0.0.0", show_er │
│   1427 │   │   │   │   favicon_path=favicon_path, prevent_thread_lock=True,  │
│   1428 │   │   │   │   auth=kwargs['auth'])                                  │
│ ❱ 1429 │   print("Started GUI", flush=True)                                  │
│   1430 │   if kwargs['block_gradio_exit']:                                   │
│   1431 │   │   demo.block_thread()                                           │
│   1432                                                                       │
╰──────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'DualStreamProcessor' object has no attribute 'flush'
Exception ignored in: <gpt4all.pyllmodel.DualStreamProcessor object at 0x7f332460d090>
AttributeError: 'DualStreamProcessor' object has no attribute 'flush'

Process finished with exit code 120
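The failure mode can be reproduced without gpt4all: `print(..., flush=True)` calls `.write()` and then `.flush()` on its target stream, so any object substituted for `sys.stdout` that implements only `write()` raises exactly this `AttributeError`. Below is a minimal sketch, where `StreamWrapper` is a hypothetical stand-in for gpt4all's `DualStreamProcessor` (not its actual code), and `SafeStreamWrapper` shows one defensive fix: delegating unknown attributes such as `flush` to the wrapped stream.

```python
import io


class StreamWrapper:
    """Hypothetical stand-in for a stdout wrapper that only implements write()."""

    def __init__(self, stream):
        self.stream = stream

    def write(self, text):
        self.stream.write(text)


class SafeStreamWrapper(StreamWrapper):
    """Defensive variant: delegate missing attributes (flush, etc.) to the
    wrapped stream, so print(..., flush=True) keeps working."""

    def __getattr__(self, name):
        return getattr(self.stream, name)


# Reproduce the bug: flush=True makes print() call broken.flush(), which
# does not exist on the wrapper.
broken = StreamWrapper(io.StringIO())
try:
    print("Started GUI", file=broken, flush=True)
    failed = False
except AttributeError:
    failed = True
assert failed  # same AttributeError as in the traceback above

# The delegating wrapper handles flush() transparently.
fixed = SafeStreamWrapper(io.StringIO())
print("Started GUI", file=fixed, flush=True)
```

Alternatively, the caller can avoid the race entirely by not printing after the model is loaded, which is the workaround described above.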