No matter how much you increase the parameter, it still generates an error; I tried 10000 and it doesn't work. It looks like the developer is no longer active here. I'm an amateur, but it looks like someone is working on the token management:
Processing Non-Immigrant Visa Classifications Chart-1.pdf ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0:00:00
Traceback (most recent call last):
  File "/Users/tipe/Aktuell/4MAC/BIN & APP/_Scripts/Local-File-Organizer/main.py", line 337, in <module>
    main()
  File "/Users/tipe/Aktuell/4MAC/BIN & APP/_Scripts/Local-File-Organizer/main.py", line 252, in main
    data_texts = process_text_files(text_tuples, text_inference, silent=silent_mode, log_file=log_file)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tipe/Aktuell/4MAC/BIN & APP/_Scripts/Local-File-Organizer/text_data_processing.py", line 60, in process_text_files
    data = process_single_text_file(args, text_inference, silent=silent, log_file=log_file)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tipe/Aktuell/4MAC/BIN & APP/_Scripts/Local-File-Organizer/text_data_processing.py", line 37, in process_single_text_file
    foldername, filename, description = generate_text_metadata(text, file_path, progress, task_id, text_inference)
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tipe/Aktuell/4MAC/BIN & APP/_Scripts/Local-File-Organizer/text_data_processing.py", line 71, in generate_text_metadata
    description = summarize_text_content(input_text, text_inference)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/local_file_organizer/lib/python3.12/site-packages/nexa/gguf/nexa_inference_text.py", line 277, in create_completion
    return self.model.create_completion(prompt=prompt, **params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/local_file_organizer/lib/python3.12/site-packages/nexa/gguf/llama/llama.py", line 1785, in create_completion
    completion: Completion = next(completion_or_chunks)  # type: ignore
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/local_file_organizer/lib/python3.12/site-packages/nexa/gguf/llama/llama.py", line 1201, in _create_completion
    raise ValueError(
ValueError: Requested tokens (2288) exceed context window of 2048
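The error means the prompt built in `summarize_text_content` is already larger (2288 tokens) than the model's 2048-token context window, so no completion parameter can fix it; the input itself has to be shortened. As a stopgap, one can cap the text before it reaches `create_completion`. This is a sketch, not part of the project: `truncate_to_budget` is a hypothetical helper, and the ~4-characters-per-token ratio is a rough assumption that varies by tokenizer.

```python
def truncate_to_budget(text: str, max_chars: int = 6000) -> str:
    """Crudely cap the input so prompt + reply stay under a 2048-token
    context window. Assumes ~4 characters per token on average, which
    is only a heuristic; adjust max_chars for your tokenizer."""
    if len(text) <= max_chars:
        return text
    # Cut at the limit, then drop the trailing partial word if any.
    return text[:max_chars].rsplit(" ", 1)[0]
```

One could then call `summarize_text_content(truncate_to_budget(input_text), text_inference)` in `generate_text_metadata` until the token-management work mentioned above lands.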