
HF compatibility issue #304

Closed · BenoitDalFerro opened this issue Nov 7, 2022 · 3 comments · Fixed by #306
Labels: bug / fix (Something isn't working), help wanted (Extra attention is needed)

Comments

BenoitDalFerro commented Nov 7, 2022

Suggest assigning to @rohitgr7.

🐛 Bug

MaskedLanguageModelingTransformer(pretrained_model_name_or_path=...) fails to accept either the Hugging Face model name:

modelname = 'flaubert/flaubert_base_cased'
model = LanguageModelingTransformer(pretrained_model_name_or_path=modelname)

or the path to a locally saved copy from a successful instantiation, such as:

model = FlaubertWithLMHeadModel.from_pretrained(modelname)
model.save_pretrained('./saved_models/FlauBERT_test')
model = LanguageModelingTransformer(pretrained_model_name_or_path=modelpath)

Both calls fail with:

OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with use_auth_token or log in with huggingface-cli login and pass use_auth_token=True.

Note the logical mismatch between the two arguments: the model is passed as a Hugging Face model name string or local path, while the tokenizer is passed as an instantiated object. Either the first logic is followed consistently, so that the model name or path alone suffices to instantiate the relevant tokenizer (from the name or the local config.json), or the model is instantiated separately and passed as an object. The second approach is clearer, more transparent, less cumbersome to implement, and less prone to compatibility breakage.
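To make the contrast concrete, here is a minimal sketch of the two designs. The wrapper classes below are hypothetical stand-ins, not the lightning-transformers API:

from transformers import AutoModelForMaskedLM, AutoTokenizer, FlaubertTokenizer, FlaubertWithLMHeadModel

class NameDrivenTransformer:
    # Design 1 (hypothetical): a single name-or-path string drives everything;
    # the wrapper resolves both the model and the matching tokenizer itself.
    def __init__(self, pretrained_model_name_or_path):
        self.model = AutoModelForMaskedLM.from_pretrained(pretrained_model_name_or_path)
        self.tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path)

class ObjectDrivenTransformer:
    # Design 2 (hypothetical): the caller instantiates the Hugging Face objects
    # and hands them over; no Hub resolution happens inside the wrapper, so
    # nothing can silently fall back to None.
    def __init__(self, model, tokenizer):
        self.model = model
        self.tokenizer = tokenizer

wrapped1 = NameDrivenTransformer('flaubert/flaubert_base_cased')

hf_model = FlaubertWithLMHeadModel.from_pretrained('flaubert/flaubert_base_cased')
hf_tokenizer = FlaubertTokenizer.from_pretrained('flaubert/flaubert_base_cased')
wrapped2 = ObjectDrivenTransformer(model=hf_model, tokenizer=hf_tokenizer)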

To Reproduce

modelname = 'flaubert/flaubert_base_cased'
modelpath = './saved_models/FlauBERT_test'
model = FlaubertWithLMHeadModel.from_pretrained(modelname).to(device)
model.save_pretrained(modelpath)
LM_tokenizer = FlaubertTokenizer.from_pretrained(pretrained_model_name_or_path=modelname, do_lowercase=False)

with init_empty_weights():
    model = MaskedLanguageModelingTransformer(
                                              pretrained_model=modelname,  # also fails with modelpath
                                              tokenizer=FlaubertTokenizer.from_pretrained(pretrained_model_name_or_path=modelname, do_lowercase=False),
                                              load_weights=False,
                                              low_cpu_mem_usage=True,
                                              device_map="auto"
                                              #deepspeed_sharding=True,  # Linux only, defer initialization of the model to shard/load pre-train weights
                                              )
> --------------------------------------------------------------------------
> HTTPError                                 Traceback (most recent call last)
> File ~\miniconda3\envs\MyEnv\lib\site-packages\huggingface_hub\utils\_errors.py:213, in hf_raise_for_status(response, endpoint_name)
>     212 try:
> --> 213     response.raise_for_status()
>     214 except HTTPError as e:
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\requests\models.py:1021, in Response.raise_for_status(self)
>    1020 if http_error_msg:
> -> 1021     raise HTTPError(http_error_msg, response=self)
> 
> HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json
> 
> The above exception was the direct cause of the following exception:
> 
> RepositoryNotFoundError                   Traceback (most recent call last)
> File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\utils\hub.py:409, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
>     407 try:
>     408     # Load from URL or cache if already cached
> --> 409     resolved_file = hf_hub_download(
>     410         path_or_repo_id,
>     411         filename,
>     412         subfolder=None if len(subfolder) == 0 else subfolder,
>     413         revision=revision,
>     414         cache_dir=cache_dir,
>     415         user_agent=user_agent,
>     416         force_download=force_download,
>     417         proxies=proxies,
>     418         resume_download=resume_download,
>     419         use_auth_token=use_auth_token,
>     420         local_files_only=local_files_only,
>     421     )
>     423 except RepositoryNotFoundError:
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\huggingface_hub\file_download.py:1053, in hf_hub_download(repo_id, filename, subfolder, repo_type, revision, library_name, library_version, cache_dir, user_agent, force_download, force_filename, proxies, etag_timeout, resume_download, use_auth_token, local_files_only, legacy_cache_layout)
>    1052 try:
> -> 1053     metadata = get_hf_file_metadata(
>    1054         url=url,
>    1055         use_auth_token=use_auth_token,
>    1056         proxies=proxies,
>    1057         timeout=etag_timeout,
>    1058     )
>    1059 except EntryNotFoundError as http_error:
>    1060     # Cache the non-existence of the file and raise
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\huggingface_hub\file_download.py:1359, in get_hf_file_metadata(url, use_auth_token, proxies, timeout)
>    1350 r = _request_wrapper(
>    1351     method="HEAD",
>    1352     url=url,
>    (...)
>    1357     timeout=timeout,
>    1358 )
> -> 1359 hf_raise_for_status(r)
>    1361 # Return
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\huggingface_hub\utils\_errors.py:242, in hf_raise_for_status(response, endpoint_name)
>     234     message = (
>     235         f"{response.status_code} Client Error."
>     236         + "\n\n"
>    (...)
>     240         + "\nIf the repo is private, make sure you are authenticated."
>     241     )
> --> 242     raise RepositoryNotFoundError(message, response) from e
>     244 elif response.status_code == 400:
> 
> RepositoryNotFoundError: 401 Client Error. (Request ID: Rw-R4i-FiiczgT3V91VYq)
> 
> Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
> Please make sure you specified the correct `repo_id` and `repo_type`.
> If the repo is private, make sure you are authenticated.
> 
> During handling of the above exception, another exception occurred:
> 
> OSError                                   Traceback (most recent call last)
> Input In [16], in <cell line: 1>()
>       1 with init_empty_weights():
> ----> 2     model = MaskedLanguageModelingTransformer(
>       3                                               pretrained_model=modelname,
>       4                                               tokenizer=FlaubertTokenizer.from_pretrained(pretrained_model_name_or_path=modelname, do_lowercase=False),
>       5                                               load_weights=False,
>       6                                               low_cpu_mem_usage=True,
>       7                                               device_map="auto"
>       8                                               #deepspeed_sharding=True,  # Linux only, defer initialization of the model to shard/load pre-train weights
>       9                                               )
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\lightning_transformers\task\nlp\masked_language_modeling\model.py:34, in MaskedLanguageModelingTransformer.__init__(self, downstream_model_type, *args, **kwargs)
>      31 def __init__(
>      32     self, *args, downstream_model_type: Type[_BaseAutoModelClass] = transformers.AutoModelForMaskedLM, **kwargs
>      33 ) -> None:
> ---> 34     super().__init__(downstream_model_type, *args, **kwargs)
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\lightning_transformers\core\model.py:64, in TaskTransformer.__init__(self, downstream_model_type, pretrained_model_name_or_path, tokenizer, pipeline_kwargs, load_weights, deepspeed_sharding, **model_data_kwargs)
>      62 self.pretrained_model_name_or_path = pretrained_model_name_or_path
>      63 if not self.deepspeed_sharding:
> ---> 64     self.initialize_model(self.pretrained_model_name_or_path)
>      65 self._tokenizer = tokenizer  # necessary for hf_pipeline
>      66 self._hf_pipeline = None
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\lightning_transformers\core\model.py:79, in TaskTransformer.initialize_model(self, pretrained_model_name_or_path)
>      75     self.model = self.downstream_model_type.from_pretrained(
>      76         pretrained_model_name_or_path, **self.model_data_kwargs
>      77     )
>      78 else:
> ---> 79     config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **self.model_data_kwargs)
>      80     self.model = self.downstream_model_type.from_config(config)
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\models\auto\configuration_auto.py:776, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
>     774 kwargs["name_or_path"] = pretrained_model_name_or_path
>     775 trust_remote_code = kwargs.pop("trust_remote_code", False)
> --> 776 config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
>     777 if "auto_map" in config_dict and "AutoConfig" in config_dict["auto_map"]:
>     778     if not trust_remote_code:
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\configuration_utils.py:559, in PretrainedConfig.get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
>     557 original_kwargs = copy.deepcopy(kwargs)
>     558 # Get config dict associated with the base config file
> --> 559 config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
>     560 if "_commit_hash" in config_dict:
>     561     original_kwargs["_commit_hash"] = config_dict["_commit_hash"]
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\configuration_utils.py:614, in PretrainedConfig._get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
>     610 configuration_file = kwargs.pop("_configuration_file", CONFIG_NAME)
>     612 try:
>     613     # Load from local folder or from cache or download from model Hub and cache
> --> 614     resolved_config_file = cached_file(
>     615         pretrained_model_name_or_path,
>     616         configuration_file,
>     617         cache_dir=cache_dir,
>     618         force_download=force_download,
>     619         proxies=proxies,
>     620         resume_download=resume_download,
>     621         local_files_only=local_files_only,
>     622         use_auth_token=use_auth_token,
>     623         user_agent=user_agent,
>     624         revision=revision,
>     625         subfolder=subfolder,
>     626         _commit_hash=commit_hash,
>     627     )
>     628     commit_hash = extract_commit_hash(resolved_config_file, commit_hash)
>     629 except EnvironmentError:
>     630     # Raise any environment error raise by `cached_file`. It will have a helpful error message adapted to
>     631     # the original exception.
> 
> File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\utils\hub.py:424, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
>     409     resolved_file = hf_hub_download(
>     410         path_or_repo_id,
>     411         filename,
>    (...)
>     420         local_files_only=local_files_only,
>     421     )
>     423 except RepositoryNotFoundError:
> --> 424     raise EnvironmentError(
>     425         f"{path_or_repo_id} is not a local folder and is not a valid model identifier "
>     426         "listed on '[https://huggingface.co/models'\nIf](https://huggingface.co/models'/nIf) this is a private repository, make sure to "
>     427         "pass a token having permission to this repo with `use_auth_token` or log in with "
>     428         "`huggingface-cli login` and pass `use_auth_token=True`."
>     429     )
>     430 except RevisionNotFoundError:
>     431     raise EnvironmentError(
>     432         f"{revision} is not a valid git identifier (branch name, tag name or commit id) that exists "
>     433         "for this model name. Check the model page at "
>     434         f"'[https://huggingface.co/{](https://huggingface.co/%7Bpath_or_repo_id)[path_or_repo_id](https://huggingface.co/%7Bpath_or_repo_id)}' for available revisions."
>     435     )
> 
> OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
> If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
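The traceback suggests a plausible mechanism: the TaskTransformer.__init__ frame shown above accepts pretrained_model_name_or_path explicitly and collects everything else into **model_data_kwargs, so a call that spells the keyword pretrained_model= would be silently absorbed into the kwargs while pretrained_model_name_or_path stays None, which is exactly the None that reaches AutoConfig.from_pretrained. A minimal self-contained sketch of that failure mode (the class below is an illustration of the pattern, not the library source):

class TaskTransformerSketch:
    # Stand-in mimicking the signature visible in the traceback.
    def __init__(self, pretrained_model_name_or_path=None, tokenizer=None, **model_data_kwargs):
        # A mistyped keyword such as pretrained_model= does not raise a
        # TypeError; it is silently absorbed into **model_data_kwargs.
        self.pretrained_model_name_or_path = pretrained_model_name_or_path
        self.model_data_kwargs = model_data_kwargs

sketch = TaskTransformerSketch(pretrained_model='flaubert/flaubert_base_cased')
print(sketch.pretrained_model_name_or_path)  # None -> "None is not a local folder..."
print(sketch.model_data_kwargs)              # {'pretrained_model': 'flaubert/flaubert_base_cased'}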

Code sample

# -*- coding: utf-8 -*-
import os
from accelerate import (init_empty_weights)
from transformers import (FlaubertTokenizer, FlaubertWithLMHeadModel, TrainingArguments, DataCollatorForLanguageModeling)
from datasets import (load_dataset)
import pytorch_lightning as pl
from lightning_transformers.task.nlp.masked_language_modeling import (MaskedLanguageModelingTransformer, MaskedLanguageModelingDataModule)
import torch
from tqdm.notebook import tqdm
torch.cuda.empty_cache()
dtype = torch.cuda.FloatTensor
torch.backends.cudnn.benchmark = True
n_gpus = torch.cuda.device_count()
if n_gpus <= 1:
    local_rank = -1
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
modelname = 'flaubert/flaubert_base_cased'
modelpath = './saved_models/FlauBERT_test'
model = FlaubertWithLMHeadModel.from_pretrained(modelname).to(device)
model.save_pretrained(modelpath)
LM_tokenizer = FlaubertTokenizer.from_pretrained(pretrained_model_name_or_path=modelname, do_lowercase=False)
with init_empty_weights():
    model = MaskedLanguageModelingTransformer(
                                              pretrained_model=modelname,  # also fails with modelpath
                                              tokenizer=FlaubertTokenizer.from_pretrained(pretrained_model_name_or_path=modelname, do_lowercase=False),
                                              load_weights=False,
                                              low_cpu_mem_usage=True,
                                              device_map="auto"
                                              #deepspeed_sharding=True,  # Linux only, defer initialization of the model to shard/load pre-train weights
                                              )

Further code, not executed owing to the failure above:

datamodel = MaskedLanguageModelingDataModule(
    batch_size=2,
    dataset_name="wikitext",
    dataset_config_name="wikitext-2-raw-v1",
    tokenizer=LM_tokenizer,
)

trainer = pl.Trainer(
    accelerator="auto",
    devices="auto",
    #strategy="deepspeed_stage_3",
    precision=16,
    max_epochs=1,
    #strategy='dp',
    #auto_lr_find=True,
    #detect_anomaly=True
    #val_check_interval=0
    #progress_bar_refresh_rate=50
)
trainer.fit(model, datamodel)
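If the root cause is the keyword mismatch suggested above, a variant of the failing call with the keyword spelled exactly as in the traceback's signature (pretrained_model_name_or_path= rather than pretrained_model=) may behave differently; a sketch only, untested here:

with init_empty_weights():
    model = MaskedLanguageModelingTransformer(
        pretrained_model_name_or_path=modelname,  # keyword as in the traceback's signature
        tokenizer=LM_tokenizer,
        load_weights=False,
        low_cpu_mem_usage=True,
        device_map="auto",
    )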

Expected behavior

MaskedLanguageModelingTransformer instantiates from either the Hugging Face model name or the local save path without error.

Environment
Lightning-Transformers Version: 0.2.4
PyTorch Version: 1.12.1 (build py3.9_cuda11.6_cudnn8_0)
OS: Windows 10
How you installed PyTorch: conda
Python version: 3.9
CUDA/cuDNN version: CUDA 11.6, cuDNN 8.0
GPU models and configuration: NVIDIA Quadro RTX 3000
Any other relevant information: none
Additional context

Comes on top of 0.2.3, persists despite the 0.2.4 release, and accompanies the issue hit when passing the model as an object.

BenoitDalFerro added the bug / fix and help wanted labels on Nov 7, 2022

Borda changed the title from "OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with use_auth_token or log in with huggingface-cli login and pass use_auth_token=True." to "HF compatibility issue" on Nov 7, 2022
Borda (Member) commented Nov 7, 2022

@ethanwharris I think it was the same one you faced a while back with Flash, can you quickly point to the root? 🐰

ethanwharris (Contributor) commented

Hi there, I'm afraid I don't remember seeing this issue before or what was needed to fix it if we did. Sorry I couldn't be of more help!
