
1 Using pretrained models

1.1 pipeline
1.1.1 from transformers import pipeline

camembert_fill_mask = pipeline("fill-mask", model="camembert-base")
results = camembert_fill_mask("Le camembert est <mask> :)")
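
Each element of results is a dict; a quick way to peek at the top predictions (the field names below are the standard fill-mask pipeline output keys, not something stated in these notes):

for result in results:
    # "token_str" is the predicted fill for <mask>, "score" its probability
    print(result["token_str"], result["score"])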

1.2 instantiate the checkpoint using the model architecture directly

from transformers import CamembertTokenizer, CamembertForMaskedLM

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertForMaskedLM.from_pretrained("camembert-base")

the tokenizer and model must come from the same checkpoint
1.3 it is recommended to use the Auto* classes instead, as these are architecture-agnostic by design
1.3.1 from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModelForMaskedLM.from_pretrained("camembert-base")
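
A minimal sketch of how the tokenizer and model are then used together for the same fill-mask task (the use of torch and of tokenizer.mask_token_id is an assumption based on standard transformers usage, not part of the original notes):

import torch

inputs = tokenizer("Le camembert est <mask> :)", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# find the position of the <mask> token and take the highest-scoring prediction
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))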

2 Sharing pretrained models
2.1 creating new model repositories
2.1.1 Using the push_to_hub API

from huggingface_hub import notebook_login
from transformers import TrainingArguments

notebook_login()
training_args = TrainingArguments("bert-finetuned-mrpc", save_strategy="epoch", push_to_hub=True)

the Trainer will then upload your model to the Hub each time it is saved (here every epoch)
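
Once training finishes, one last call uploads the final version (trainer here is assumed to be a Trainer built with the training_args above):

trainer.push_to_hub()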

model.push_to_hub("dummy-model")
tokenizer.push_to_hub("dummy-model", organization="huggingface", use_auth_token="<TOKEN>")

2.1.2 Using the huggingface_hub Python library

create_repo("dummy-model", organization="huggingface")
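
The call requires an authenticated client; a minimal sketch (logging in via notebook_login is one option, huggingface-cli login in a terminal is another):

from huggingface_hub import notebook_login, create_repo

notebook_login()
# without organization=, the repo is created under your own namespace
create_repo("dummy-model")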

2.1.3 Using the web interface

from huggingface_hub import upload_file

upload_file("<path_to_file>/config.json", path_in_repo="config.json", repo_id="<namespace>/dummy-model")

2.2 The Repository class
2.2.1 Repository class manages a local repository in a git-like manner
2.2.2 repo.git_pull()
repo.git_add()
repo.git_commit()
repo.git_push()
repo.git_tag()
2.3 save the model and tokenizer files to local directory
2.3.1 model.save_pretrained("<path_to_dummy_folder>")
2.3.2 tokenizer.save_pretrained("<path_to_dummy_folder>")
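
A hedged sketch of how 2.2 and 2.3 fit together: clone the Hub repo locally with Repository, save the files into that folder, then add/commit/push (the paths and repo id are placeholders; the constructor arguments follow the older git-based huggingface_hub API):

from huggingface_hub import Repository

repo = Repository("<path_to_dummy_folder>", clone_from="<namespace>/dummy-model")
repo.git_pull()  # make sure the local clone is up to date

model.save_pretrained("<path_to_dummy_folder>")
tokenizer.save_pretrained("<path_to_dummy_folder>")

repo.git_add()  # stages everything that changed in the folder
repo.git_commit("Add model and tokenizer files")
repo.git_push()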
2.4 use git directly
2.4.1 git clone https://huggingface.co/<namespace>/<repo_name>
3 Building a model card
3.1 sections
3.1.1 Model description
3.1.2 Intended uses & limitations
3.1.3 How to use
3.1.4 Limitations and bias
3.1.5 Training data
3.1.6 Training procedure
3.1.7 Evaluation results
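
To make this concrete, a hedged sketch that writes a skeleton README.md with the sections above and uploads it with the upload_file helper from 2.1.3 (the repo id is a placeholder; a real card of course needs prose under each heading):

from huggingface_hub import upload_file

sections = [
    "Model description", "Intended uses & limitations", "How to use",
    "Limitations and bias", "Training data", "Training procedure", "Evaluation results",
]
with open("README.md", "w") as f:
    f.write("\n\n".join(f"## {s}" for s in sections))

upload_file("README.md", path_in_repo="README.md", repo_id="<namespace>/dummy-model")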
4 Transformers library objects that can be shared directly on the Hub
4.1 tokenizer
4.2 model configuration
4.3 model
4.4 Trainer
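
All four expose the same push_to_hub() method; a minimal sketch with a placeholder repo name:

# each call creates the repo if needed and uploads the corresponding files
model.push_to_hub("dummy-model")
tokenizer.push_to_hub("dummy-model")
model.config.push_to_hub("dummy-model")  # the model configuration
trainer.push_to_hub()                    # pushes to the repo set in TrainingArguments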
