Model:
ethzanalytics/dolly-v2-7b-sharded
This is a sharded checkpoint (with ~4 GB shards) of the databricks/dolly-v2-7b model. Refer to the original model for all details.
Install transformers, accelerate, and bitsandbytes:
pip install -U -q transformers bitsandbytes accelerate
Load the model in 8-bit, then run inference:
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ethzanalytics/dolly-v2-7b-sharded"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,   # 8-bit quantization via bitsandbytes
    device_map="auto",   # automatic device placement via accelerate
)
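Below is a minimal generation sketch using the loaded model and tokenizer; the prompt text and generation parameters are illustrative and not taken from the original card:

# Example inference; prompt and generation settings are only for illustration.
prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))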