Commit 447c8ed

update transformers version for `replit-code-v1-3b`, `internlm2-chat-7b` and mistral (#11811)

* update transformers version for `replit-code-v1-3b`, `internlm2-chat-7b` and mistral
* remove for default transformers version
1 parent 2fbbb51 commit 447c8ed

File tree

3 files changed: +7 -10 lines changed

python/llm/example/GPU/HuggingFace/LLM/internlm2/README.md

Lines changed: 4 additions & 2 deletions
````diff
@@ -14,7 +14,8 @@ conda create -n llm python=3.11
 conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-pip install transformers==4.36.2
+pip install transformers==4.38.0
+pip install einops
 pip install huggingface_hub
 ```
 
@@ -26,7 +27,8 @@ conda activate llm
 
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-pip install transformers==4.36.2
+pip install transformers==4.38.0
+pip install einops
 pip install huggingface_hub
 ```
 
````

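The internlm2 example now pins a newer transformers release. As a quick sanity check after following the README, the installed version can be read back at runtime; this is a minimal sketch (the `check_pin` helper is illustrative and not part of ipex-llm or this commit):

```python
# Minimal sketch: read back an installed package's version and compare it
# to the pin from the README. `check_pin` is an illustrative helper, not
# part of ipex-llm.
from importlib.metadata import PackageNotFoundError, version


def check_pin(package: str, expected: str) -> bool:
    """True if `package` is installed at exactly version `expected`."""
    try:
        return version(package) == expected
    except PackageNotFoundError:
        return False


if __name__ == "__main__":
    # In an environment prepared per the README this would check the pin:
    print(check_pin("transformers", "4.38.0"))
    # A package that is not installed reports False rather than raising:
    print(check_pin("no-such-package-xyz", "4.38.0"))  # False
```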
python/llm/example/GPU/HuggingFace/LLM/mistral/README.md

Lines changed: 0 additions & 7 deletions
````diff
@@ -4,7 +4,6 @@ In this directory, you will find examples on how you could apply IPEX-LLM INT4 o
 ## Requirements
 To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine, please refer to [here](../../../README.md#requirements) for more information.
 
-**Important: According to [Mistral Troubleshooting](https://huggingface.co/mistralai/Mistral-7B-v0.1#troubleshooting), please make sure you have installed `transformers==4.34.0` to run the example.**
 
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Mistral model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
@@ -16,9 +15,6 @@ conda create -n llm python=3.11
 conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-
-# Refer to https://huggingface.co/mistralai/Mistral-7B-v0.1#troubleshooting, please make sure you are using a stable version of Transformers, 4.34.0 or newer.
-pip install transformers==4.34.0
 ```
 
 #### 1.2 Installation on Windows
@@ -29,9 +25,6 @@ conda activate llm
 
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-
-# Refer to https://huggingface.co/mistralai/Mistral-7B-v0.1#troubleshooting, please make sure you are using a stable version of Transformers, 4.34.0 or newer.
-pip install transformers==4.34.0
 ```
 
 ### 2. Configures OneAPI environment variables for Linux
````

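The removed Mistral note required transformers 4.34.0 or newer; with the pin dropped, the default transformers installed with ipex-llm is expected to satisfy that floor. A hedged sketch of such a floor check (the `meets_minimum` helper is illustrative and assumes plain `X.Y.Z` version strings, not full PEP 440 handling):

```python
# Illustrative floor check: does the installed package meet a minimum
# version? Assumes simple dotted numeric versions; not a full PEP 440
# implementation.
from importlib.metadata import PackageNotFoundError, version


def meets_minimum(package: str, floor: tuple) -> bool:
    """True if `package` is installed at `floor` or newer."""
    try:
        installed = tuple(int(p) for p in version(package).split(".")[:3])
    except (PackageNotFoundError, ValueError):
        return False
    return installed >= floor


# A package that is not installed reports False rather than raising:
print(meets_minimum("no-such-package-xyz", (4, 34, 0)))  # False
```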
python/llm/example/GPU/HuggingFace/LLM/replit/README.md

Lines changed: 3 additions & 1 deletion
````diff
@@ -15,7 +15,7 @@ conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 
-pip install "transformers<4.35"
+pip install "transformers<=4.33.3"
 ```
 
 #### 1.2 Installation on Windows
@@ -26,6 +26,8 @@ conda activate llm
 
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+
+pip install "transformers<=4.33.3"
 ```
 
 ### 2. Configures OneAPI environment variables for Linux
````

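The replit pin tightens from `transformers<4.35` to `transformers<=4.33.3`, so 4.34.x releases are now excluded. The effect of the two bounds can be sketched with plain tuple comparison (an assumption of simple numeric `X.Y.Z` versions; pip itself applies full PEP 440 specifier rules):

```python
# Sketch: why "<=4.33.3" is stricter than "<4.35" for 4.34.x releases.
# Versions are compared as integer tuples; real resolvers use PEP 440.
def vtuple(v: str) -> tuple:
    return tuple(int(p) for p in v.split("."))


assert vtuple("4.33.3") <= vtuple("4.33.3")      # allowed by both specifiers
assert vtuple("4.34.0") < vtuple("4.35")         # allowed by the old "<4.35"
assert not vtuple("4.34.0") <= vtuple("4.33.3")  # excluded by the new pin
print("4.33.3 is now the ceiling")
```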