1 change: 1 addition & 0 deletions README.md
@@ -276,6 +276,7 @@ Over 50 models have been optimized/verified on `ipex-llm`, including *LLaMA/LLaM
| Baichuan | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/baichuan) | [link](python/llm/example/GPU/HuggingFace/LLM/baichuan) |
| Baichuan2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/baichuan2) | [link](python/llm/example/GPU/HuggingFace/LLM/baichuan2) |
| InternLM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm) | [link](python/llm/example/GPU/HuggingFace/LLM/internlm) |
| InternVL2 | | [link](python/llm/example/GPU/HuggingFace/Multimodal/internvl2) |
| Qwen | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen) | [link](python/llm/example/GPU/HuggingFace/LLM/qwen) |
| Qwen1.5 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen1.5) | [link](python/llm/example/GPU/HuggingFace/LLM/qwen1.5) |
| Qwen2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen2) | [link](python/llm/example/GPU/HuggingFace/LLM/qwen2) |
1 change: 1 addition & 0 deletions README.zh-CN.md
@@ -276,6 +276,7 @@ See the demo of running [*Text-Generation-WebUI*](https://ipex-llm.readthedocs.i
| Baichuan | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/baichuan) | [link](python/llm/example/GPU/HuggingFace/LLM/baichuan) |
| Baichuan2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/baichuan2) | [link](python/llm/example/GPU/HuggingFace/LLM/baichuan2) |
| InternLM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm) | [link](python/llm/example/GPU/HuggingFace/LLM/internlm) |
| InternVL2 | | [link](python/llm/example/GPU/HuggingFace/Multimodal/internvl2) |
| Qwen | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen) | [link](python/llm/example/GPU/HuggingFace/LLM/qwen) |
| Qwen1.5 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen1.5) | [link](python/llm/example/GPU/HuggingFace/LLM/qwen1.5) |
| Qwen2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen2) | [link](python/llm/example/GPU/HuggingFace/LLM/qwen2) |
101 changes: 101 additions & 0 deletions python/llm/example/GPU/HuggingFace/Multimodal/internvl2/chat.py
@@ -0,0 +1,101 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


import os
import time
import argparse
import requests
import torch
from PIL import Image
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer, CLIPImageProcessor


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `chat()` API for OpenGVLab/InternVL2-4B model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="OpenGVLab/InternVL2-4B",
                        help='The huggingface repo id for the OpenGVLab/InternVL2-4B model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--image-url-or-path', type=str,
                        default='https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg',
                        help='The URL or path to the image to infer')
    parser.add_argument('--prompt', type=str, default="What is in the image?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=64, help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path
    image_path = args.image_url_or_path
    n_predict = args.n_predict

    # Load the model in 4-bit, which converts the relevant layers in the model into INT4 format.
    # When running LLMs on an Intel iGPU on Windows, we recommend setting `cpu_embedding=True` in
    # the from_pretrained function, which lets the memory-intensive embedding layer run on the
    # CPU instead of the iGPU.
    model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True,
                                                 load_in_low_bit="sym_int4",
                                                 modules_to_not_convert=["vision_model"])
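    # For Windows iGPU users, the `cpu_embedding=True` setting recommended above would be
    # passed in this same call; a sketch of that variant (not part of the original run):
    # model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True,
    #                                              load_in_low_bit="sym_int4",
    #                                              modules_to_not_convert=["vision_model"],
    #                                              cpu_embedding=True)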
    model = model.half().to('xpu')
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)
    model.eval()

    query = args.prompt
    image_processor = CLIPImageProcessor.from_pretrained(model_path)

    if os.path.exists(image_path):
        image = Image.open(image_path).convert('RGB')
    else:
        image = Image.open(requests.get(image_path, stream=True).raw).convert('RGB')

    pixel_values = image_processor(images=[image], return_tensors='pt').pixel_values
    pixel_values = pixel_values.to('xpu')

    question = "<image>" + query

    generation_config = {
        "max_new_tokens": n_predict,
        "do_sample": False,
    }
**@rnwang04** (Contributor) commented on Sep 20, 2024:

> add `with torch.inference_mode():` context manager for inference
    with torch.inference_mode():
        # The ipex_llm model needs a warmup run first; only then can inference time be measured accurately
        model.chat(
            pixel_values=None,
            question=question,
            generation_config=generation_config,
            tokenizer=tokenizer,
        )

        st = time.time()
        res = model.chat(
            tokenizer=tokenizer,
            pixel_values=pixel_values,
            question=question,
            generation_config=generation_config,
            history=[]
        )
        torch.xpu.synchronize()
        end = time.time()

    print(f'Inference time: {end-st} s')
    print('-'*20, 'Input Image', '-'*20)
    print(image_path)
    print('-'*20, 'Input Prompt', '-'*20)
    print(args.prompt)
    print('-'*20, 'Chat Output', '-'*20)
    print(res)
137 changes: 137 additions & 0 deletions python/llm/example/GPU/HuggingFace/Multimodal/internvl2/readme.md
@@ -0,0 +1,137 @@
# InternVL2
In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations to InternVL2 models on [Intel GPUs](../../../README.md). For illustration purposes, we use [OpenGVLab/InternVL2-4B](https://huggingface.co/OpenGVLab/InternVL2-4B) as a reference InternVL2 model.

## 0. Requirements
To run these examples with IPEX-LLM on Intel GPUs, there are some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.

## Example: Predict Tokens using `chat()` API
In the example [chat.py](./chat.py), we show a basic use case of an InternVL2-4B model predicting the next N tokens using the `chat()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
### 1. Install
#### 1.1 Installation on Linux
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11
conda activate llm
# the command below will install intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

pip install einops timm

```

#### 1.2 Installation on Windows
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11 libuv
conda activate llm

# the command below will install intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

pip install einops timm

```

### 2. Configure OneAPI Environment Variables for Linux

> [!NOTE]
> Skip this step if you are running on Windows.

This step is required on Linux when oneAPI is installed via APT or an offline installer; skip it for pip-installed oneAPI.

```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
#### 3.1 Configurations for Linux
<details>

<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
```

</details>

<details>

<summary>For Intel Data Center GPU Max Series</summary>

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
```
> Note: `libtcmalloc.so` can be installed with `conda install -c conda-forge -y gperftools=2.10`.
</details>

<details>

<summary>For Intel iGPU</summary>

```bash
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
```

</details>

#### 3.2 Configurations for Windows
<details>

<summary>For Intel iGPU</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```

</details>

<details>

<summary>For Intel Arc™ A-Series Graphics</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
```

</details>

> [!NOTE]
> The first time each model runs on an Intel iGPU, or on Intel Arc™ A300-Series or Pro A60 graphics, it may take several minutes to compile.

### 4. Running examples

- chat with a specified prompt:

  ```bash
  python ./chat.py --prompt 'What is in the image?'
  ```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the InternVL2 model (e.g. `OpenGVLab/InternVL2-4B`) to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'OpenGVLab/InternVL2-4B'`.
- `--image-url-or-path IMAGE_URL_OR_PATH`: argument defining the image to be inferred. It defaults to `'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'What is in the image?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `64`.
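All arguments can also be passed explicitly. For example, the following invocation, which simply spells out the defaults listed above, is equivalent to running `chat.py` with no arguments:

```bash
python ./chat.py --repo-id-or-model-path 'OpenGVLab/InternVL2-4B' \
                 --image-url-or-path 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg' \
                 --prompt 'What is in the image?' \
                 --n-predict 64
```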

#### Sample Output

#### [OpenGVLab/InternVL2-4B](https://huggingface.co/OpenGVLab/InternVL2-4B)

```log
-------------------- Input Image --------------------
https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg
-------------------- Input Prompt --------------------
What is in the image?
-------------------- Chat Output --------------------
The image shows a tiger lying on the grass.
```

The sample input image is:

<a href="https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg"><img width=400px src="https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg" ></a>