This repository was archived by the owner on Sep 4, 2025. It is now read-only.

Commit a77856f

WoosukKwon authored and prashantgupta24 committed

[Bugfix] Fix pin_lora error in TPU executor (vllm-project#5760)

1 parent ec2ed1b commit a77856f

File tree

1 file changed (+3, −0 lines)


vllm/executor/tpu_executor.py

Lines changed: 3 additions & 0 deletions
```diff
@@ -82,6 +82,9 @@ def add_lora(self, lora_request: LoRARequest) -> bool:
     def remove_lora(self, lora_id: int) -> bool:
         raise NotImplementedError("LoRA is not implemented for TPU backend.")
 
+    def pin_lora(self, lora_id: int) -> bool:
+        raise NotImplementedError("LoRA is not implemented for TPU backend.")
+
     def list_loras(self) -> Set[int]:
         raise NotImplementedError("LoRA is not implemented for TPU backend.")
 
```
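The fix follows a common interface-stub pattern: a backend that does not support a feature still implements every method of the executor interface, raising `NotImplementedError` with a clear message, so callers hit an explicit error rather than an `AttributeError` from a missing method. A minimal, self-contained sketch of that pattern (the class names below are illustrative stand-ins, not vLLM's actual classes):

```python
# Hedged sketch of the stub pattern used in this fix. FakeExecutorBase and
# FakeTPUExecutor are hypothetical names for illustration only; vLLM's real
# classes live in vllm/executor/.
from typing import Set


class FakeExecutorBase:
    """Stand-in for the executor interface every backend must satisfy."""

    def add_lora(self, lora_request) -> bool:
        raise NotImplementedError

    def remove_lora(self, lora_id: int) -> bool:
        raise NotImplementedError

    def pin_lora(self, lora_id: int) -> bool:
        raise NotImplementedError

    def list_loras(self) -> Set[int]:
        raise NotImplementedError


class FakeTPUExecutor(FakeExecutorBase):
    """TPU-style backend: LoRA is unsupported, so every LoRA method is a
    stub that raises with an explicit backend-specific message."""

    def remove_lora(self, lora_id: int) -> bool:
        raise NotImplementedError("LoRA is not implemented for TPU backend.")

    # This is the method the commit adds: before the fix, calling pin_lora
    # fell through to the base class (or was simply absent), producing a
    # less helpful error.
    def pin_lora(self, lora_id: int) -> bool:
        raise NotImplementedError("LoRA is not implemented for TPU backend.")

    def list_loras(self) -> Set[int]:
        raise NotImplementedError("LoRA is not implemented for TPU backend.")


if __name__ == "__main__":
    executor = FakeTPUExecutor()
    try:
        executor.pin_lora(0)
    except NotImplementedError as exc:
        print(exc)  # clear, backend-specific message
```

Keeping the stubs in the concrete executor (rather than relying on the base class) makes the unsupported-feature message specific to the backend the user actually configured.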
