
Commit 9a73475

DarkLight1337 authored and rasmith committed
[Doc] [1/N] Initial guide for merged multi-modal processor (vllm-project#11925)
Signed-off-by: DarkLight1337 <[email protected]>
1 parent 6b294a0 commit 9a73475

File tree

19 files changed (+403 / -168 lines)


docs/requirements-docs.txt

Lines changed: 1 addition & 0 deletions
@@ -3,6 +3,7 @@ sphinx-book-theme==1.0.1
 sphinx-copybutton==0.5.2
 myst-parser==3.0.1
 sphinx-argparse==0.4.0
+sphinx-design==0.6.1
 sphinx-togglebutton==0.3.2
 msgspec
 cloudpickle

docs/source/api/multimodal/index.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ vLLM provides experimental support for multi-modal models through the {mod}`vllm
 Multi-modal inputs can be passed alongside text and token prompts to [supported models](#supported-mm-models)
 via the `multi_modal_data` field in {class}`vllm.inputs.PromptType`.
 
-Looking to add your own multi-modal model? Please follow the instructions listed [here](#enabling-multimodal-inputs).
+Looking to add your own multi-modal model? Please follow the instructions listed [here](#supports-multimodal).
 
 ## Module Contents
 
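The diff above points readers at the `multi_modal_data` field of `vllm.inputs.PromptType`. As an illustrative sketch only (the prompt template and the `"<PIL.Image placeholder>"` value are hypothetical stand-ins, not the library's own example), a multi-modal prompt pairs the text prompt with a per-modality data dict:

```python
# Hypothetical sketch of the prompt structure the docs describe: a dict
# combining the text prompt with per-modality data. The image value here is
# a placeholder; in real use it would be e.g. a PIL image loaded by the user.
prompt = {
    "prompt": "USER: <image>\nWhat is shown in this image?\nASSISTANT:",
    "multi_modal_data": {"image": "<PIL.Image placeholder>"},
}

# In actual vLLM usage this dict would be passed to LLM.generate(prompt);
# here we only inspect its shape.
print(sorted(prompt))
```

The two top-level keys are the part the linked docs page is concerned with; everything else is model-specific.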
docs/source/api/multimodal/inputs.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 ## User-facing inputs
 
 ```{eval-rst}
-.. autodata:: vllm.multimodal.MultiModalDataDict
+.. autodata:: vllm.multimodal.inputs.MultiModalDataDict
 ```
 
 ## Internal data structures

docs/source/conf.py

Lines changed: 1 addition & 0 deletions
@@ -43,6 +43,7 @@
     "sphinx.ext.autosummary",
     "myst_parser",
     "sphinxarg.ext",
+    "sphinx_design",
     "sphinx_togglebutton",
 ]
 myst_enable_extensions = [
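The conf.py change above simply registers sphinx-design with Sphinx. A minimal standalone sketch of the resulting extension list (assuming the surrounding entries shown in the diff; the rest of conf.py is omitted):

```python
# Sketch of the relevant fragment of docs/source/conf.py after this commit:
# "sphinx_design" is added to the Sphinx extension list so its directives
# (grids, cards, tabs) become available in the MyST docs.
extensions = [
    "sphinx.ext.autosummary",
    "myst_parser",
    "sphinxarg.ext",
    "sphinx_design",  # newly added by this commit
    "sphinx_togglebutton",
]

print("sphinx_design" in extensions)
```

The matching `sphinx-design==0.6.1` pin in docs/requirements-docs.txt (first diff above) makes the package available at docs build time.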

docs/source/contributing/model/index.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 # Adding a New Model
 
-This section provides more information on how to integrate a [HuggingFace Transformers](https://github.com/huggingface/transformers) model into vLLM.
+This section provides more information on how to integrate a [PyTorch](https://pytorch.org/) model into vLLM.
 
 ```{toctree}
 :caption: Contents
