241 changes: 237 additions & 4 deletions docs/pretrained.rst
@@ -1,7 +1,7 @@
.. _pretrained-info-page:

=================================
Pre-trained Neural Network Models
Pretrained Neural Network Models
=================================

^^^^^^^^^^^^^^^^^^^^^^
@@ -29,8 +29,62 @@ They share the same input output configuration defined below:

.. collapse:: Model names

- googlenet-kather100k
- alexnet-kather100k
- resnet18-kather100k
- resnet34-kather100k
- resnet50-kather100k
- resnet101-kather100k
- resnext50_32x4d-kather100k
- resnext101_32x8d-kather100k
- wide_resnet50_2-kather100k
- wide_resnet101_2-kather100k
- densenet121-kather100k
- densenet161-kather100k
- densenet169-kather100k
- densenet201-kather100k
- mobilenet_v2-kather100k
- mobilenet_v3_large-kather100k
- mobilenet_v3_small-kather100k
- googlenet-kather100k
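
Any of these names can be passed straight to the patch prediction engine. A minimal usage sketch
(the patch paths are placeholders for your own data):

.. collapse:: Example usage

    .. code-block:: python

        from tiatoolbox.models import PatchPredictor

        # Weights for the named model are downloaded automatically; a local
        # checkpoint can be supplied via `pretrained_weights` instead.
        predictor = PatchPredictor(
            pretrained_model="resnet18-kather100k",
            batch_size=16,
        )
        # "patch" mode expects inputs already cropped to the model's input
        # shape; file paths and numpy arrays are both accepted.
        output = predictor.predict(
            imgs=["patch_1.png", "patch_2.png"],  # placeholder paths
            mode="patch",
        )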

-----------------------------
Patch Camelyon (PCam) Dataset
-----------------------------

The following models are trained using the `PCam dataset <https://github.com/basveeling/pcam/>`_.
They share the same input/output configuration defined below:

.. collapse:: Input Output Configuration Details

.. code-block:: python

from tiatoolbox.models import IOPatchPredictorConfig
ioconfig = IOPatchPredictorConfig(
patch_input_shape=(96, 96),
stride_shape=(96, 96),
input_resolutions=[{"resolution": 1.0, "units": "mpp"}]
)


.. collapse:: Model names

- alexnet-pcam
- resnet18-pcam
- resnet34-pcam
- resnet50-pcam
- resnet101-pcam
- resnext50_32x4d-pcam
- resnext101_32x8d-pcam
- wide_resnet50_2-pcam
- wide_resnet101_2-pcam
- densenet121-pcam
- densenet161-pcam
- densenet169-pcam
- densenet201-pcam
- mobilenet_v2-pcam
- mobilenet_v3_large-pcam
- mobilenet_v3_small-pcam
- googlenet-pcam
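
The configuration above is bundled with each model; an ``IOPatchPredictorConfig`` can also be
passed explicitly at prediction time to override it. A hedged sketch (the WSI path is a placeholder):

.. collapse:: Example usage

    .. code-block:: python

        from tiatoolbox.models import IOPatchPredictorConfig, PatchPredictor

        predictor = PatchPredictor(pretrained_model="resnet18-pcam")
        # Halve the stride for denser, overlapping patch sampling.
        ioconfig = IOPatchPredictorConfig(
            patch_input_shape=(96, 96),
            stride_shape=(48, 48),
            input_resolutions=[{"resolution": 1.0, "units": "mpp"}],
        )
        output = predictor.predict(
            imgs=["sample_wsi.svs"],  # placeholder path
            mode="wsi",
            ioconfig=ioconfig,
        )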


^^^^^^^^^^^^^^^^^^^^^^
@@ -41,7 +95,7 @@ Semantic Segmentation
Tissue Masking
--------------------

The following models are trained using internal data of TIACentre.
The following models are trained using internal data of TIA Centre.
They share the same input/output configuration defined below:

.. collapse:: Input Output Configuration Details
@@ -72,7 +126,7 @@ They share the same input output configuration defined below:
Breast Cancer
--------------------

The following models are trained using `BCSS dataset <https://bcsegmentation.grand-challenge.org/>`_.
The following models are trained using the `BCSS dataset <https://bcsegmentation.grand-challenge.org/>`_.
They share the same input/output configuration defined below:

.. collapse:: Input Output Configuration Details
@@ -98,3 +152,182 @@ They share the same input output configuration defined below:

- fcn_resnet50_unet-bcss
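
A hedged usage sketch for this model with the semantic segmentation engine (paths are placeholders):

.. collapse:: Example usage

    .. code-block:: python

        from tiatoolbox.models import SemanticSegmentor

        segmentor = SemanticSegmentor(
            pretrained_model="fcn_resnet50_unet-bcss",
            batch_size=4,
            num_loader_workers=2,
        )
        # One set of prediction files per input image is written under
        # `save_dir`.
        output = segmentor.predict(
            imgs=["sample_wsi.svs"],  # placeholder path
            save_dir="bcss_results/",
            mode="wsi",
        )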


^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nucleus Instance Segmentation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--------------------
PanNuke Dataset
--------------------

We provide the following models trained using the `PanNuke dataset <https://warwick.ac.uk/fac/cross_fac/tia/data/pannuke>`_, which use the following
input/output configuration:

.. collapse:: Input Output Configuration Details

.. code-block:: python

        from tiatoolbox.models import IOSegmentorConfig
        ioconfig = IOSegmentorConfig(
            input_resolutions=[
                {'units': 'mpp', 'resolution': 0.25}
            ],
            output_resolutions=[
                {'units': 'mpp', 'resolution': 0.25},
                {'units': 'mpp', 'resolution': 0.25},
                {'units': 'mpp', 'resolution': 0.25}
            ],
            margin=128,
            tile_shape=[1024, 1024],
            patch_input_shape=(256, 256),
            patch_output_shape=(164, 164),
            stride_shape=(164, 164),
            save_resolution={'units': 'mpp', 'resolution': 0.25}
        )

.. collapse:: Model names

- hovernet_fast-pannuke
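
A hedged usage sketch with the nucleus instance segmentation engine (paths are placeholders):

.. collapse:: Example usage

    .. code-block:: python

        from tiatoolbox.models import NucleusInstanceSegmentor

        inst_segmentor = NucleusInstanceSegmentor(
            pretrained_model="hovernet_fast-pannuke",
            batch_size=4,
            num_loader_workers=2,
        )
        # Per-nucleus results (contours, centroids, types) are written
        # under `save_dir`.
        output = inst_segmentor.predict(
            imgs=["sample_wsi.svs"],  # placeholder path
            save_dir="pannuke_results/",
            mode="wsi",
        )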


--------------------
MoNuSAC Dataset
--------------------

We provide the following models trained using the `MoNuSAC dataset <https://monusac.grand-challenge.org/>`_, which use the following
input/output configuration:

.. collapse:: Input Output Configuration Details

.. code-block:: python

        from tiatoolbox.models import IOSegmentorConfig
        ioconfig = IOSegmentorConfig(
            input_resolutions=[
                {'units': 'mpp', 'resolution': 0.25}
            ],
            output_resolutions=[
                {'units': 'mpp', 'resolution': 0.25},
                {'units': 'mpp', 'resolution': 0.25},
                {'units': 'mpp', 'resolution': 0.25}
            ],
            margin=128,
            tile_shape=[1024, 1024],
            patch_input_shape=(256, 256),
            patch_output_shape=(164, 164),
            stride_shape=(164, 164),
            save_resolution={'units': 'mpp', 'resolution': 0.25}
        )

.. collapse:: Model names

- hovernet_fast-monusac


--------------------
CoNSeP Dataset
--------------------

We provide the following models trained using the `CoNSeP dataset <https://warwick.ac.uk/fac/cross_fac/tia/data/hovernet/>`_, which use the following
input/output configuration:

.. collapse:: Input Output Configuration Details

.. code-block:: python

        from tiatoolbox.models import IOSegmentorConfig
        ioconfig = IOSegmentorConfig(
            input_resolutions=[
                {'units': 'mpp', 'resolution': 0.25}
            ],
            output_resolutions=[
                {'units': 'mpp', 'resolution': 0.25},
                {'units': 'mpp', 'resolution': 0.25},
                {'units': 'mpp', 'resolution': 0.25}
            ],
            margin=128,
            tile_shape=[1024, 1024],
            patch_input_shape=(270, 270),
            patch_output_shape=(80, 80),
            stride_shape=(80, 80),
            save_resolution={'units': 'mpp', 'resolution': 0.25}
        )

.. collapse:: Model names

- hovernet_original-consep


--------------------
Kumar Dataset
--------------------

We provide the following models trained using the `Kumar dataset <https://monuseg.grand-challenge.org/>`_, which use the following
input/output configuration:

.. collapse:: Input Output Configuration Details

.. code-block:: python

        from tiatoolbox.models import IOSegmentorConfig
        ioconfig = IOSegmentorConfig(
            input_resolutions=[
                {'units': 'mpp', 'resolution': 0.25}
            ],
            output_resolutions=[
                {'units': 'mpp', 'resolution': 0.25},
                {'units': 'mpp', 'resolution': 0.25},
                {'units': 'mpp', 'resolution': 0.25}
            ],
            margin=128,
            tile_shape=[1024, 1024],
            patch_input_shape=(270, 270),
            patch_output_shape=(80, 80),
            stride_shape=(80, 80),
            save_resolution={'units': 'mpp', 'resolution': 0.25}
        )

.. collapse:: Model names

- hovernet_original-kumar


^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Multi-Task Segmentation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

----------------------------------------
Oral Epithelial Dysplasia (OED) Dataset
----------------------------------------

We provide the following model trained using a private OED dataset. The model outputs nuclear instance segmentation
and classification results, as well as semantic segmentation of epithelial layers. The model uses the following
input/output configuration:

.. collapse:: Input Output Configuration Details

.. code-block:: python

        from tiatoolbox.models import IOSegmentorConfig
        ioconfig = IOSegmentorConfig(
            input_resolutions=[
                {'units': 'mpp', 'resolution': 0.5}
            ],
            output_resolutions=[
                {'units': 'mpp', 'resolution': 0.5},
                {'units': 'mpp', 'resolution': 0.5},
                {'units': 'mpp', 'resolution': 0.5},
                {'units': 'mpp', 'resolution': 0.5}
            ],
            margin=128,
            tile_shape=[1024, 1024],
            patch_input_shape=(256, 256),
            patch_output_shape=(164, 164),
            stride_shape=(164, 164),
            save_resolution={'units': 'mpp', 'resolution': 0.5}
        )

.. collapse:: Model names

- hovernetplus-oed
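
A hedged usage sketch, assuming your tiatoolbox version ships the multi-task engine
(``MultiTaskSegmentor``); paths are placeholders:

.. collapse:: Example usage

    .. code-block:: python

        # Availability of this engine depends on the tiatoolbox version.
        from tiatoolbox.models.engine.multi_task_segmentor import (
            MultiTaskSegmentor,
        )

        multi_segmentor = MultiTaskSegmentor(
            pretrained_model="hovernetplus-oed",
            batch_size=4,
            num_loader_workers=2,
        )
        # Outputs both nuclear instance predictions and the epithelial
        # layer segmentation, written under `save_dir`.
        output = multi_segmentor.predict(
            imgs=["sample_wsi.svs"],  # placeholder path
            save_dir="oed_results/",
            mode="wsi",
        )
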
3 changes: 2 additions & 1 deletion tiatoolbox/models/engine/nucleus_instance_segmentor.py
@@ -282,7 +282,8 @@ class NucleusInstanceSegmentor(SemanticSegmentor):
weights already loaded. Default is `None`. If provided,
`pretrained_model` argument is ignored.
        pretrained_model (str): Name of the existing models supported by tiatoolbox
for processing the data. Refer to [URL] for details.
for processing the data. For a full list of pretrained models, refer to the
`docs <https://tia-toolbox.readthedocs.io/en/latest/pretrained.html>`_.
By default, the corresponding pretrained weights will also be
downloaded. However, you can override with your own set of weights
via the `pretrained_weights` argument. Argument is case insensitive.
6 changes: 4 additions & 2 deletions tiatoolbox/models/engine/patch_predictor.py
@@ -79,6 +79,8 @@ class PatchPredictor:
>>> predictor = PatchPredictor(
... pretrained_model="resnet18-kather100k",
... pretrained_weights="resnet18_local_weight")
For a full list of pretrained models, refer to the
            `docs <https://tia-toolbox.readthedocs.io/en/latest/pretrained.html>`_.
batch_size (int) : Number of images fed into the model each time.
num_loader_workers (int) : Number of workers to load the data.
Take note that they will also perform preprocessing.
@@ -91,8 +93,8 @@
or `wsi`.
model (nn.Module): Defined PyTorch model.
        pretrained_model (str): Name of the existing models supported by tiatoolbox
for processing the data. Refer to
`tiatoolbox.models.classification.get_pretrained_model` for details.
for processing the data. For a full list of pretrained models, refer to the
            `docs <https://tia-toolbox.readthedocs.io/en/latest/pretrained.html>`_.
By default, the corresponding pretrained weights will also be
downloaded. However, you can override with your own set of weights
via the `pretrained_weights` argument. Argument is case insensitive.
3 changes: 2 additions & 1 deletion tiatoolbox/models/engine/semantic_segmentor.py
@@ -369,7 +369,8 @@ class SemanticSegmentor:
weights already loaded. Default is `None`. If provided,
`pretrained_model` argument is ignored.
        pretrained_model (str): Name of the existing models supported by tiatoolbox
for processing the data. Refer to [URL] for details.
for processing the data. For a full list of pretrained models, refer to the
`docs <https://tia-toolbox.readthedocs.io/en/latest/pretrained.html>`_.
By default, the corresponding pretrained weights will also be
downloaded. However, you can override with your own set of weights
via the `pretrained_weights` argument. Argument is case insensitive.