Conversation

@mostafajahanifar (Collaborator) commented Oct 21, 2021

A new example notebook is added based on the semantic segmentation functionality. In this notebook I tried to show two things:
1- How easy it is to use pretrained segmentation models
2- How users can use their own segmentation models in the process.

I tried to explain all the concepts as simply and coherently as possible.

TODO:

  • Needs to be checked on a local machine.
  • A small breast cancer sample needs to be selected for the WSI prediction part.
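
For reference, the basic flow the notebook demonstrates looks roughly like the sketch below. This is only a sketch: the image path is a placeholder, the pretrained model name is the one used in the notebook, and the import path follows the tiatoolbox.models.controller layout in this PR, so exact names and signatures may differ between toolbox versions.

```python
from tiatoolbox.models.controller.semantic_segmentor import SemanticSegmentor

# Instantiate a segmentor from the toolbox's pretrained model zoo.
segmentor = SemanticSegmentor(
    pretrained_model="fcn_resnet50_unet-bcss",  # breast-cancer semantic segmentation model
    batch_size=4,
)

# Run inference on an image tile; predictions are saved under `save_dir`
# and a list of (input, output) path pairs is returned.
output = segmentor.predict(
    ["sample_tile.png"],  # placeholder path
    mode="tile",
    save_dir="sample_tile_results/",
    on_gpu=True,
)
```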

@review-notebook-app

Check out this pull request on ReviewNB to see visual diffs and provide feedback on Jupyter Notebooks.

review-notebook-app bot commented Oct 22, 2021

shaneahmed commented on 2021-10-22T09:19:28Z
----------------------------------------------------------------

Please uncomment the cell as you have already said it is optional.


DavidBAEpstein commented on 2021-10-22T15:04:55Z
----------------------------------------------------------------

The output cell should be shown only if the code cell is not commented out; otherwise the output may confuse users.

Another rather different point (which should be attended to in another PR, not in this one) is that some of the notebooks, maybe this one, turn out to need a slightly different requirements file when running on a Mac. Srijay helped me look into this. A Mac with a GPU will again be different, I think. John P told me he would look into this, but he is doing so many different things... I will probably do some tests when I have time, and raise this as an issue.

mostafajahanifar commented on 2021-10-25T12:13:49Z
----------------------------------------------------------------

Done.

@shaneahmed (Member) left a comment

Thanks @mostafajahanifar. It looks good to me.
About adding new models, let us discuss this in the meeting. I have some ideas.

review-notebook-app bot commented Oct 22, 2021

vqdang commented on 2021-10-22T09:34:15Z
----------------------------------------------------------------

Should have just refactored these out into a def.


DavidBAEpstein commented on 2021-10-22T15:18:17Z
----------------------------------------------------------------

Should it be a def in a .py module, because it would be of use in other contexts?

mostafajahanifar commented on 2021-10-25T12:43:50Z
----------------------------------------------------------------

Thanks for the suggestion. Done.

review-notebook-app bot commented Oct 22, 2021

vqdang commented on 2021-10-22T09:34:16Z
----------------------------------------------------------------

Do not link the YAML file. We should link to the readthedocs page here, but because we are still refactoring the docs, it's better to redirect to the model page or simply mention the names directly here.


review-notebook-app bot commented Oct 22, 2021

vqdang commented on 2021-10-22T09:34:16Z
----------------------------------------------------------------

Add a title to the raw-image subplot above and a legend for the colour code in the image below.


mostafajahanifar commented on 2021-10-25T14:40:50Z
----------------------------------------------------------------

I deliberately didn't use the overlay generation function here: I didn't want to make the visualization too complex, and I also wanted to show the processed predictions in solid colours. However, in the revised version I have added titles (including class titles) to the images and explained in the text which colour in the processed prediction corresponds to which class.

review-notebook-app bot commented Oct 22, 2021

vqdang commented on 2021-10-22T09:34:17Z
----------------------------------------------------------------

> The process is quite similar to what we have done for tiles, we don't even need to instantiate a new SemanticSegmentor.

You are instantiating below


mostafajahanifar commented on 2021-10-25T14:41:07Z
----------------------------------------------------------------

Corrected :-D

@shaneahmed added the documentation (Improvements or additions to documentation) label on Oct 22, 2021

Collaborator (via ReviewNB)

Somehow I managed to delete Shan's comment. Bad marks to ReviewNB for allowing this.

I only typed my comments once, so why does ReviewNB repeat them and put the words into Shan's mouth? More bad marks to ReviewNB.


Collaborator Author (via ReviewNB)

David,

Shan's comment is still here, and your comment is shown under your name. It's odd that ReviewNB displays the comments so badly for you.

Anyway, both Shan's comment and yours are valid. I will correct the notebook accordingly.

review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:15Z
----------------------------------------------------------------

Many users will not know the meaning of "semantic segmentation". Please find a convenient place to give a brief explanation and/or a link to an explanation.


mostafajahanifar commented on 2021-10-25T12:13:30Z
----------------------------------------------------------------

Thanks for mentioning this, David. I have added extra explanation and references on the semantic segmentation task.

review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:15Z
----------------------------------------------------------------

the model we are going to demonstrate can predict the probability

-> the model we demonstrate can estimate the probability

"on the mode"?? Should this be "on the model"? If so, the sentence should be rephrased:

"More information on the model and the dataset used for training can be found"

However, if you really mean "mode", then very substantial rephrasing is needed to make sense of what is written.


review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:16Z
----------------------------------------------------------------

I looked for the misspelling of "output" by running a Unix find command: the numbers on the left are line numbers. Many of these are comments, not affecting the notebook under review, but it would be a good idea to change the comments as well. I think that one of five hits is code and four are comments.

464:       about input and ouput placement of patches. Check

945:       about input and ouput placement of patches. When provided,

1092:        logging.info(f"--Ouput: {wsi_save_path}")

1270:       about input and ouput placement of patches. When provided,

./tiatoolbox/models/controller/semantic_segmentor.py

170:      output (dict): Ouput generated by the model.

./tiatoolbox/models/controller/patch_predictor.py


review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:17Z
----------------------------------------------------------------

"There are many parameters associated with SemanticSegmentor which we will explain their use throughout this notebook." should be rephrased. I suggest

"There are many parameters associated with SemanticSegmentor.We explain these as we meet them, while proceeding through the notebook."

Here we just explain the ones -> Here we explain only the ones

our library of pretrained models on various segmentation tasks -> our library of models pretrained on various segmentation tasks

are using -> use

Here is an explanation from Google of one aspect of the way I am correcting your English:

We use the present simple tense (like "we use") when we want to talk about fixed habits or routines – things that don't change. We use the present continuous to talk about actions which are happening (like "we are using") at the present moment, but will soon finish. Note the use of present continuous in "I am correcting" in the first line of this paragraph.

Change "If you are using GPU for the computations, you should be careful not to set the batch_size bigger than the amount that the GPU memory limit allows." to

"If you use a GPU, be careful not to set the batch_size larger than the GPU memory limit would allow."

function, automatically process all the images in the input list and save the results->

function automatically processes all the images on the input list and saves the results

(Note: no comma)

to properly use the predict method -> to use the predict method properly

Because plain images only have baseline layer, units="baseline" and resolution=1.0 options should be always kept like this when working in 'tile' mode (these are function's default values, you can ignore setting them at all). ->

Because plain images only have a baseline layer, the options in this case should always be the defaults units="baseline" and resolution=1.0 and, as defaults, can be omitted.

we know that the size out output maps from

I don't know what this means

Also, how would the user know that the output size is half of the input size?

On Google, I found https://medium.datadriveninvestor.com/review-u-net-resnet-the-importance-of-long-short-skip-connections-biomedical-image-ccbf8061ff43

This shows an input size of 512x512 and output of the same size.

Note that, if 512 goes to 256, the ratio in size would not be 2 but 4.

I couldn't find any information on the internet about fcn_resnet50_unet-bcss

If this is a name special to tiatoolbox, then my question above "how would the user know?" becomes even more relevant.

during processing a WSI, otherwise, the loop -> during processing a WSI. Otherwise, the loop 

make sure that the prediction is working -> make sure that prediction is working

returns a list of the paths to its inputs and the processed outputs saved on the disk ->

returns a list of the paths to its inputs and to the processed outputs saved on the disk

Which can be used for loading the results in case you want to process or visualize them. ->

This can be used for loading the results for processing and visualisation.
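
As a concrete (and hedged) illustration of that last point, reloading the saved results might look roughly like the sketch below; the directory name and the .npy extension are assumptions, so use whatever save_dir and file names predict actually produced on disk.

```python
import glob

import numpy as np

# Read back the raw predictions that `predict` saved to disk.
for npy_path in sorted(glob.glob("sample_tile_results/*.npy")):
    raw_prediction = np.load(npy_path)
    print(npy_path, raw_prediction.shape)  # e.g. (H, W, num_classes)
```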


mostafajahanifar commented on 2021-10-25T13:17:05Z
----------------------------------------------------------------

we know that the size out output maps from

Following the comments raised by you, I have changed the whole part in the revised notebook.

Also, how would the user know that the output size is half of the input size?

I have tried to make it clearer, but the answer is what the sentence says: this is knowledge the user has of the model he/she is using. If the user is using Tiatoolbox pretrained models, these parameters are already set to suitable defaults. However, if users want to use their own models, they should of course know the desired input and output shapes for their model, and they can set them here accordingly. Here I present fcn_resnet50_unet-bcss as an example; it was developed by the Tiatoolbox team, so we know the model's specification and can set the parameters accordingly.

Note that, if 512 goes to 256, the ratio in size would not be 2 but 4.

I have changed the term "size" to "shape" (which is actually the right term here) to avoid confusion. Thanks for pointing these out.

review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:18Z
----------------------------------------------------------------

we used a very simple -> we use a very simple

that finds for each pixel the class index that has the highest value among all. ->

that finds, for each pixel, the class index that has the highest estimated probability.
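
In code, that post-processing step is just an argmax over the class axis. A minimal sketch, assuming the raw prediction has been loaded as an H x W x C array (the file name is a placeholder):

```python
import numpy as np

# Raw model output: one channel of scores/probabilities per class.
raw_prediction = np.load("sample_tile_results/sample_tile.raw.0.npy")  # placeholder name

# For each pixel, pick the class with the highest estimated probability.
class_map = np.argmax(raw_prediction, axis=-1)  # H x W array of class indices
```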


review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:19Z
----------------------------------------------------------------

if there is not masks input -> if there is no masks input

regions segmentations -> region segmentations

When a noun is used as an adjective, the default is to use the singular form.

"There are many car parks in Coventry" NOT "cars parks". There are many exceptions to this rule.

function. masks argument similar to the imgs should be a list of paths to the desired image masks. ->

function: the masks argument is a list of paths to the desired image masks and the imgs argument

I use the colon to avoid beginning a sentence with a lower case letter.
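
For illustration, a call using both arguments might look like the sketch below. All paths are placeholders; masks[i] restricts the processing of imgs[i] to the masked (e.g. tissue) region.

```python
from tiatoolbox.models.controller.semantic_segmentor import SemanticSegmentor

segmentor = SemanticSegmentor(pretrained_model="fcn_resnet50_unet-bcss", batch_size=4)

# Placeholder paths: each entry in `masks` is paired with the same-index entry in `imgs`.
imgs = ["wsi_1.svs", "wsi_2.svs"]
masks = ["wsi_1_mask.png", "wsi_2_mask.png"]

output = segmentor.predict(
    imgs,
    masks=masks,
    mode="wsi",
    save_dir="wsi_results/",
    on_gpu=True,
)
```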


review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:19Z
----------------------------------------------------------------

the same way we did for tiles (taking argmax of model output), ->

the same way as we did for tiles (taking argmax of the model output),

and extracted its overview -> and extract its overview

the overview has been extracted ->. the overview is extracted

as the prediction output has been saved in -> as the saved prediction output

which was accessed -> accessed

we used overlay_patch_prediction helper function ->

we used the overlay_patch_prediction helper function

related to model’s input/output when you are selecting to work ->

related to a model’s input/output when you decide to work

pretrained models (they will be set automatically based on their optimal values) and here we just show the parameters to explain how they work. ->

pretrained models. (They will be set automatically, based on their optimal values.) Here we explain how the parameters work, so we need to show them explicitly.

can be done as easy as: -> can be done as easily as:

of this example notebook -> of this notebook
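
As a rough sketch of the overview-plus-overlay idea discussed above (the WSI path, prediction file name, and thumbnail resolution are all placeholders, and plain matplotlib is used here instead of the toolbox's overlay helper mentioned in the notebook):

```python
import cv2
import matplotlib.pyplot as plt
import numpy as np
from tiatoolbox.wsicore.wsireader import WSIReader

# Low-resolution overview of the slide (placeholder path).
reader = WSIReader.open("small_breast_sample.svs")
overview = reader.slide_thumbnail(resolution=1.25, units="power")

# Processed prediction (placeholder file name), resized to match the overview.
class_map = np.argmax(np.load("wsi_results/0.raw.0.npy"), axis=-1).astype("uint8")
class_map = cv2.resize(
    class_map, (overview.shape[1], overview.shape[0]), interpolation=cv2.INTER_NEAREST
)

plt.imshow(overview)
plt.imshow(class_map, alpha=0.5)  # simple overlay of class indices on the overview
plt.axis("off")
plt.show()
```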


review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:20Z
----------------------------------------------------------------

Although we at TIACentre are trying to extend the number of pretrained models in the toolbox every day to cover more tasks and tissue types, users can use their own models in the Tiatoolbox inference pipeline. ->

At the TIACentre we are extending the number of pretrained models in the toolbox as fast as we can, to cover more tasks and tissue types. Nevertheless, users may need to use their own models in the Tiatoolbox inference pipeline.

It means that a researcher in computational pathology does not need to worry about programing for WSI processing, patch extraction, prediction aggregation, and multi-processing handling. ->

In this case, TIAtoolbox removes the burden of programming WSI processing, patch extraction, prediction aggregation, and multi-processing handling.

Things even get more complicated when you want to do projects at scale. Here comes Tiatoolbox to rescue! ->

Projects at scale provide further complications. But TIAtoolbox comes to the rescue!

the Tiatoolbox inference pipeline and we will show you how you can do that. ->

the TIAtoolbox inference pipeline. We show you how.

Here we use a generic UNet architecture ->

As an illustration of the technique, we use a generic UNet architecture

implemented in the Tiatoolbox and load our pretrained weight more tissue segmentation into it (wights are already downloaded in the "Download" section). ->

implemented in the Tiatoolbox. The section "Downloading the required files" above describes downloading the weights pretrained for segmentation, and these are loaded into the model.


review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:21Z
----------------------------------------------------------------

Remember this is just an example CNN model, you can use any model of your choice, just remember that in order to use that model in the SemanticSegmentor, model weight should be already loaded because when you pass a model argument to the SemanticSegmentor, it ignores the pretrained_model and pretrained_weights arguments (remember not to confuse that pretrained_weights is associated with pretrained_model not model). ->

This is just an example, and you can use any CNN model of your choice. Remember that, in order to use SemanticSegmentor, model weights should already be loaded: when you pass a model argument to SemanticSegmentor, it ignores the pretrained_model and pretrained_weights arguments. The argument pretrained_weights is associated with the argument pretrained_model and not with the argument model.

@mostafa please check the preceding sentence carefully. It is a confusing situation and a confusing sentence. What you say, or rather my interpretation of what you say, is clearly contradicted by the pseudo-code you present below.

Very well, not that we have our model in place, let's create our segment. ->

Now that we have our model in place, let's create our segment.???

I don't know what this means.

Perhaps we are creating a segmentation. A single gland could be called a "segment", though this would be unusual. Several glands could NOT be called a segment. I Googled "segment deep learning". There were many hits. They all mentioned "segmentation", but I never saw the word "segment".


mostafajahanifar commented on 2021-10-25T15:20:25Z
----------------------------------------------------------------

Thanks David for pointing out this confusion. I have revised the notebook based on your suggestion and tried to clear the ambiguity.
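
To make the order of operations concrete, here is a hedged sketch of the custom-model route. The tiny network, the weights path, and the number of classes are hypothetical placeholders (a real model should follow the toolbox's model interface, as the notebook's UNet example does); the point is only that the user loads the weights before handing the model to SemanticSegmentor, which then ignores pretrained_model and pretrained_weights.

```python
import torch
import torch.nn as nn

from tiatoolbox.models.controller.semantic_segmentor import SemanticSegmentor


class TinySegModel(nn.Module):
    """Hypothetical stand-in for the user's own segmentation network."""

    def __init__(self, num_classes=5):
        super().__init__()
        self.head = nn.Conv2d(3, num_classes, kernel_size=1)

    def forward(self, x):
        return self.head(x)


model = TinySegModel()
# Load your own weights first (placeholder path); SemanticSegmentor will not do it for you.
model.load_state_dict(torch.load("my_pretrained_weights.pth", map_location="cpu"))

# Once `model` is passed, `pretrained_model` / `pretrained_weights` are ignored.
segmentor = SemanticSegmentor(model=model, batch_size=4)
```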

review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:21Z
----------------------------------------------------------------

Parameters of IOSegmentorConfig are quite self-explanatory, but let's have look at their definition: -> Parameters of IOSegmentorConfig have self-explanatory names, but let's have a look at their definition:

input_resolutions: specifies the resolution of each input head of model in the form of a dictionary must be in the same order as target model.forward().

This is a bit strange. The order of entries in a python dictionary cannot be counted on. So what does this mean? Are there potentially many dictionaries? It is unclear what "more than one input/output" means. I can imagine several input formats, with only one output format, and several output formats, with only one input. The number of inputs need not equal the number of outputs. A particular input does not need to be associated with a particular output. Possibly you mean that the values associated to a key are in an ordered list. Consider saying something about the keys and values, to make this clearer.

There is confusion between the usage of "resolution" in microscopy and in TV/electronics. If the optical resolution is large, then details are not observable, only gross characteristics. Unfortunately, TIAtoolbox has chosen terminology that does not help. So we need to be specially careful to avoid this ambiguity. Using two sentences, rather than one, just makes matters worse.

Below, I give my suggestion. But you may prefer some other form of words.

Commonly, we desire the resolution (size) of the output segmentation map to be smaller than the input map. For example, we don't need the tissue segmentation results to be as high-resolution as input WSI. Therefore, this parameter can be set based on the model, task, and desired output size. ->

It is only to be expected that less detail is available from the output than from the input. The setting of this parameter will depend on the model and task, and on how much detail is desired.

on how many input images we like -> on as many input images as we like


mostafajahanifar commented on 2021-10-25T15:32:49Z
----------------------------------------------------------------

The order of entries in a python dictionary cannot be counted on. So what does this mean? 

You are right. However, input_resolutions is a list of dictionaries! In a list, the order of items can be counted on. I have reworded the explanation here to clear up the confusion.

 It is unclear what "more than one input/output" means. I can imagine several input formats, with only one output format, and several output formats, with only one input. The number of inputs need not equal the number of outputs. A particular input does not need to be associated with a particular output. Possibly you mean that the values associated to a key are in an ordered list. Consider saying something about the keys and values, to make this clearer.

We can have a model with multiple inputs and multiple outputs. For example, consider a model that accepts images at 20x and 5x as inputs and returns a gland mask and gland boundaries as two different outputs. As I mentioned earlier, the parameter input_resolutions is a list of dictionaries whose items refer to the different inputs of the network. Anyway, I have changed the text to avoid confusion. Thanks for pointing this out.
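
For readers of this thread, here is a sketch of what such a multi-input, multi-output configuration could look like; the resolutions and shapes are made-up values, and the import path follows this PR's layout.

```python
from tiatoolbox.models.controller.semantic_segmentor import IOSegmentorConfig

# Hypothetical model with two input heads (roughly 20x and 5x) and two output
# heads (e.g. gland mask and gland boundary). The list order must match the
# order expected by the model's forward().
ioconfig = IOSegmentorConfig(
    input_resolutions=[
        {"units": "mpp", "resolution": 0.5},  # ~20x input
        {"units": "mpp", "resolution": 2.0},  # ~5x input
    ],
    output_resolutions=[
        {"units": "mpp", "resolution": 0.5},  # gland mask head
        {"units": "mpp", "resolution": 0.5},  # gland boundary head
    ],
    patch_input_shape=[1024, 1024],
    patch_output_shape=[512, 512],
    stride_shape=[512, 512],
    save_resolution={"units": "mpp", "resolution": 2.0},
)
```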

review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:22Z
----------------------------------------------------------------

Let's see how our model worked after all:

Let's see how well our model worked:


review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:23Z
----------------------------------------------------------------

To once again see how easy it is to use an external model in Tiatoolbox semantic segmentation class to do inference on a set of images/WSIs, let summarize what we have been saying so far in a pseudo-code, as below: ->

To once again see how easy it is to use an external model in Tiatoolbox's semantic segmentation class, we summarize in pseudo-code, as below:


review-notebook-app bot commented Oct 24, 2021

DavidBAEpstein commented on 2021-10-24T16:34:23Z
----------------------------------------------------------------

Feel free to play around with the parameters, models, and testing new images in this example. Currently, we are extending our collection of pre-trained models. To keep a track of them, make sure to follow our releases and here. We welcome any trained model in computational pathology (in any task) to be added to the Tiatoolbox. So if you have such a model (in Pytorch) and want to contribute, please contact us or simply create a PR on our Github page.

->

Feel free to play around with the parameters and models, and to experiment with new images. Currently, we are extending our collection of pre-trained models. To keep track of them, make sure to follow our releases. You can also check here. We welcome any trained model in computational pathology (in any task) for addition to Tiatoolbox. If you have such a model (in PyTorch) and want to contribute, please contact us or simply create a PR on our GitHub page.


@DavidBAEpstein (Collaborator) left a comment

I have made many comments (always on the text cells, never on the code cells) on ReviewNB and I have "submitted" these comments via ReviewNB. However, GitHub does not seem to know anything about this submission. I'm not happy about that.
I have not yet tried to run the notebook. I would like to try this on various platforms, for example Colab with GPU, Colab without GPU, the same for Kaggle, on my personal Mac, and on Jalapeno. However, this will take time, so I have submitted my comments on the text cells now, in order to save time. These comments can be addressed or objected to while I find time to run the notebooks on various platforms.


@mostafajahanifar (Collaborator Author) commented Oct 25, 2021

Thanks @DavidBAEpstein for the thorough review and well-pointed comments. I have tried to address all of your comments in the revised notebook. That helped improve the notebook's quality a lot; I appreciate it.
Re testing the notebook, it would be fantastic if you could test it, although you might have trouble making it work on a Mac. I just want to let you know that the notebook has already been tested on Colab, with and without GPU.

review-notebook-app bot commented Oct 25, 2021

shaneahmed commented on 2021-10-25T16:52:45Z
----------------------------------------------------------------

We do have download_data in utils.misc. Can we not use that?


mostafajahanifar commented on 2021-10-25T17:49:15Z
----------------------------------------------------------------

Good call, thanks. I will rewrite this section.
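
For context, using the existing helper would look roughly like this (a sketch: the URL and destination path are placeholders, and the download_data(url, save_path) signature is assumed):

```python
from tiatoolbox.utils.misc import download_data

# Placeholder URL and destination; fetches the file if it is not already present.
download_data(
    "https://example.org/sample_wsi.svs",
    "downloads/sample_wsi.svs",
)
```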


codecov bot commented Oct 25, 2021

Codecov Report

Merging #167 (0b006b0) into develop (0a64d15) will not change coverage.
The diff coverage is n/a.

❗ Current head 0b006b0 differs from pull request most recent head 57cc9ee. Consider uploading reports for the commit 57cc9ee to get more accurate results
Impacted file tree graph

@@           Coverage Diff            @@
##           develop     #167   +/-   ##
========================================
  Coverage    99.79%   99.79%           
========================================
  Files           40       40           
  Lines         2955     2955           
  Branches       492      492           
========================================
  Hits          2949     2949           
  Misses           1        1           
  Partials         5        5           
Impacted Files Coverage Δ
tiatoolbox/models/architecture/unet.py 100.00% <0.00%> (ø)
tiatoolbox/models/architecture/utils.py 100.00% <0.00%> (ø)
tiatoolbox/models/controller/semantic_segmentor.py 100.00% <0.00%> (ø)

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 0a64d15...57cc9ee.

@shaneahmed (Member) left a comment

@shaneahmed merged commit 4542f1b into develop on Oct 27, 2021
@shaneahmed deleted the example-semantic branch on October 29, 2021 at 11:58