
Commit 0087e4a

ProGamerGov authored and facebook-github-bot committed
Fix grammatical mistakes in README (#542)
Summary: I pasted the README into a Google Doc to see if the spelling and grammar check would find anything. It was able to find a few mistakes that I've now corrected.

Pull Request resolved: #542
Reviewed By: vivekmig
Differential Revision: D25221538
Pulled By: NarineK
fbshipit-source-id: c53358b507c5edd6e2f2cd8e7a147aab4d7dbdaf
1 parent 50167ad commit 0087e4a

File tree

1 file changed: +5 -5 lines changed

README.md

Lines changed: 5 additions & 5 deletions
@@ -175,12 +175,12 @@ Convergence Delta: tensor([2.3842e-07, -4.7684e-07])
 The algorithm outputs an attribution score for each input element and a
 convergence delta. The lower the absolute value of the convergence delta the better
 is the approximation. If we choose not to return delta,
-we can simply not provide `return_convergence_delta` input
+we can simply not provide the `return_convergence_delta` input
 argument. The absolute value of the returned deltas can be interpreted as an
 approximation error for each input sample.
 It can also serve as a proxy of how accurate the integral approximation for given
 inputs and baselines is.
-If the approximation error is large, we can try larger number of integral
+If the approximation error is large, we can try a larger number of integral
 approximation steps by setting `n_steps` to a larger value. Not all algorithms
 return approximation error. Those which do, though, compute it based on the
 completeness property of the algorithms.
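The README passage touched by this hunk describes tuning the `IntegratedGradients` call via `return_convergence_delta` and `n_steps`. A minimal sketch of how those arguments fit into the call, assuming `model`, `input`, and `baseline` are the `ToyModel` tensors the README sets up earlier:

```python
from captum.attr import IntegratedGradients

ig = IntegratedGradients(model)

# With return_convergence_delta=True the call returns (attributions, delta);
# leaving the argument out returns the attributions alone.
attributions, delta = ig.attribute(
    input,
    baselines=baseline,
    target=0,
    return_convergence_delta=True,
    n_steps=200,  # increase if the absolute delta (approximation error) is large
)
```

A larger `n_steps` evaluates more points along the path integral, so the approximation tightens at the cost of extra forward and backward passes.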
@@ -224,7 +224,7 @@ in order to get per example average delta.
 
 
 Below is an example of how we can apply `DeepLift` and `DeepLiftShap` on the
-`ToyModel` described above. Current implementation of DeepLift supports only
+`ToyModel` described above. The current implementation of DeepLift supports only the
 `Rescale` rule.
 For more details on alternative implementations, please see the [DeepLift paper](https://arxiv.org/abs/1704.02685).
 
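The sentence corrected here introduces the README's DeepLift example. A minimal sketch of that usage under the same `ToyModel` assumptions; the `Rescale` rule is applied implicitly (there is no rule argument to pass), and `baseline_dist` is an assumed name for the batch of baseline samples that `DeepLiftShap` expects:

```python
from captum.attr import DeepLift, DeepLiftShap

# DeepLift against a single baseline; only the Rescale rule is implemented.
dl = DeepLift(model)
attributions, delta = dl.attribute(
    input, baseline, target=0, return_convergence_delta=True
)

# DeepLiftShap averages DeepLift attributions over a distribution of baselines,
# so it takes a batch of baseline samples (here the assumed `baseline_dist`).
dl_shap = DeepLiftShap(model)
attributions, delta = dl_shap.attribute(
    input, baseline_dist, target=0, return_convergence_delta=True
)
```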
@@ -286,7 +286,7 @@ In order to smooth and improve the quality of the attributions we can run
 to smoothen the attributions by aggregating them for multiple noisy
 samples that were generated by adding gaussian noise.
 
-Here is an example how we can use `NoiseTunnel` with `IntegratedGradients`.
+Here is an example of how we can use `NoiseTunnel` with `IntegratedGradients`.
 
 ```python
 ig = IntegratedGradients(model)
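The diff truncates the README's code block right after constructing `ig`, so here is a sketch of how the `NoiseTunnel` wrapper is typically applied on top of it. Argument values are illustrative, and the sample-count argument is spelled `nt_samples` in recent Captum releases (older ones used `n_samples`):

```python
from captum.attr import IntegratedGradients, NoiseTunnel

ig = IntegratedGradients(model)
nt = NoiseTunnel(ig)

# SmoothGrad-style smoothing: add Gaussian noise to the input several times,
# attribute each noisy copy, and average the resulting attributions.
attributions, delta = nt.attribute(
    input,
    nt_type='smoothgrad',
    stdevs=0.02,
    nt_samples=4,
    baselines=baseline,
    target=0,
    return_convergence_delta=True,
)
```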
@@ -338,7 +338,7 @@ It is an extension of path integrated gradients for hidden layers and holds the
 completeness property as well.
 
 It doesn't attribute the contribution scores to the input features
-but shows the importance of each neuron in selected layer.
+but shows the importance of each neuron in the selected layer.
 ```python
 lc = LayerConductance(model, model.lin1)
 attributions, delta = lc.attribute(input, baselines=baseline, target=0, return_convergence_delta=True)

0 commit comments
