v0.3: Explainer API #35

Merged 8 commits on Mar 16, 2019
Update README.md
Marco Ancona committed Mar 16, 2019
commit 472685efdd454b94d6ff52936f158c5a5769f52e
23 changes: 23 additions & 0 deletions README.md
@@ -146,6 +146,29 @@ de.explain('method_name', T, X, xs)

**Softmax**: if the network's last activation is a Softmax, it is recommended to target the activations *before* this normalization.

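With a Keras classifier, for example, this can be done by keeping the softmax as a separate `Activation` layer, so that the pre-softmax logits remain available as a layer output. A minimal, hypothetical sketch (the model below is illustrative only):

```python
# Hypothetical Keras model: the softmax is kept as a separate layer so that
# the pre-softmax logits tensor can be targeted by DeepExplain.
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    Dense(128, activation='relu', input_shape=(784,)),
    Dense(10),               # logits (no activation)
    Activation('softmax')    # final normalization
])

# Within a DeepExplain context, target the logits instead of the softmax output:
# logits = model.layers[-2].output
# attributions = de.explain('saliency', logits, model.inputs[0], xs)
```
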
### Performance: Explainer API
If you need to run `explain()` multiple times (for example, when new data to be processed with the same model arrives over time), it is recommended that you use the Explainer API. This provides a way to *compile* the graph operations needed to generate the explanations and to *evaluate* this graph, in two separate steps.

Within a DeepExplain context (`de`), call `de.get_explainer()`. This method takes the same arguments as `explain()`, except `xs`, `ys` and `batch_size`. It returns an explainer object (`explainer`) which provides a `run()` method. Call `explainer.run(xs, [ys], [batch_size])` to generate the explanations. Calling `run()` multiple times does not add new operations to the computational graph.


```python
# Normal API:

for i in range(100):
    # The following line will become slower and slower, as new operations
    # are added to the computational graph at each iteration
    attributions = de.explain('saliency', T, X, xs[i], ys=ys[i], batch_size=3)

# Use the Explainer API instead:

# First, create an explainer
explainer = de.get_explainer('saliency', T, X)
for i in range(100):
    # Then generate explanations for new data without slowing things down
    attributions = explainer.run(xs[i], ys=ys[i], batch_size=3)
```


### NLP / Embedding lookups
The most common cause of `ValueError("None values not supported.")` is that `run()` is called with a `tensor_input` and a `target_tensor` that are disconnected in the backpropagation graph. This commonly happens when an embedding lookup layer is used, since the lookup operation does not propagate the gradient. To generate attributions for NLP models, the input to DeepExplain should be the result of the embedding lookup rather than the original model input. Attributions for each word are then found by summing along the appropriate dimension of the resulting attribution matrix.

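As a minimal sketch of this workflow, assuming a TF1-style Keras model whose first layer is an `Embedding` (the names `model` and `xs_tokens` below are hypothetical):

```python
from keras import backend as K
from deepexplain.tensorflow import DeepExplain

with DeepExplain(session=K.get_session()) as de:
    # The output of the embedding lookup becomes the "input" for DeepExplain
    embedding_out = model.layers[0].output   # (batch, seq_len, emb_dim)
    target = model.layers[-1].output         # target tensor (ideally pre-softmax, see above)

    # Turn the token ids into embedding vectors first
    lookup = K.function([model.input], [embedding_out])
    xs_embedded = lookup([xs_tokens])[0]

    attributions = de.explain('grad*input', target, embedding_out, xs_embedded)

# Per-word scores: sum over the embedding dimension
word_attributions = attributions.sum(axis=-1)
```
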