How To Visualize DNNs Dependent On The Output Class In TensorFlow?
In TensorFlow it is pretty straightforward to visualize filters and activation layers given a single input. But I'm more interested in the opposite direction: feeding a class (as a one-hot vector) and visualizing the input that would maximally activate it.
Solution 1:
The "basic" version of this is straightforward. You use the same graph as for training the network, but instead of optimizing w.r.t. the parameters of the network, you optimize w.r.t. the input (which has to be a variable with the shape of your input image). Your optimization target is the negative logit of your target class (negative because you want to maximize, but TF optimizers minimize). You should run this with a couple of different initial values for the image, since the result depends on the starting point.
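A minimal sketch of this idea in TF2-style code, assuming a small untrained Keras classifier as a stand-in for your real model (the architecture, the 28×28 input shape, and `target_class = 3` are all placeholder assumptions):

```python
import tensorflow as tf

tf.random.set_seed(0)

# Hypothetical stand-in classifier; in practice, load your trained model.
# The final Dense layer outputs raw logits (no softmax), which is what
# you want to maximize for activation maximization.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])

target_class = 3  # assumed class index for illustration

# The input image is the variable being optimized, not the network weights.
image = tf.Variable(tf.random.uniform((1, 28, 28, 1)))
start_logit = float(model(image)[0, target_class])

opt = tf.keras.optimizers.Adam(learning_rate=0.1)
for step in range(100):
    with tf.GradientTape() as tape:
        logits = model(image, training=False)
        # Negative logit: TF optimizers minimize, but we want to maximize.
        loss = -logits[0, target_class]
    grads = tape.gradient(loss, image)
    opt.apply_gradients([(grads, image)])
    # Keep pixel values in a valid display range.
    image.assign(tf.clip_by_value(image, 0.0, 1.0))

final_logit = float(model(image)[0, target_class])
```

After the loop, `image` holds an input that (locally) maximizes the target logit; as noted above, rerunning from different random initializations will produce different maximizing images.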
There are also a few related techniques; if you search for DeepDream and adversarial examples, you should find a lot of literature.