Visualizing Learning of Domain-Specific Knowledge in Neural Networks
This visualizer displays how a neural network has learnt the physics law that governs the observed data, both when that law is not explicitly enforced and when it is. Burgers' equation (u_t + u u_x - (0.01/π) u_xx = 0) is the governing partial differential equation for this demo use case. The accuracy of neural network predictions is typically reported as an error (|u_predicted - u_observed|) and rendered as plot series (u_observed vs. (x, t), u_predicted vs. (x, t)), as seen in the u_error bar plot and the u vs. (x, t) line plot. While u reflects the model's conformance with the PDE, it is not a direct measure of that conformance. Our visualizer observes and renders the learning of domain-specific knowledge (the PDE) in neural networks more precisely. The learning of Burgers' equation is represented by two series for a given x: one series of u_t vs. (x, t) and one series of -u u_x + (0.01/π) u_xx vs. (x, t). The area between these two series quantifies the non-conformance of the model with Burgers' equation. [Source Code & Data]
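The two series can be computed from any differentiable surrogate u(x, t) via automatic differentiation. The sketch below uses PyTorch and a tiny placeholder MLP (the actual model and its training are assumptions, not part of this demo's source); it evaluates u_t and -u u_x + (0.01/π) u_xx on a grid of points and integrates the absolute gap between them, which corresponds to the area rendered between the two series.

```python
import torch

# Placeholder for a trained surrogate u(x, t); a tiny untrained MLP stands in here.
model = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)

def burgers_series(model, x, t, nu=0.01 / torch.pi):
    """Return the two plotted series (u_t, -u*u_x + nu*u_xx) at points (x, t).

    Burgers' equation u_t + u u_x - nu u_xx = 0 holds exactly when the
    two returned series coincide; their gap measures non-conformance.
    """
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = model(torch.stack([x, t], dim=-1)).squeeze(-1)
    # First- and second-order derivatives via autograd.
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t, -u * u_x + nu * u_xx

# Evaluate both series along x at a fixed time slice t = 0.5.
x = torch.linspace(-1.0, 1.0, 50)
t = torch.full_like(x, 0.5)
lhs, rhs = burgers_series(model, x, t)
# Area between the two series: the visualizer's non-conformance measure.
gap = torch.trapezoid((lhs - rhs).abs(), x)
```

For an untrained network the gap is generally large; as training (with or without an explicit PDE loss term) progresses, the two series move toward each other and the area shrinks.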