Re: Transparency...
How a neural network arrived at an outcome (the algorithm, its parameters and its input) can easily be recorded. Understanding why the algorithm gave a particular output is much harder.
1. Data is used, with another algorithm, to set the parameters of the neural network. There are lots of parameters, and they all depend on each other and on the data in complicated ways. Changing one parameter a little may, or may not, have a big impact on all of the other parameters. This training algorithm takes a computer ages to run. Finding out and understanding why the parameters ended up set the way they are is a huge job, perhaps impossibly huge for a human.
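To make the interdependence concrete, here is a toy sketch (my own made-up data and architecture, not any real system): train the same tiny network twice, nudging a single initial weight by one part in a thousand the second time, and compare the final parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                               # made-up inputs
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float).reshape(-1, 1)

def train(nudge=0.0, steps=500, lr=0.5):
    rng = np.random.default_rng(1)                          # same init both times...
    W1 = rng.normal(scale=0.5, size=(3, 8))
    W2 = rng.normal(scale=0.5, size=(8, 1))
    W1[0, 0] += nudge                                       # ...except one tiny nudge
    for _ in range(steps):
        h = np.tanh(X @ W1)                                 # hidden layer
        p = 1 / (1 + np.exp(-(h @ W2)))                     # sigmoid output
        dout = (p - y) * p * (1 - p) / len(X)               # backprop of mean squared error
        gW2 = h.T @ dout
        gW1 = X.T @ ((dout @ W2.T) * (1 - h**2))
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

W1a, W2a = train()
W1b, W2b = train(nudge=1e-3)
# The nudge to one weight propagates through training into every parameter,
# because each gradient step depends on all of the current weights at once.
print(np.max(np.abs(W1a - W1b)), np.max(np.abs(W2a - W2b)))
```

The point is not the particular numbers; it is that the final setting of every parameter is entangled with the initial setting of every other one, via hundreds of interacting update steps.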
2. Your input is fed to the neural network. It proceeds through the network, with numbers growing and shrinking in complicated ways according to the parameters, until an output is generated at the end. If you are really careful, you might be able to say that certain parameters fixed the decision and others were not so important. With more effort you might be able to find out whether the decision was robust to small changes in the parameters or in your input. But what I think you really want to know is why the parameters are set the way that they are. In a complicated problem, nobody knows.
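The robustness check I have in mind can be sketched like this (again a toy network with made-up weights, not a standard tool): fix the network, then jiggle the input and the parameters a little, many times, and count how often the decision flips.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 8))                 # made-up, fixed parameters
W2 = rng.normal(size=(8, 1))

def decide(x, W1, W2):
    # Forward pass, thresholded into a yes/no decision.
    return (np.tanh(x @ W1) @ W2)[0] > 0

x = np.array([0.5, -1.0, 2.0])               # made-up input
base = decide(x, W1, W2)

# How often does a small change to the input flip the decision?
input_flips = sum(
    decide(x + rng.normal(scale=0.05, size=3), W1, W2) != base
    for _ in range(1000))

# How often does a small change to the parameters flip it?
param_flips = sum(
    decide(x, W1 + rng.normal(scale=0.05, size=W1.shape),
              W2 + rng.normal(scale=0.05, size=W2.shape)) != base
    for _ in range(1000))

print(input_flips, param_flips)              # flip counts out of 1000
```

This tells you whether this particular decision was fragile, which is useful, but it still says nothing about why the parameters took the values they did.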