Isn't a lot of insurance already based on algorithms, with no explanation given? I'm thinking of scoring/rating. People getting a lower credit rating for living in the wrong neighbourhood would at least be an identifiable reason, but I believe you can't find out exactly how they arrived at your personal score. How transparent are banks and insurers to consumers today?
Also on the HN frontpage right now is a link to a Guardian article, "how to disappear from the internet", and the top comment in that thread, about the author's difficulties dealing with the fallout of identity theft and credit card debt, also shows a complete lack of transparency.
The lack of transparency is for a good reason: if a model's parameters become known, they can be gamed and lose their predictive power.
Not only do banks etc keep their models secret from customers, they keep them secret from other departments. The credit risk strategy team, for instance, won't want to risk customer service staff 'helping' customers alter their application details to get their scores over a cut-off.
(I used to run credit risk strategy, fraud, collections, operations etc for two credit card companies)
Giving bank customers transparency into the models, as a third party, could be a good application of causal reasoning. Each individual customer has only their own parameters and a yes/no output from the bank.
A third party designed to help customers get approved could aggregate data across multiple customers, generate hypotheses about which changes would lower the bank's perceived risk for a customer (which would also require it to understand what sorts of changes customers can make easily), and test those hypotheses to refine a model.
It could optimise for revenue, paying customers for information, and receiving income if it succeeds in getting them approved.
It may not be transparent to consumers, but these algorithms are designed by actuaries, and everyone in the company understands how they work. You need an audit trail of how the premium is calculated.
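An "audit trail of how the premium is calculated" might look something like this toy sketch. The rating factors and their values are invented for illustration; the point is that a traditional rating model is just a table of multipliers, so every step of the price can be itemised.

```python
# Hypothetical rating table: a toy, auditable premium calculation in the
# style of a traditional actuarial model. Factors and values are invented.
BASE_PREMIUM = 500.0
RATING_FACTORS = {
    "driver_age_under_25": 1.40,
    "urban_postcode": 1.15,
    "no_claims_5_years": 0.80,
}

def price(applicant):
    """Return (premium, audit_trail): every factor applied is recorded."""
    premium = BASE_PREMIUM
    trail = [f"base premium: {BASE_PREMIUM:.2f}"]
    for factor, multiplier in RATING_FACTORS.items():
        if applicant.get(factor):
            premium *= multiplier
            trail.append(f"applied {factor} x{multiplier}: {premium:.2f}")
    return premium, trail

premium, trail = price({"urban_postcode": True, "no_claims_5_years": True})
print("\n".join(trail))
```

A regulator (or a court) can read that trail line by line; there is no equivalent artefact for a deep net's millions of learned weights, which is the contrast drawn below.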
But in deep learning models, the features and coefficients are not chosen by humans, and in most cases the model can't even be understood by humans. Without that understanding, I highly doubt such models will be accepted in regulated industries.