Extracting Secrets from Machine Learning Systems
This is fascinating research about how the underlying training data for a machine-learning system can be inadvertently exposed. Basically, if a machine-learning system trains on a dataset that contains secret information, in some cases an attacker can query the system to extract that secret information. My guess is that there is a lot more research to be done here.
EDITED TO ADD (3/9): Some interesting links on the subject.
Clive Robinson • March 5, 2018 7:49 AM
As the introduction to the paper says,
Once a secret is learned…
It has implications. Think about how neural networks work: learning adds weight to some paths over others, so in effect the secret, once learned, is etched into those pathways. Getting the secret out is then a matter of finding those weightings and reasoning out what they mean.
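The attack the comment describes can be sketched with a toy model. This is only an illustration under assumed conditions, not the paper's actual method: a character-bigram "model" is trained on a corpus that accidentally contains a secret, and an attacker who can only query the model's likelihood scores ranks candidate secrets and recovers the memorized one. All names and data here are hypothetical.

```python
# Toy sketch: a secret in the training data leaks through likelihood queries.
# The corpus, PIN format, and candidate set are illustrative assumptions.
from collections import defaultdict
import math

def train_bigram(text):
    # Count character-bigram transitions; this stands in for "learned weights".
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def log_likelihood(counts, s):
    # Score a string under the model, with Laplace smoothing so
    # unseen transitions don't produce log(0).
    ll = 0.0
    for a, b in zip(s, s[1:]):
        total = sum(counts[a].values())
        ll += math.log((counts[a][b] + 1) / (total + 256))
    return ll

# Training corpus that inadvertently contains a secret PIN.
secret = "PIN 4812"
corpus = "the quick brown fox. " * 5 + secret + ". lazy dog."
model = train_bigram(corpus)

# The attacker never sees the corpus; they only query likelihoods
# for candidate secrets and take the highest-scoring one.
candidates = [f"PIN {i:04d}" for i in range(0, 10000, 97)]
candidates.append(secret)
best = max(candidates, key=lambda c: log_likelihood(model, c))
print(best)  # the memorized secret scores highest
```

The point of the sketch is that nothing in the query interface mentions the secret; the bias toward it is simply "etched" into the transition counts, just as it would be into a real network's weights.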
Whilst not necessarily a simple problem to solve, it does highlight a weakness common to all such systems.
It thus gives hope on the issue of bias in AI justice models: once such models are public, people can find the biases within them and, if they have been harmed, seek restitution. That fact alone should give pause for thought to those making the purchasing decisions; they could be buying a costly liability in more ways than one.