
New technique wipes out unwanted data

Date:
March 17, 2016
Source:
Lehigh University
Summary:
Machine learning systems are everywhere. They predict the weather, forecast earthquakes, provide recommendations based on the books and movies we like, and even apply the brakes on our cars when we're not paying attention. To do this, software programs in these systems calculate predictive relationships from massive amounts of data. Two researchers have developed a way to make these systems forget data faster and more effectively than current methods allow.

Machine learning systems are everywhere. They predict the weather, forecast earthquakes, provide recommendations based on the books and movies we like, and even apply the brakes on our cars when we're not paying attention.

To do this, software programs in these systems calculate predictive relationships from massive amounts of data. The systems identify these predictive relationships using advanced algorithms -- sets of rules for solving problems -- and "training data." This data is then used to construct the models and features that enable a system to determine the latest best-seller you wish to read or to predict the likelihood of rain next week.

This intricate process means that a piece of raw data often goes through a series of computations in a system. The computations and information derived by the system from that data together form a complex propagation network called the data's "lineage." The term was coined by Yinzhi Cao, an assistant professor of computer science and engineering, and his colleague, Junfeng Yang of Columbia University, who are pioneering a novel approach to make learning systems forget.
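To make "lineage" concrete, here is a toy illustration in Python (the numbers and quantities are hypothetical, invented for this sketch, not drawn from the paper): a single raw rating feeds a chain of derived values, and it is that whole chain, not just the raw value, that a forgetting system must undo.

```python
# A toy illustration of data "lineage": one raw rating flows into several
# derived quantities, and together they form the lineage that a forgetting
# system must undo. (Hypothetical example, not the authors' code.)
raw_rating = 4.0                                   # raw data: a user's movie rating

all_ratings = [3.0, 5.0, 2.0, raw_rating]
mean_rating = sum(all_ratings) / len(all_ratings)  # derived: global average (3.5)
user_bias   = raw_rating - mean_rating             # derived: per-user offset (0.5)
similarity  = user_bias * 0.8                      # derived: input to recommendations

# Deleting raw_rating alone is not enough: mean_rating, user_bias, and
# similarity all still encode its influence -- that is its lineage.
```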

Considering how important this concept is to increasing security and protecting privacy, Cao and Yang believe that demand for easily adopted forgetting systems will only grow. The two researchers have developed a way to make learning systems forget faster and more effectively than current methods allow.

Their concept, called "machine unlearning," is so promising that Cao and Yang have been awarded a four-year, $1.2 million National Science Foundation grant to develop the approach.

"Effective forgetting systems must be able to let users specify the data to forget with different levels of granularity," said Cao, a principal investigator on the project. "These systems must remove the data and undo its effects so that all future operations run as if the data never existed."

Increasing security and privacy protection

There are a number of reasons why an individual user or service provider might want a system to forget data and its complete lineage. Privacy is one.

After Facebook changed its privacy policy, many users deleted their accounts and the associated data. The iCloud photo hacking incident in 2014 -- in which hundreds of celebrities' private photos were accessed via Apple's cloud services suite -- led to online articles teaching users how to completely delete iOS photos, including the backups. New research has revealed that machine learning models for personalized medicine dosing leak patients' genetic markers. Only a small set of statistics on genetics and diseases is enough for hackers to identify specific individuals, despite cloaking mechanisms.

Naturally, users unhappy with these newfound risks want their data, and its influence on the models and statistics, to be completely forgotten.

Security is another reason. Consider anomaly-based intrusion detection systems used to detect malicious software. In order to positively identify an attack, the system must be taught to recognize normal system activity. Therefore, the security of these systems hinges on the model of normal behavior extracted from the training data. By polluting the training data, attackers pollute the model and compromise security. Once the polluted data is identified, the system must completely forget that data and its lineage in order to regain security.
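As a hypothetical sketch of that attack (my own toy example, not from the paper), consider a detector that learns the mean and standard deviation of request rates from training data and flags anything more than three standard deviations away. A handful of attacker-injected extremes widens the model until a real attack passes unnoticed:

```python
import statistics

# Toy anomaly detector: flags activity whose request rate sits far from the
# mean learned from training data. Polluting the training set with extreme
# values widens the model until real attacks look normal.
normal_traffic = [10, 12, 9, 11, 10, 13, 11]   # legitimate baseline samples
polluted = normal_traffic + [400, 420, 410]    # attacker-injected samples

def is_anomalous(rate, training, k=3.0):
    mu = statistics.mean(training)
    sigma = statistics.stdev(training)
    return abs(rate - mu) > k * sigma

print(is_anomalous(350, normal_traffic))  # True: the clean model catches the attack
print(is_anomalous(350, polluted))        # False: the polluted model waves it through
```

Simply deleting the three polluted samples from storage is not enough; the mean and standard deviation already derived from them (their lineage) must be recomputed as well.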

Widely used learning systems such as Google Search are, for the most part, only able to forget a user's raw data -- and not the data's lineage -- upon request. This is problematic for users who wish to ensure that any trace of unwanted data is removed completely, and it is also a challenge for service providers who have strong incentives to fulfill data removal requests and retain customer trust.

Service providers will increasingly need to be able to remove data and its lineage completely to comply with laws governing user data privacy, such as the "right to be forgotten" ruling issued in 2014 by the European Union's top court. In October 2014, Google removed more than 170,000 links to comply with the ruling, which affirmed users' right to control what appears when their names are searched. In July 2015, Google said it had received more than a quarter-million such requests.

Breaking down dependencies

Building on work presented at the 2015 IEEE Symposium on Security and Privacy, Cao and Yang's "machine unlearning" method is based on the fact that most learning systems can be converted into a form that can be updated incrementally without costly retraining from scratch.

Their approach inserts a layer of a small number of summations between the learning algorithm and the training data, breaking the direct dependency between the two: the learning algorithm depends only on the summations, not on individual data points. With this structure, unlearning a piece of data and its lineage no longer requires rebuilding the models and features that predict relationships between pieces of data. Simply recomputing a small number of summations removes the data and its lineage completely -- and much faster than retraining the system from scratch.
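The summation form is easiest to see in a model such as least-squares regression, whose weights depend on the training data only through two running sums. Below is a minimal Python sketch of the idea (my own illustration under that assumption -- the class name, ridge term, and demo data are invented, and this is not Cao and Yang's implementation): forgetting an example means subtracting its contribution from each sum and re-solving, with no pass over the remaining data.

```python
import numpy as np

class SummationFormRegression:
    """Least-squares model kept in 'summation form': the weights depend on
    the training data only through two running sums (a toy sketch of the
    summation idea, not the authors' code)."""

    def __init__(self, dim):
        self.dim = dim
        self.S_xx = np.zeros((dim, dim))  # sum over examples of x x^T
        self.S_xy = np.zeros(dim)         # sum over examples of y * x

    def learn(self, x, y):
        self.S_xx += np.outer(x, x)
        self.S_xy += y * x

    def unlearn(self, x, y):
        # Forgetting an example = subtracting its contribution from the sums.
        self.S_xx -= np.outer(x, x)
        self.S_xy -= y * x

    def weights(self):
        # Solve the (ridge-stabilized) normal equations from the sums alone.
        return np.linalg.solve(self.S_xx + 1e-6 * np.eye(self.dim), self.S_xy)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=1000)

model = SummationFormRegression(3)
for xi, yi in zip(X, y):
    model.learn(xi, yi)

model.unlearn(X[0], y[0])  # cheap forget: no pass over the other 999 rows
```

The key design point is that the model never touches an individual example after training: everything it needs lives in the summations, so removing one example's terms from the sums erases both the data and its downstream influence.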

Cao believes he and Yang are the first to establish the connection between unlearning and the summation form.

And it works. Cao and Yang tested their unlearning approach on four diverse, real-world systems: LensKit, an open-source recommendation system; Zozzle, a closed-source JavaScript malware detector; an open-source online social network (OSN) spam filter; and PJScan, an open-source PDF malware detector.

The success of these initial evaluations has set the stage for the next phases of the project, which include adapting the technique to other systems and creating verifiable machine unlearning to statistically test whether unlearning has indeed repaired a system or completely wiped out unwanted data.
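One simple way to picture what such verification might check (a hypothetical sketch that continues the toy regression example above; the project's actual verification techniques may well differ) is to compare the unlearned model against one retrained from scratch without the forgotten example:

```python
# Continuing the SummationFormRegression sketch above: after unlearning X[0],
# the summation-form weights should match a model retrained from scratch on
# the remaining data. (Hypothetical check, not the project's actual method.)
scratch = SummationFormRegression(3)
for xi, yi in zip(X[1:], y[1:]):
    scratch.learn(xi, yi)

assert np.allclose(model.weights(), scratch.weights())  # unlearning == retraining
```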

In their paper's introduction, Cao and Yang say that "machine unlearning" could play a key role in enhancing security and privacy and in our economic future:

"We foresee easy adoption of forgetting systems because they benefit both users and service providers. With the flexibility to request that systems forget data, users have more control over their data, so they are more willing to share data with the systems. More data also benefit the service providers, because they have more profit opportunities and fewer legal risks.

"We envision forgetting systems playing a crucial role in emerging data markets where users trade data for money, services, or other data because the mechanism of forgetting enables a user to cleanly cancel a data transaction or rent out the use rights of her data without giving up the ownership."


Story Source:

Materials provided by Lehigh University. Original written by Lori Friedman. Note: Content may be edited for style and length.

