
Feel secure with AI’s ability to read privacy policies

Let’s face it: we never read privacy policies. And of course, that’s because they aren’t actually written for you, or for any of the other billions of people who click to agree to their inscrutable legalese. Instead, those millions upon millions of words are produced for the benefit of their authors, not their readers: the lawyers who wrote those get-out clauses to shield their Silicon Valley employers.

But one group of academics has come up with a way to make privacy policies readable: artificial intelligence that’s fluent in fine print. Today, researchers at Switzerland’s Federal Institute of Technology at Lausanne (EPFL), the University of Wisconsin, and the University of Michigan announced the release of Polisis, short for “privacy policy analysis,” a new website and browser extension that uses their machine-learning-trained software to automatically read and make sense of any online service’s privacy policy, so you don’t have to.

In about thirty seconds, Polisis can read a privacy policy it’s never seen before and extract a readable summary, displayed in a graphic flow chart, of what kinds of data a service collects, where that data might be sent, and whether a user can opt out of that collection or sharing. Polisis’ creators have also built a chat interface they call Pribot that’s designed to answer questions about any privacy policy, intended as a sort of privacy-focused paralegal. Together, the researchers hope these tools can reveal how tech companies use your data, something that has long been hidden in plain sight.

Plug in the website for Pokemon Go, for example, and Polisis will instantly read its privacy policy and show you the vast array of data the game collects, from IP addresses and device IDs to location and demographics, as well as how those information sources are divided among advertising, marketing, and use by the game itself. It also shows that only a small sliver of that data is subject to clear opt-in consent.

Polisis isn’t the first attempt to use machine learning to pull human-readable information out of privacy policies. To build it, the Michigan, Wisconsin, and Lausanne researchers trained their AI on a set of 115 privacy policies that had been thoroughly analyzed and annotated by a group of Fordham Law students, as well as 130,000 more privacy policies scraped from apps on the Google Play Store.
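The task at the heart of this kind of tool is text classification: each segment of a policy is assigned to a data-practice category such as collection, sharing, or user choice. The sketch below illustrates that general idea with a toy naive Bayes bag-of-words classifier; the segments and labels are invented for illustration, and the researchers’ actual system is a neural network trained on the annotated corpus described above, not this model.

```python
# Toy sketch of Polisis-style policy analysis: classify a
# privacy-policy segment into a data-practice category.
# The labeled segments below are invented examples, NOT drawn
# from the researchers' annotated corpus or their real model.
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

# Hypothetical labeled training segments
TRAINING = [
    ("We collect your IP address and device identifiers.", "collection"),
    ("We gather demographic information when you register.", "collection"),
    ("We share your location with advertising partners.", "sharing"),
    ("Third parties may receive your usage data for ads.", "sharing"),
    ("You may opt out of marketing emails at any time.", "choice"),
    ("You can disable tracking in your account settings.", "choice"),
]

# Per-category word frequencies and category counts
word_counts = defaultdict(Counter)
cat_counts = Counter()
for text, cat in TRAINING:
    word_counts[cat].update(tokenize(text))
    cat_counts[cat] += 1

vocab = {w for counts in word_counts.values() for w in counts}

def classify(segment):
    """Pick the most likely category via naive Bayes with add-one smoothing."""
    best_cat, best_score = None, float("-inf")
    total_docs = sum(cat_counts.values())
    for cat in cat_counts:
        score = math.log(cat_counts[cat] / total_docs)
        denom = sum(word_counts[cat].values()) + len(vocab)
        for word in tokenize(segment):
            score += math.log((word_counts[cat][word] + 1) / denom)
        if score > best_score:
            best_cat, best_score = cat, score
    return best_cat

print(classify("We may share your data with ad networks."))  # prints "sharing"
```

A real system faces much harder problems than this sketch, including segmenting a long policy into coherent passages and handling dozens of fine-grained categories, which is why the authors needed a large annotated training corpus rather than a handful of keyword patterns.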

“Caring about your privacy should not mean you have to browse paragraphs and paragraphs of text,” says the University of Michigan’s Florian Schaub. But with more eyes on companies’ privacy practices, even automated ones, perhaps those data stewards will think twice before trying to bury their unsavory data collection habits beneath a mountain of legal minutiae.

