A powerful VENN Diagram - The Internet and Privacy

It’s true: the younger generation of 20-somethings over-shares.

Not just their birthdates and addresses, but also what they listen to, movies they like to watch, people they like, where they hang out, places they go (or are planning to go to), businesses they used that were awesome, and those that “failed” in serving them.

Years ago, when I was in awe of the amount of personal information people shared voluntarily on social networks, I theorized that people over-share when they under-care. Or that people who shared more had less to lose or hide; as in, if you’re young, you don’t yet have much in the way of personal secrets or financial assets you’d want to protect, and you probably have less of a partitioned work/personal separation to worry about.

Today, I think this is a more nuanced phenomenon, and not necessarily strictly generational. On LinkedIn, for example, people across multiple generations share their work histories and business affiliations, and many publicly share personal contact information.

P is for Privacy

What is problematic is when personal information is collected on behalf of a user and then “shared” or “used” without their permission or clear understanding.

Worse yet is when personal information is used and shared with permission the user granted, but without the user actually understanding a) what piece of information they granted access to, and b) who exactly they granted that access and “permission to share” to.

One typical response from colleagues in the information security community on this matter is that “people need to know their privacy rights,” but is that realistic or reasonable at this point?

Consider what happens when an average user installs an application on their smart Android device. They first search for an app, based on some search criteria, inside an application store (such as Google Play). They read the description → look at a couple of screenshots → read some feedback → decide it looks good → install the app.

Here’s where it gets tricky.

Android presents the user with all the permissions the app in question is asking for.

But does the average user understand what granting this app permission to “Phone” actually means? How about permission to access “Identity”? What does access to “SMS” actually mean?

Another key point: psychologically, once we’ve made the decision to install an app, we turn into kids in a candy store… we just want to get to the goods as soon as possible. The app permission screen is just a pop-up.

And so, we select Accept.

Each of these permissions essentially provides the application with access to pieces of our identity, some of which we consider private. Had we known better, we’d probably hesitate to select “Accept”.
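
To make those friendly labels a bit more concrete, here is a minimal Kotlin sketch (mine, not from the original post) that checks a few of the concrete Android permissions sitting roughly behind the “Phone”, “SMS”, and “Identity” labels. The exact grouping has changed across Android and Play Store versions, so treat the mapping as illustrative only.

```kotlin
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import androidx.core.content.ContextCompat

// Illustrative mapping only: the store-facing labels bundle several concrete
// permissions, and the grouping has changed across Android/Play Store versions.
val labelToPermissions = mapOf(
    "Phone" to listOf(
        Manifest.permission.READ_PHONE_STATE, // phone status, e.g. whether a call is active
        Manifest.permission.READ_CALL_LOG     // whom you called and when
    ),
    "SMS" to listOf(
        Manifest.permission.READ_SMS,         // read the content of your text messages
        Manifest.permission.SEND_SMS          // send texts, potentially to premium numbers
    ),
    "Identity" to listOf(
        Manifest.permission.GET_ACCOUNTS      // the accounts registered on the device
    )
)

// Print which of these permissions have actually been granted to the app.
fun auditPermissions(context: Context) {
    labelToPermissions.forEach { (label, permissions) ->
        permissions.forEach { permission ->
            val granted = ContextCompat.checkSelfPermission(context, permission) ==
                PackageManager.PERMISSION_GRANTED
            println("$label / $permission -> granted=$granted")
        }
    }
}
```

Seen this way, a label like “Phone” is not an abstraction: it stands for concrete access to things like your call state and call history.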

As my colleague Darren Platt put it:

“It may be more of a matter of convenience – where we’re all now just ‘accepting’ the privacy status-quo, because, as end-users, there is no way for us to negotiate such settings with application providers – it’s typically all or nothing, and the user either a) isn’t given a whole lot of options to make privacy decisions, or b) is overloaded with complex options to make such decisions.  Application providers should be required to be more clear about how they plan to use user data – not just that they ‘might’ collect the data. This way, users/consumers can make more well informed decisions.”

The end result of not understanding what we’re sharing? Here’s an example: out of the blue, a couple of weeks ago, I started getting travel reminders on my Android phone on the day of a trip, with very detailed information about the trip.

Good stuff. Except that this alert was derived by “some service” reading my personal and private emails, dissecting and extracting the traveller name, airline name, and confirmation number from one of them, and then presenting the departure time to me as an alert, completely outside of the email application. Something, I can tell you firmly, I never asked for or agreed to (or maybe I agreed to terms without understanding what they really meant; not sure which is worse!).

Or the time I searched for a pair of shoes on Amazon using my mobile browser, closed the browser, and opened a completely new browser session the next day, only to start getting ads in my browser window for shoes exactly matching the brand I had been looking at the previous day.

Wait, what? When did I agree to have my search on Amazon made public or otherwise available to advertisers? Again, I don’t remember granting this access, but I may in fact have done so, not knowing what the end result would be.

I’m not implying that all applications that request access misuse the user’s personal information (and trust), or that they’re universally out there to monetize our personal information on the Internet. For the most part, granting such permissions is necessary for applications to deliver the main functionality and service they provide.

However, in many cases, applications can still “function” with a fraction of the requested permissions, and then ask for additional permissions if and when the user actually uses the other features they offer.
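
That is essentially what Android’s runtime permission model (introduced in Android 6.0) makes possible. As a rough sketch, assuming a hypothetical “send by SMS” feature, an app can hold off asking for the SMS permission until the user actually taps that button:

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class ShareActivity : AppCompatActivity() {

    // Registers a launcher for the system permission dialog; the lambda
    // receives true or false depending on what the user decided.
    private val requestSmsPermission =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) sendBySms() else fallBackToOtherSharing()
        }

    // Called only when the user taps the hypothetical "Send by SMS" button.
    fun onSendBySmsClicked() {
        val alreadyGranted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.SEND_SMS
        ) == PackageManager.PERMISSION_GRANTED

        if (alreadyGranted) {
            sendBySms()
        } else {
            // Ask for SEND_SMS only now, in the context of the feature that needs it.
            requestSmsPermission.launch(Manifest.permission.SEND_SMS)
        }
    }

    private fun sendBySms() { /* placeholder: the actual SMS-sending logic */ }
    private fun fallBackToOtherSharing() { /* placeholder: e.g. share via email instead */ }
}
```

The app still has to declare SEND_SMS in its manifest, but the user is asked for it only in context, which is exactly the “fraction of permissions up front, the rest on demand” behaviour described above.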

In other cases, the suspect requests for access to personal info are more obvious: why does a mobile photo-editing app that removes red-eye need access to my phone calls? Why would a stock-quote app need to know my location at all times?

It’s time to look for more transparent and standards-based approaches for user-controlled or user-consent-based information sharing.

How can we return control of the personal info being shared, back into the hands of its rightful owner: the end-user?

The good news is that open standards exist today (such as OAuth 2.0) that can help us get there. When a user needs to grant an application permission to access “something” about or belonging to them, these standards support incremental authorization: the application first requests the initial permissions it needs for its ‘core’ features, and later requests additional permissions on an as-needed basis. Apple has implemented incremental authorization in iOS, and Microsoft, Google, and many other companies have also started supporting this approach for applications accessing their online APIs:

“For example, if your app allows users to save music playlists to Google Drive, you can ask for basic user information at sign-in, and later ask just for Google Drive permissions when the user is ready to save their first playlist. At this point, the consent dialog box asks the user only for the new permissions, which gives them a simpler, in-context decision to make.”
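
As a rough illustration of how that second, incremental request can look against Google’s OAuth 2.0 endpoint (this sketch is mine, not from the original post; the include_granted_scopes parameter is Google-specific, and the client ID and redirect URI are placeholders):

```kotlin
import java.net.URLEncoder

// Placeholders: substitute the values from your own client registration.
const val CLIENT_ID = "1234567890-example.apps.googleusercontent.com"
const val REDIRECT_URI = "com.example.app:/oauth2redirect"
const val AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

// Builds an authorization URL that asks only for the newly needed scope.
// include_granted_scopes=true (Google-specific) folds previously granted
// scopes into the resulting authorization, so nothing has to be re-asked.
fun incrementalAuthUrl(newScope: String): String {
    fun enc(value: String) = URLEncoder.encode(value, "UTF-8")
    return AUTH_ENDPOINT +
        "?client_id=${enc(CLIENT_ID)}" +
        "&redirect_uri=${enc(REDIRECT_URI)}" +
        "&response_type=code" +
        "&scope=${enc(newScope)}" +
        "&include_granted_scopes=true"
}

fun main() {
    // At sign-in: ask only for basic profile information.
    println(incrementalAuthUrl("openid profile"))
    // Later, when the user saves their first playlist: ask just for Drive access.
    println(incrementalAuthUrl("https://www.googleapis.com/auth/drive.file"))
}
```

Because earlier grants are folded into the new authorization, the user only ever sees a consent screen for the scope they have not yet approved.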

But even in this ad-hoc, as-needed permission model, I’d argue that users are not really sure why they are sharing a specific piece of private information, or where the final destination of the shared information is.

Note that mobile applications, and even the device operating system, can collect private user information and then facilitate sharing of that information among other entities, such as mobile/wireless operators, device makers, mobile operating system providers, advertisers, and data analytics services.

With all the ‘hands’ involved, whom does the user go to if they want to control their privacy (or lack thereof), or at least get a better handle on what they’ve shared and with whom?

A couple of years ago, the FTC (Federal Trade Commission) published a paper suggesting great guidelines for mobile privacy disclosures.

Among the recommendations, here are some that stand out:

  • Consider developing a one-stop “dashboard” approach to allow consumers to review the types of content accessed by the apps they have downloaded;
  • Consider offering a Do Not Track (DNT) mechanism for smartphone users. A mobile DNT mechanism, which a majority of the Commission has endorsed, would allow consumers to choose to prevent tracking by ad networks or other third parties as they navigate among apps on their phones (see the sketch after this list);
  • Consider providing consumers with clear disclosures about the extent to which platforms review apps prior to making them available for download in the app stores, and conduct compliance checks after the apps have been placed in the app stores;
  • Consider developing icons to depict the transmission of user data.
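
For context on the DNT item above: on the desktop web, the Do Not Track preference is expressed as a single HTTP request header that servers may (voluntarily) honour. Here is a minimal Kotlin sketch of my own using the OkHttp library, with a placeholder endpoint, just to show how small the signal itself is:

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request

fun main() {
    val client = OkHttpClient()

    // "DNT: 1" means the user has asked not to be tracked; honouring it is
    // entirely up to the receiving ad network or analytics service.
    val request = Request.Builder()
        .url("https://ads.example.com/pixel") // placeholder endpoint
        .header("DNT", "1")
        .build()

    client.newCall(request).execute().use { response ->
        println("Response code: ${response.code}")
    }
}
```

The FTC’s suggestion is essentially to give smartphone users an equally simple, platform-level way to send that kind of signal, and to have it respected, as they move among apps.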

It seems there are some good recipes and standards for strengthening user-driven privacy controls, so this is not a technology problem. Now it’s up to solution providers to incorporate such privacy controls and adopt the appropriate open standards in their products and services to ensure interoperability and transparency.

Should you, as an end user, care?

Do users care enough about their privacy that they will be willing to take control of how their data is used?

As Darren says, “Today the answer is clearly: not so much. We accept the apps as they are; we install them without truly knowing what’s being collected and how it’s being used. We deal with privacy today the same way we react to traditional software EULA/license agreements: click Accept, Submit. So while some of the up-and-coming standards provide the technical capability for companies to give users more control of how their data is shared, I think a big question is whether users really want the hassle that is associated with making privacy decisions. Why is this important? Companies won’t provide these features unless the users themselves demand it.”

Are mobile privacy control solutions similar to a gluten-free diet: good for people who are allergic to gluten, irrelevant to everyone else? Is it a generational concern?

Would a dashboard telling you “who” has access to your private data, and letting you revoke that access, help increase your trust in mobile applications?

Would adopting DNT (similar to DND in the telco world) be a helpful tool in your hands?

Is shopping only from curated app-stores (moving the burden of vetting apps for privacy concerns onto app-store owners) a good idea?

Let me know your thoughts!

Special thanks to my colleagues Darren Platt and Salah Machani for contributing to this blog.

 

The post The P Word, in an Online World appeared first on Speaking of Security - The RSA Blog and Podcast.
