Making the claim “users want control” is the same as saying you don’t know what users want, you don’t know what is good, and you don’t know what their goals are.
I first started thinking about this during the debate over what would become the ACA. The rhetoric was filled with this idea that people want choice in their medical care: people want control.
No! People want good health care. If they don’t trust systems to provide them good health care, if they don’t trust their providers to understand their priorities, then choice is the fallback: it’s how you work the system when the system isn’t working for you. And it sucks! Here you are, in the middle of some health issue, with treatments and symptoms and the rest of your life’s duties, and now you have to become a researcher on top of it? But the politicians and the pundits could not stop talking about control.
Control is what you need when you want something and it won’t happen on its own. But (usually) control isn’t what you want; it’s just a means.
So when we say users want control over X – their privacy, their security, their data, their history – we are first acknowledging that current systems act against users, but we aren’t proposing any real solution. We’re avoiding even talking about the problems.
For instance, we say “users want control over their privacy,” but what people really want is some subset of:
- To avoid embarrassment
- To avoid persecution
- … sometimes for doing illegal and wrong things
- To keep from having the creeping sensation that they left something out in the open that they didn’t mean to
- To make some political statement against surveillance
- To keep things from the prying eyes of those close to them
- To avoid being manipulated by bad-faith messaging
There are no easy answers, and not everyone holds all of these desires, but these are concrete ways of thinking about what people want. They don’t all point in the same direction. (And then consider the complex implications of someone else talking about you!)
There are some cases when a person really does want control. If the person wants to determine their own path, if having choice is itself a personal goal, then they need control. That’s a goal about who you are, not just what you get. It’s worth identifying the moments when this is important. But if a person does not pay attention to something, then that person probably does not identify with the topic and is not seeking control over it. “Privacy advocates” pay attention to privacy, and derive a sense of identity from the very act of being mindful of their own privacy. Everyone else does not.
Let’s think about another example: users want control over their data. What are some things they want?
- They don’t want to lose their data
- They don’t want their data used to hold them hostage (e.g., to a subscription service)
- They don’t want to delete data and have it still reappear
- They want to use their data however they want, but more likely they want their data available for use by some other service or tool
- They feel it’s unfair if their data is used for commercial purposes without any compensation
- They are offended if their data is used to manipulate them or others
- They don’t want their data used against them in manipulative ways
- They want to have shared ownership of data with other people
- They want to prevent unauthorized or malicious access to their data
Again, these motivations often work against each other. A person wants to be able to copy their data between services, but also to delete their data permanently and completely. People don’t want to lose their data, but having personal control over your data is a great way to lose it, or even to lose control over it. The professionalization and centralization of data management by services has mostly improved access control and reliability.
When we simply say users want control, we’re giving up on understanding people’s specific desires. Still, it’s not exactly wrong: it’s reasonable to assume people will use control to achieve their desires. But if, as technologists, we can’t map functionality to desire, it’s a bit of a stretch to imagine everyone else will figure it out on the fly.
Comments
You are wrong about healthcare. People want control in the sense that they do NOT want this lame in-network/out-of-network doctor business; they want to be able to see the doctors they are comfortable with, period. You just don't get it!
The positive desire you describe is something useful for discussion: (a) people want to find a doctor they like, and (b) people want a long-term relationship with that doctor. Those are suitable discussion topics. My assertion here is that "control" is a very poor lens through which to discuss those issues.
There's a ton of concrete things that make it hard to achieve those goals. If you have to change plans and your doctor goes out of network, that's one problem. But there are also a dozen other challenges that are hard to categorize as "control" – for instance, we had to leave a pediatrician we liked because the scheduling system made it impossible to see her on short notice (i.e., with a sick kid). I have a hard time calling that an issue of control.
Furthermore, "control" is a way to avoid the real discussion. Do you want to take seriously the issue of creating and maintaining good doctor relationships? There's a ton to consider there. Wrapping it all up in "control" is a way of letting everyone project their desires into a muddled mess, then using the confusion to reframe those desires into a preexisting agenda. The result is neither honest nor critical thinking.
Agree with this post.
It's easier to build products that give users what they want when users are aligned on what they want – which is why one way to be successful is to target a well-defined audience. Then don't change that target without considering whether your decisions can actually serve both old and new users.
On a bit of a tangent, with the healthcare topic you evoked:
Agree that expecting end-users to exercise "control" is often a symptom of a systemic failure to provide good defaults, or even good options of any sort. Obvious/intuitive, fair, good-quality, and sustainable solutions should be presented, in a system that is set up humanely. There should be no "clever trick" to getting the right outcome to happen. All comers should be able to arrive where they're trying to go and do what they're trying to do.
"Choice" and "control" are probably best encountered when the options are of equivalent quality, or where quality isn't an (objective/measurable) factor. But so often "choice" and "options" are just there to exploit the fact that, psychologically, people will accept a situation they feel they have chosen for themselves. A person will be less likely to get upset if they pick their own flavor of pain from several on offer. (This brings to my mind choosing a flavor of lollipop at the dentist, or at the doctor's when getting a flu shot.)
Yes, customization and choice should be present somewhere in a humane system. But it is good to ask why the choices we see are there. Healthcare in the United States is a series of Catch-22s. In many other developed countries, choosing a doctor (the basic "choice" most expect in healthcare) is possible, yet their systems don't suffer from the dysfunction and waste ours does... Notice when "choice" is used to distract from, and hide, corruption – placing blame on end-users rather than on the shot-callers and grifters who have rigged up each of the options to be poor-quality or expensive (or, too often, both!), and generally inadequate for a fair and humane society.
tl;dr: when "choice" is used to justify something controversial, first check whether most or all of the options on offer are inadequate. Ask who really benefits, all things considered.
This is right on target. Users have goals in mind, and they aren't concerned with having a lot of choice in how to achieve them. If there is only a single path, they'll gladly take it if it gets them there. (There is the obvious exception when the goal is personalization or the like, but there the choices are generally cosmetic.)
The best designs are based on limiting choice. Do I really need a manual choke for my car? Wouldn't it be better to have it adjust the fuel mixture automatically as the engine warms? How many cars still have manual chokes, and how many people are clamoring for them?
It's interesting to see how often a single product succeeds because it reduces choices. Consider the Model T Ford, the Apple Macintosh, or Social Security. If you've experienced choice overload while picking a toothpaste lately, you know what I'm talking about.
Thank you for saying this.
I think "users want control" mostly just means "some users aren't happy with what they have".
If users are falling back on wresting control for themselves, that might mean they feel they need control to be able to choose the least bad option. Or it might mean that only they can determine which option is best for them. Or it might even mean that users want a different choice from those provided.
Those are three wildly different meanings, and just saying that users want control doesn't give you any clue as to which one it is – and even worse, it's ambiguous in a way that three people could each think something different, yet all nod and agree with the statement "users want control".
Yes, and there are many different directions you can take from the need for defaults.
Do you want defaults that favor sites that want to wrestle as much data out of users as possible, or ones that favor what the user likely wants?
How about a simple, happy mechanism – you know, like an HTTP header – that says the user would rather have the minimum amount of tracking necessary to provide services, and be notified when a service requires tracking? Then honor that, instead of just prompting the user with a long list of how they're tracked when they turn the header on.
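Something like this already half-exists as the Do Not Track (`DNT`) request header. A minimal sketch of a server honoring it – the handler, the `X-Tracking-Notice` response header, and its value are illustrative assumptions, not any standard:

```typescript
import { createServer, IncomingMessage, ServerResponse } from "http";

// "DNT: 1" is the real Do Not Track request header; browsers send it
// when the user asks for minimal tracking.
function wantsMinimalTracking(req: IncomingMessage): boolean {
  return req.headers["dnt"] === "1";
}

const server = createServer((req: IncomingMessage, res: ServerResponse) => {
  if (wantsMinimalTracking(req)) {
    // Honor the preference: skip analytics entirely, and if a feature
    // genuinely requires tracking, say so once via a response header
    // (hypothetical name) instead of a wall of consent prompts.
    res.setHeader("X-Tracking-Notice", "search-suggestions-require-history");
  }
  // Otherwise fall back to the site's defaults.
  res.end("ok");
});

server.listen(8080);
```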
Totally disagree. You sound like a mouthpiece for a company trying to convince users to give up control.
"People don’t want to lose their data, but having personal control over your data is a great way to lose it, or even to lose control over it." – If we flip this statement, we get the reality: "Companies don’t want to lose their data, but letting users have control over their data is a great way to lose it, or even to lose control over it."
It seems like you're missing the point, or at least my interpretation of it. This post isn't about poor behavior on the part of companies. It's about a way to dig deeper into what users want than just "control" so that you can fashion solutions that really get at what users want.
That "reality" you paint is just user-hostile behavior, even as described by elements of the list of other ways to think of "control over data": "They don’t want to delete data and have it still reappear".
A company may think they're "giving users control over their data" by providing an export function. What Ian's getting at is that "control over data" means a bunch more things, including allowing users to delete their accounts and _to really delete it_.
I agree with both parts.
Simply put, have more sane defaults instead of relying on a huge list of opt-outs.
If a privacy-invasive function isn't necessary to drive features, it should be off by default unless the user gave a universal opt-in for the broad category (analytics, targeted advertising, etc.).
If it is necessary, the user should be prompted once, in a non-intrusive manner, to inform them that enabling the feature might reduce their privacy with the service.
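A minimal sketch of that decision logic, assuming a made-up `ConsentStore` shape and category names (nothing here is a real API):

```typescript
// Hypothetical category names and store shape, for illustration only.
type Category = "analytics" | "targeted-advertising" | "personalization";

interface ConsentStore {
  universalOptIn: Set<Category>; // broad categories the user opted into
  informedAbout: Set<string>;    // features the user was already told about
}

function mayCollect(
  store: ConsentStore,
  category: Category,
  feature: { id: string; necessary: boolean }
): boolean {
  if (store.universalOptIn.has(category)) return true; // broad opt-in given
  if (!feature.necessary) return false; // invasive but optional: off by default
  // Necessary to drive the feature: allowed, but inform the user once.
  if (!store.informedAbout.has(feature.id)) {
    store.informedAbout.add(feature.id);
    console.log(`Note: "${feature.id}" may reduce your privacy with this service.`);
  }
  return true;
}
```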
The problem is that the current way to "give the user control" is completely wrong. There are currently two options:
- A massive page to review settings, with no "defaults" or "sort by function",
or, worse yet:
- Settings scattered all around the site/product, with no central privacy page at all.
Simply put, users only want control when exercising it is non-intrusive and intuitive, with strong inherited defaults and simple, user-friendly explanations when overrides are needed.
We need to end the practice of simply shrugging and expecting users to choose between signing away their rights or wading through a several-hundred-entry fine-grained control list. There has to be a happy medium between these two options, even if the long list should remain for those who ACTUALLY want to override it.
For instance, I would like to whitelist a few "well behaved" advertisers to target my advertising and collect analytics on me, while leaving all others to default to "no". A searchable, fine-grained list would be useful for this.
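A sketch of what those inherited defaults plus a searchable override list might look like – every name here is hypothetical:

```typescript
// Category defaults deny everything; named parties can be allowed per category.
interface PrivacyPrefs {
  defaults: Record<string, boolean>;                  // per-category default
  overrides: Record<string, Record<string, boolean>>; // category -> party -> allow
}

const prefs: PrivacyPrefs = {
  defaults: { "targeted-advertising": false, analytics: false },
  overrides: {
    "targeted-advertising": { "well-behaved-ads.example": true },
    analytics: { "well-behaved-ads.example": true },
  },
};

// A party's permission is its override if present, else the category default.
function isAllowed(prefs: PrivacyPrefs, category: string, party: string): boolean {
  return prefs.overrides[category]?.[party] ?? prefs.defaults[category] ?? false;
}

// The "searchable fine-grained list": filter override entries by substring.
function searchOverrides(prefs: PrivacyPrefs, query: string): string[] {
  return Object.values(prefs.overrides)
    .flatMap((byParty) => Object.keys(byParty))
    .filter((party, i, all) => party.includes(query) && all.indexOf(party) === i);
}
```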
On privacy, there are some common-sense rules that should always be followed.
However, there are some options some users want and some don't.
For instance, I like having the ability to tell an ad engine not to show me more ads of a particular type.
However, that involves them giving me a UUID or flag set so they can do that. I'm fine with that because, honestly, they are not that into me. However, other people would not like the idea of an advertiser knowing so much about their preferences.
I do, however, prefer to opt out of tracking by any ad company that doesn't offer this feature.
In other words, in this case, it is opt-in.
To put it simply, every case of users wanting control should be an opt-in to the more invasive option. The opt-out approach is a cop-out.
Some privacy-invasive features are really useful. For instance, if I search for a brick-and-mortar establishment, it's nice for Google to use my location. However, this is nothing that couldn't be done with an additional per-search option. Also, even without location information, I can just specify it manually in the search terms.
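For what it's worth, a sketch of that per-search idea – `searchWeb`, its parameters, and the URL are all made up for illustration:

```typescript
// Hypothetical sketch: location is only attached when the user asks for it
// on that specific query, instead of being tracked by default.
interface SearchOptions {
  useLocation?: boolean; // off unless the user opts in for this one search
}

function searchWeb(query: string, opts: SearchOptions = {}): URL {
  const url = new URL("https://search.example/q");
  url.searchParams.set("q", query);
  if (opts.useLocation) {
    // Only now would the client attach coarse location data.
    url.searchParams.set("near", "me");
  }
  return url;
}

// Default: no location. Opt in explicitly, per search:
searchWeb("hardware store");                        // no location sent
searchWeb("hardware store", { useLocation: true }); // location attached
```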