
What if someone offered $6,495 for your private data? Would you sell?

What follows is a fictionalized vision of a possible future filled with Data Exchange Networks (DENs) designed to bring the process of private data collection out into the open.

. . .

February 5, 2029

As a fractional research scientist, Lynn Thomas uses her talents to aid a number of clients – from university labs that need an extra set of eyes on experimental design, to corporate R&D departments conducting optical glass experiments, to startups working on new protein-based sweeteners. In 2028, she managed six retainer clients (including one startup where she took equity instead of cash) and felt like she earned a good living. 2029 looks just as good.

But her experience working for an energetic founder infected her with the startup bug. Lynn has nursed her own idea for a new type of photovoltaic paint since she first read about the concept as a graduate student.

It’s time, she thought. Time to put up or shut up.

The problem is money.

It’s always money with startups, and that’s especially true in the hard sciences. At this early proof-of-concept stage, she doesn’t need much money – just enough to purchase the synthesizing equipment, raw materials, and lab time. She figures about $4,000 will cover it – $5,000 to be safe. She’s too early for angel or venture capital funding. She’s also too early for legit crowdfunding sites; they want a promise of a deliverable at the end. She’s doing early-stage science. She has no idea if anything will come of her work. It’s too risky. She is on her own.

How will she do it? Take on another client? No. She’s already maxed out, and if she took one on, she wouldn’t have the spare time she needs. Thirty years ago, she might have begged friends and family for the spare cash to fund her startup. Luckily, she has another option.

In 2029, she has the option to sell her private data.

#

Lynn Thomas prides herself on her rational mind. It got her a scholarship to a private high school, internships at the National Institutes of Health, two master’s degrees paid for by corporate sponsors, and a Ph.D. from Oxford. Still, selling private data on a Data Exchange Network (DEN) seems a bit sketchy. She had a friend who used one … that DEN ended up selling his data to a dating site, much to the chagrin of his partner. Other DENs are known for bombarding you with advertising. Most DENs don’t pay very well. It’s that last fact that’s the real problem.

But one does pay well: The MENSA DEN.

Perfect, she thought. MENSA made the decision ten years ago to begin cashing in on its membership base. However, they couldn’t simply sell member data. Not only was their data set less detailed than they had thought, but their average member was also too smart to let them do it without getting paid. (Makes sense, huh? They are MENSA members.) So, MENSA cut a deal: You let us market your data to interested parties, and we will share the revenue with you. Members decide what to share (and what not to). A sophisticated auction market determines the prices paid. It’s smart, fair, and rational.

Lynn was a MENSA member. That meant she could give the MENSA DEN a try. What did she have to lose?

#

“Siri, open the MENSA DEN,” Lynn said.

“Okay, Lynn. I found it,” the automated voice replied. “The MENSA DEN checked your records and confirmed that you have an active membership in the MENSA organization, but not a DEN account. They say you need to complete a profile before you can enter the marketplace. Do you want to proceed?”

“What kind of information do they want?”

“I’ll check. They say they want some basic demographic information, most of which you already provided in your organization membership. Specifically, they’re missing your current physical address, gender identifier, biological gender, and family status.”

Ugh, Lynn thought. That’s already more personal than she was hoping for. But she swallowed her discomfort and continued. Eye on the prize, she thought.

“Ask them what security measures are in place.”

“Good question, Lynn. It seems like they anticipated that. I have a full encryption schematic you can view on the main screen. It’s similar to the one you and I use to communicate: Two-stage blockchain with polynomial and fractal encryption. It’s not perfect, but the task of breaking it would require a dedicated government-level quantum supercomputer running for 82.5 hours. The risk of a breach seems reasonable.”

“Agreed. Let’s go. But set a reminder to change our MENSA DEN credential password every 60 hours or so.”

“Smart precaution. Done. I’ll now open the secure link.”

Lynn proceeded to share her physical address, her gender identifier (she/her), her biological gender (female), and family status (living alone, no children).

Deep breath, she thought. I’m in.

#

“Okay, Lynn, the MENSA DEN found seven offers for you to consider. I’ve posted them to your mobile screen. Where would you like to start?”

Hmm, Lynn thought.

That’s more options than she imagined there might be. Siri had asked a good question. Where do you start on a journey like this? You’re selling a part of yourself to the highest bidder. “Social media” seemed like the easiest place. Fewer people share personal details on those sites, especially since Facebook imploded. Today, most people use any number of “virtual reality” or “VR” social networks to meet up with friends around the world. You have to pay to use most of those. What could they want? Lynn thought.

“Let’s start with social media. I’m interested in what they’re offering,” Lynn finally responded.

“Good choice, Lynn. The first is a scientist-specific VR meetup group. They were founded in Kuwait and have had trouble attracting female members. Your profile fits their criteria, and they are willing to bid $12.50 per month for you to log in at least three times for 30 minutes each during the month.”

Lynn did the quick math. $12.50 for 90 minutes was less than $10.00 per hour. More to the point, it would take 33 years to make the $5,000 she needed. But perhaps there was other value to be had. Maybe she could build relationships with other scientists and collaborators along the way?

“Siri, go ahead and counteroffer with $30.00 per month, same time commitment.”

“Understood. I’m submitting the bid now.”

There’s no way they’ll…

“Response received. They countered with $25.00 per month for four sessions. They’ll pay the first month in advance.”

Better. Not great, but better. Lynn considered for a moment.

“Go ahead and accept that offer. Let’s keep looking.”

“Okay, let’s move on to an easy one,” Siri responded. “I have 15 businesses in your area that will provide discounts for dinners, events, and performances if you allow them to track your physical location whenever you get within 10 miles of their facility. I’ve added the list to your mobile screen along with a map overlay of your typical travel patterns. Only six of them overlap.”

Lynn examined the map. Siri was right. Six of the 15 were in her daily routine. She touched the screen in four places.

“Let’s go with these four,” Lynn decided.

“Confirmed. Where to next?”

Another good question. So far, Lynn realized, she had accepted only $25 (per month, yes, but only $25 today) and four dinner coupons. Not so good.

“Siri, let’s re-sort the list from largest potential revenue to smallest.”

“Okay, I finished re-sorting your list. The largest opportunities are in the health information category. I’ve taken the liberty of cross-referencing the opportunities list with your private genetic workup. The results are on the main screen.”

Lynn looked up. Ah, there we go. Here’s the bigger money. She examined the details on the screen.

The first opportunity was a breast cancer clinical study based on her unique BRCA gene variant, for $3,250. She would be part of a control group, meaning she wouldn’t have to do anything other than keep doing what she was doing. And as a bonus, she would get to read the resulting research.

The second opportunity was a pharmacological study on a synthetic cannabis derivative. This one was a “double-blind” study, meaning she would not know what she was getting, and neither would the researchers. There was a link to a 32-page disclosure and waiver document. They were offering $2,750.

The third was a biofeedback device that used light therapy to lower cholesterol levels. Since she had inherited a gene correlated with high LDL levels from her mother, the researchers would double the normal payout of $750 to $1,500. She would need to use the device as directed (and tracked via an IoT connection) for three months and complete twice-monthly blood tests.

This was a tough decision. If she said “yes” to all of them, she would have all the money she needed…and more. But they weren’t created equal, and none would accept counteroffers. It was a “take it or leave it” situation.

“Okay, Siri,” Lynn said after a long minute. “Let’s accept the gene study and the biofeedback device. I’m not comfortable with the risks in the cannabis study.”

“Understood. The contracts are accepted. You will receive detailed instructions via a VR-mail later this week. Should I give the cannabis study authors the reason for your rejection?”

“Sure, tell them I’m not comfortable with the risks of not knowing what I’m getting. They could have been clearer, up front, about protections.”

“Understood. Feedback submitted. If they answer your questions, are you willing to reconsider?”

“No, I don’t think so. Mute their responses.”

“Will do.”

Over the course of the next 20 minutes, Lynn walked through a number of other auctions and offers. Siri knew Lynn was a “gig worker” and removed any explicit job offers disguised as information sharing. Lynn did consider one that was essentially a beta test of new lab software … but she had enough on her plate. She instructed Siri to save that one for 30 days.

One interesting organization wanted her complete purchase history of all food and beverage products for the past 18 months. They offered $300, but Lynn negotiated the initial offer and closed the auction at $445. What the heck? It was just “food” and not “all purchases,” so the risk was low. And besides, they offered to share research findings that were personalized to her habits. She didn’t need to lose any weight, but she had been working on improving her muscle density. Who knows? Maybe she’ll learn something useful.

Three religious organizations wanted her to donate her information so they could better profile their target members. She turned them all down.

The political organizations were a different story. The two major parties wanted free information (another “no”), but science-focused interest groups wanted her research notes to write up case studies to teach young people about the scientific method. They had a grant from the National Science Foundation, and they were offering $425 per unpublished lab book. She was under NDA with two of her five projects that qualified, but she accepted the others.

#

“Okay, Siri. Where are we at?” Lynn asked.

“I calculate $6,495 in total accepted contracts, with $25 per month continuing until you cancel the VR meetup group participation with the Kuwaiti-based organization. Do you want to continue and expand your search?”

“No, that’s all for now. Go ahead and exit the MENSA DEN, but remind me to check back in 90 days.”

“Will do. Signing off.”

Lynn breathed a sigh of relief. She had more than enough capital to begin her work – almost 50% more than she needed. She remembered the advice of a graduate advisor: Always assume your research will take twice as long and cost twice as much. If you do, you’ll be covered. She didn’t quite get to twice her initial figure, but she felt good.

“Okay, Siri, let’s go shopping for lab equipment…”

#

Obviously, this is a thought experiment. Lynn Thomas isn’t a real person (yet). It’s not 2029 (yet). Privacy isn’t explicitly for sale in this way (just yet…or is it?).

I have a message for entrepreneurs reading this and wondering how the brokerage service could earn trillions of dollars as the secure intermediary in these transactions: Why aren’t you working on it?

I have a message for consumers reading this and wishing they could finance their dreams using assets they already own…but would be willing to sell under the right circumstances: Why wouldn’t you?

And finally, I have a message for all those tech leaders who feel that consumers will continue to give away private information for free because of your “unicorn” technologies: They won’t.

Lynn’s world is coming. It’s about time we all caught up.

#

About Jason Voiovich

Jason’s arrival in marketing was doomed from birth. He was born into a family of artists, immigrants, and entrepreneurs. Frankly, it’s lucky he didn’t end up as a circus performer. He’s sure he would have fallen off the tightrope by now. His father was an advertising creative director. One grandfather manufactured the first disposable coffee filters in pre-Castro Cuba. Another grandfather invented the bazooka. Yet another invented Neapolitan ice cream (really!). He was destined to advertise the first disposable ice cream grenade launcher. But the ice cream just kept melting!

He took bizarre ideas like these into the University of Wisconsin, the University of Minnesota, and MIT’s Sloan School of Management. It should surprise no one that they are all embarrassed to have let him in.

These days, instead of trying to invent novelty snack dispensers, Jason has dedicated his career to finding marketing’s north star, refocusing it on building healthy relationships between consumers and businesses, between patients and clinicians, and between citizens and organizations. That’s a tall order in a data-driven world. But it’s crucial, and here’s why: As technology advances, it becomes ordinary and expected. As relationships and trust expand, they become stronger and more resilient. Our next great leaps forward are just as likely to come from advances in humanity as they are advances in technology.

If you care about that mission as well, he invites you to connect with him on LinkedIn. If you’re interested in sharing your research, please take the extra step and reach out to him personally at jasonvoiovich (at) gmail (dot) com. For even more, please visit his blog at https://jasontvoiovich.com/ and sign up for his mailing list for original research, book news, & fresh insights.

Thank you! Gracias! 谢谢!

Your fellow human.

#



You don’t have a right to privacy. You have something better.

What if there was no right to privacy?

That question triggers a surge of righteous rage in many people, especially in the Western world. We rank “privacy” right up there with “free speech” and “freedom of worship.” But as we’ve seen (especially in the past 20 years of the information revolution), the notion of privacy has morphed into something more complicated.

To those who lived through the transition from the pre-information to the post-information era, this new reality catches us off guard. In the 1980s, privacy was easy. You knew if you were anonymous. You chose to go public. But in 2019, privacy is challenging. Much of the time, you can’t tell if your actions are public or private – surveillance cameras, GPS trackers, and web tracking are so common that the average person could spend an entire day reading privacy policies and never understand half of them.

At the root of the anger is a contradiction: We want the benefits of modern technology without the intrusions into privacy they require. We don’t want our cars to know where we are … but we want GPS navigation. We want low health insurance rates … but we don’t want to share our dietary and exercise habits. We don’t want advertisers listening in to our conversations … but we want the best deals on products and services tailored precisely to us (without having to endure all of the other advertising).

To put it more simply, privacy is like celebrity – we want the kind we can turn on when we want something and turn off when we don’t. We want enough “celebrity” to get a good table at a busy restaurant … but not enough to get followed by paparazzi. We want enough “privacy” to keep our political beliefs to ourselves … but still get access to Facebook and Google.

Ask any true celebrity. You can’t have both.

It’s the same with privacy. There is no free lunch.

The issue is how we’ve defined privacy. Merriam-Webster sums it up quite well:

privacy | ˈprīvəsē |

noun

  • the state or condition of being free from being observed or disturbed by other people: she returned to the privacy of her own home.
  • the state of being free from public attention: a law to restrict newspapers’ freedom to invade people’s privacy.

That definition didn’t burst forth from the earth fully formed. It has a basis in law in the United States. Although the word “privacy” appears nowhere in the US Constitution, federal and state privacy laws cover plenty of ground. We can categorize privacy into four main groups:

  • Intrusion of solitude: physical or electronic intrusion into one’s private quarters (usually, that means your home, but it can mean other private spaces as well, such as bathrooms and your car).
  • Public disclosure of private facts: the dissemination of truthful private information which a reasonable person would find objectionable (the modern practice of doxxing falls into this category, and it is illegal in some places).
  • False light: the publication of facts which place a person in a false light, even though the facts themselves may not be defamatory (libel and slander laws fall into this general area as well, and it gets complicated).
  • Appropriation: the unauthorized use of a person’s name or likeness to obtain some benefits (aka impersonating someone else).

Many states build on federal statutes with their own, more restrictive, laws. Many of those state laws cover technological intrusions explicitly.

With GDPR, the European Union went even further, creating an entire legal framework specifically addressing a modern concept of privacy in a technologically-powered world. It’s a new set of rights and rules that apply to everyone in the EU (as well as a limited set of rights for everyone else).

In many other countries, almost the opposite situation exists. In many places, the concept of privacy is subsumed by the interests of the state. China comes to mind immediately, but it is hardly the only one. Those countries made the choice that the benefits of total surveillance outweigh the desires of their populations to keep to themselves.

But beyond the legal frameworks and philosophies, the concept of privacy varies by generation. People who lived before the information revolution see privacy differently than those born after it started. Younger people tend to accept the tradeoffs more readily, or at least they don’t think about the downsides quite so much until something very negative occurs (online bullying being an obvious example).

If privacy can vary so much by law, by country, by culture, and by generation, then logic holds that privacy cannot be a “natural” right.

If that’s true, whatever gave us the idea that we have a “right” to privacy?

 

Remember taxation without representation? Today’s privacy problem is exposure without consent.

Before the privacy equivalent of the Boston Tea Party breaks out, a (very) brief (and oversimplified) history lesson is in order.

The concept of “privacy” is, in historical context, a new idea. Pre-agriculture hunter-gatherer bands never had privacy. They traded it for the security of the group. The first cities weren’t much better. Rulers of those small enclaves knew who lived there and much of what went on, for their own survival. It was only when cities became giants (in the latter half of the 19th century) that anonymity became possible – and therefore that a modern concept of privacy could develop … and then only for the privileged.

But it wouldn’t last. The beginning of the 20th century saw the emergence of the “social contract” – older workers living off the resources of younger ones, universal health care (in some countries), and shared defense and sacrifice. Even then, while you may have needed some sort of government identification, you could (for the most part) live “off the grid,” even deep in the city. In fact, that was part of the appeal of “the big city” for many people. The more people who lived in a given area, the less likely you were to be noticed (if you chose not to be).

That all changed with the advent of the internet and has been accelerating ever since. In some cases, we gave up our privacy willingly for greater social connection (Facebook comes to mind). In other cases, we gave up our privacy unwittingly for the implicit promise of better products and services (Google comes to mind). We can cite hundreds of other examples. But while there are definite downsides to this new era of interconnectedness, in most cases we gave up our privacy for the better quality of life these technologies offered.

Here’s the catch: To function, the technologies require ever-increasing transparency. You can’t remain completely private and still retain all the benefits.

For only a brief window in recent history has there been any true concept of privacy based on the choice to remain anonymous. During that short time, we tasted privacy, we liked privacy, and now we feel that privacy is slipping away.

In other words, privacy as we have defined it and as we understand it is a myth. It’s our poor definition of privacy that sits at the root of our frustration.

It’s time we redefined it.

 

Privacy as a right versus privacy as an asset.

Let’s consider a new definition of privacy as an asset:

asset | ˈaset |

noun

  • a useful or valuable thing, person, or quality: quick reflexes were his chief asset | the school is an asset to the community.
  • (usually assets) property owned by a person or company, regarded as having value and available to meet debts, commitments, or legacies: growth in net assets | [as modifier] : debiting the asset account.

What happens when we do that? Let’s highlight the key differences:

  1. Privacy as a “right” – the state or condition of being free from being observed or disturbed
  2. Privacy as an “asset” – a useful or valuable thing, person, or quality

Do you notice something about the first definition? As a “right,” privacy is something others grant us as individuals. It is the condition of being “free from being observed or disturbed.” Do you notice something about the second definition? Privacy is a thing of value that you own. It is a “useful or valuable thing.”

That simple shift makes all the difference.

Our new definition transforms privacy from something others control (they choose not to intrude on us) to something you control (you choose to protect your asset). This may seem like a trivial distinction, but it’s not.

Privacy is still all about choice. It’s simply a matter of whose choice. Shouldn’t it be you?

It may seem odd to think of privacy that way at first: You can’t own a bushel of privacies. There is no stock market for privacy securities. You can’t pay your mortgage from your privacy account. But that’s because we’re confining the definition of an asset to something tangible. Assets are not simply physical objects. The real value of privacy in the information age is information itself. That’s all privacy is – an information asset.

When we begin to think about privacy as an information asset, we immediately see a number of benefits:

  1. Instead of an abstract right, privacy as an information asset has measurable value. In other words, we can convert privacy into information that could be sold, traded, or invested.
  2. The act of quantifying our privacy and organizing it into categories illuminates its value. In other words, privacy is a set of assets available for your personal exploitation and benefit.
  3. Because privacy is a quantified asset, it’s also divisible. That means there’s more to privacy than “all or nothing.” You can choose some information to remain private, some to share, and some to sell or invest.

What does that mean in a real situation? You can decide to give away your private information to use Google Maps or Alexa. You can weigh the pros and cons. The choice not to use one of these services may be difficult or costly, but it is your choice.

 

Your privacy information asset portfolio.

At this point, many people are confused. That’s natural. Yes, we follow the argument: (1) privacy is a modern creation; (2) privacy (as we know it) is eroding quickly in the face of technological innovation; and (3) it is more useful to think of privacy as an information asset rather than some sort of inalienable right.

The confusion isn’t with the rational argument – it’s that the implications of redefining privacy are unclear. In other words, how do we manage privacy in our day-to-day lives?

Privacy is unlike other assets. Sometimes, it is quantifiable like money (e.g. your credit score information), but often it is not (e.g. the value of your religious affiliation). Sometimes privacy exists on a spectrum (you can share a little personal information on Facebook, but not everything), but often it is a binary choice (you have shared your location data, or you haven’t).

The confusion is natural.

Information is such a new type of asset that we can be forgiven for wondering how to think about it. Each type of data becomes part of your privacy information asset portfolio. You get to choose how to invest your assets to achieve your objectives. But to invest with confidence, we need clarity on the assets in our portfolio. Let’s explore those assets and how you might decide your allocation strategy:

Social Data

Social data is an easy start. If you use Facebook (and most people at least have a profile), you’ve shared at least some social data. In return, those services provide a way for you to stay connected with family and friends. If they’re free services (and most are), your privacy assets are the product you’re selling in return for those services. If you’ve ever felt like you don’t get much in return for social networking, what you should start saying is: I am paying too much for this. Remember, just because you’re not exchanging money doesn’t mean you’re not exchanging value. Consider switching to a paid social network such as Premo Social. I know people who’ve done it. The modest cost of those services allows you to retain additional privacy, and in effect, “pay” less.

Location Data

This is another easy one…especially in the past ten years. Most (if not all) modern cars have GPS trackers. That technology allows automakers to offer emergency services and car rental agencies to track their cars after you rent them. Many also feature built-in navigation systems. All modern smartphones have the same GPS location functions, allowing Apple, Google, and others to offer driving, transit, and walking directions to wherever you want to go (not to mention sharing that data with other apps). These functions are so common that you can be found by someone almost anywhere you go. Consider learning how to turn off location services when you don’t want to be tracked. Practicing this habit will force providers to ask you to turn them on and make you aware of just how often your location is being shared. If they want your information, they should make a compelling offer of value. If not, just say no.

Purchase Data

If you’re like most people, you make a lot of purchases from a lot of different providers. Who has that information? Banks, sure. Credit cards, them too. Amazon, yes, but less than you think. How about your corner market, Uber, or Amtrak? You may use a combination of credit cards, checks, online bill-pay, cash, and gift cards. Today’s reality is that no one provider knows your entire purchase history; only you do. Services such as Mint are trying to give you greater visibility into your spending by aggregating as many of these different sources as possible. Even if you don’t sign up for one of these services, it’s worth understanding how they work and the value they bring. When one of them offers to pay you for your data (instead of offering the service for “free”), you’ll be ready to decide.

Financial/Credit Data

Here’s the basic idea behind the credit rating agencies: You’re trading an aggregation of your borrowing and payment data for the ability to maintain a “credit score.” You can opt out in many cases (or pay in full, in cash, immediately, for absolutely everything), but a credit score is the inevitable consequence of living in a modern economy. (It’s also useful for borrowing money when you need it.) Do you think about your private credit history as an asset to be managed? You should. Frankly, it’s more constructive than feeling powerless when they make a mistake. You wouldn’t let your bank misplace half your paycheck without making a phone call, would you? Well, have you checked your credit report (for free)? You probably should.

Health Data and Biometrics

This is a bigger category than you may realize. Yes, health data includes your medical records (test results, family history, doctor visits, etc.), but it also includes the biometric data captured by your Fitbit, Apple Watch, or smartphone (number of steps, diet choices, blood pressure, heart rate, etc.). In the future, and in some cases today, you will be able to take advantage of your good habits to negotiate lower insurance rates or sell this information to medical innovators. That’s especially valuable if you have an odd genetic trait or family history. But until there are better protections in place, be careful about sending away for a “low cost” or “free” genetic screening. In the meantime, you can consider signing up for paid pharmaceutical and medical device trials.

Image, Video, and Voice

Pictures of you (or pictures you take), videos of you (or videos you take), and even the sound of your voice have much more value than you realize. Instead of a free social network, why not post those photos and videos to a photo/video sharing network where you could earn some money? Voice is the next generation of human-computer interface, and Silicon Valley is racing to get better at it. They’re being coy about telling you just how much they’re collecting and analyzing because they’re hoping you’ll give it to them for free or for the “use of their product.” Make them give you more for it.

Employment

LinkedIn gets your detailed career history and job-hunting desires for free (are you seeing a pattern here yet?). But with more people becoming “remote,” “virtual,” or “gig” workers, the traditional linear career path will cease to exist. Your job history is more than a series of employers. Your career successes are simply another set of information assets – the entirety of which only you know. Gig job markets may give you a better idea of your true value than a salary benchmark website such as PayScale.com.

Political and Religious Affiliations

Of all the types of private information people have, political and religious information is the type we’re most likely to give away for free. It may seem counterintuitive, or downright wrong, to think of these pieces of information as “assets,” but bear with me. Don’t think about them in terms of money; think in terms of value exchange. Is it worth it to you to support a political cause? And worth the risk of someone not being your friend because they know it? Then by all means, share that information. The same goes for your faith, although in a more complex context depending on the creed.
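To pull the categories above together, here’s a minimal sketch (in Python) of what a personal “privacy information asset portfolio” might look like as a data structure. The categories come straight from this article; the keep/share/sell choices and the dollar values are invented for illustration – no real market quotes these prices (yet).

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    KEEP = "keep private"  # share with no one
    SHARE = "share"        # trade for a service (e.g., free maps or a social network)
    SELL = "sell"          # exchange for direct payment

@dataclass
class PrivacyAsset:
    category: str
    disposition: Disposition
    est_value_usd_per_month: float  # hypothetical market value

# One possible allocation. Divisibility is the point:
# each category gets its own decision.
portfolio = [
    PrivacyAsset("social data", Disposition.SHARE, 8.00),
    PrivacyAsset("location data", Disposition.KEEP, 12.00),
    PrivacyAsset("purchase data", Disposition.SELL, 25.00),
    PrivacyAsset("financial/credit data", Disposition.SHARE, 15.00),
    PrivacyAsset("health data and biometrics", Disposition.KEEP, 60.00),
    PrivacyAsset("image, video, and voice", Disposition.SELL, 10.00),
    PrivacyAsset("employment data", Disposition.SHARE, 5.00),
    PrivacyAsset("political/religious affiliation", Disposition.KEEP, 20.00),
]

for asset in portfolio:
    print(f"{asset.category:32} -> {asset.disposition.value}")

income = sum(a.est_value_usd_per_month for a in portfolio
             if a.disposition is Disposition.SELL)
print(f"Hypothetical income from sold assets: ${income:.2f}/month")
```

The specific numbers don’t matter. What matters is that each row is a separate, deliberate decision – which is exactly what the asset framing demands.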

 

Defining privacy as an asset demands being intentional with your choices.

That word intentional is critical. When we think of “rights” we think of something we were born with – that’s where the word birthright comes from. We value rights, but mostly in an abstract sense, and often not unless we’re threatened with losing one.

By contrast, when we think of “assets” we think of something we acquire, earn, and use for our own benefit. If we don’t, we’re being wasteful. That waste can translate into actual money, yes, but we can also waste our relationships, our time, or our happiness.

In the modern world, no matter what Google or Facebook may tell you, there is no free technology. There is always an exchange of value. Most of the time, your privacy is the most valuable asset in the equation.

But now, you should realize that you are in complete control. You simply need to take it.

 


Tim Cook, maybe Apple should be paying *me* to use the iPhone X

Why would you not want an iPhone X?

I found myself asking that question a few weeks ago after my venerable iPhone 6 started to act up. I bought it not long after it was released (late 2014, if memory serves) and it finally began to give out – software freezes, battery issues, malfunctioning audio jack – likely the result of one too many drops.

Of course, the Apple store was busy (it usually is), and I had plenty of time to wander around. While I waited my turn, I couldn’t help but browse the multiple rows of multiple variants of the newly updated iPhone X.

The screens were nice, but not much nicer than what I had. The non-existent home button took a little getting used to, but it wasn’t too difficult. The days of physical buttons are coming to a close, and Apple’s software-based solution seems both reasonable and elegant. The device felt solid and well made. Beautiful, really.

Nevertheless, I wasn’t persuaded, and I couldn’t put my finger on why.

If you find yourself wondering the same thing, I intend to help you understand why the iPhone X (or Xs, or Xr, or Samsung Galaxy s10, or any other “new tech”) may no longer hold much appeal … and what might need to change to make it more interesting for you.

Let’s get started by using a simple feature on your iPhone that most people don’t know about.

Under the “Battery” menu, there is a handy feature that gives you an idea of how much time you’re spending with each function on your phone and how much of your battery it’s eating up.

Here were my personal results over the past 10 days:

  • Mail (27%): That seems to make sense. My iPhone is (primarily) a business tool.
  • Phone (24%): I actually use my iPhone as a phone. Weird, right?
  • LinkedIn (19%): Okay, that made sense too. LinkedIn is a key platform to reach people interested in my writing and research, and I spend plenty of time interacting with people there.
  • Audible (6%): I have an irrational love for all books, audiobooks included. And because I listen primarily when I visit the gym, I get a good sense for my level of fitness.
  • MyFitnessPal (3%): Speaking of “fitness,” I use this app to record my gym visits, weight, and a few other health variables.
  • Safari (3%): I guess I don’t spend much time surfing the web on my phone.
  • Messaging (2%): Huh. I thought this would be higher.
  • Skype (2%): Same.
  • Maps, YouTube, Lock Screen, Calendar (all 1%)
  • Every other app (I counted 86 that day) made up the remaining usage*

*I’m a bit suspicious about Apple Pay and Camera…I use those a lot, but I suppose they don’t use up much time or battery. Or perhaps I don’t use them as much as I think I do.

Let’s put this in simpler terms: I am happy with my iPhone 6, but I routinely use only about a tenth of what it can do.

Can you relate?

As I continued to play with the iPhone X in the store, none of the “new” features made a compelling case to crack the top ten list based on how I actually use my phone. I suspect I would use precisely all the same apps and functions in precisely the same proportions – give or take a few percentage points.

What were the iPhone X’s key selling points?

  • The bigger screen? I’m not sure how that helps my Mail, Phone, or LinkedIn experience. The screen isn’t that much bigger, and the user experience didn’t change on the larger screen – it was simply bigger.
  • No home button? That makes using Apple Pay harder for me, not easier. I wear glasses sometimes…and not others. When I tried out the facial recognition, it was wonky. That might improve over time, but I despise learning curves without tangible payoffs.
  • A better camera? I come from a family of advertisers and artists, and I have trouble telling the difference between 8- and 12-megapixel cameras on most smaller screens.

Apparently, it’s not just me struggling with the upgrade decision. Lots of people are. You might be too. A couple of weeks ago, Apple made the first announcement in years that it would miss its earnings targets, largely driven by disappointing sales of the new iPhone models.

You can listen to (or read) Jim Cramer’s interview with Tim Cook here. Frankly, there is a lot to like. Media interpretations of Cook’s interview (incorrectly, I believe) claim he “blames China” for disappointing iPhone X sales. Yes, he mentions China, but he doesn’t make excuses for Apple. Put simply, China’s smartphone manufacturers (and their products) are improving, leading to stiffer competition in the domestic Chinese market. However, Apple retains a “status product” brand in China that others have yet to match. Nevertheless, Apple needs to step it up, and Cook knows it. His solution seems laser-focused on customer satisfaction – in other words, happiness with Apple products.

In the end, will “happiness” be enough to drive a rebound in iPhone X sales?

Unlike many on Wall Street, I don’t know, and I will not hazard a guess. There are too many random factors at play. However, it does seem to me that “incremental innovation” may no longer be enough to drive the average person to upgrade their phone to the newer model … at least not as quickly as Apple would like. My friend (and market innovation expert) Arik Johnson would call Apple’s strategy here an “overshoot” of the customer’s needs. In other words, the engineers can make cool stuff faster than all of us can realize we want it.

It’s hard to disagree.

At the heart of the issue is the difference between “satisfaction” with the iPhone and “urgency” to upgrade to the newer model. I also think Tim Cook knows as much … and said as much in his interview with Cramer. We simply weren’t listening correctly.

“Happiness” truly is the key word. Here’s why.

 

What Tim Cook really said: The true value of Apple’s innovation is no longer its iPhone, it’s your data that flows through it.

Let’s examine a few of Cook’s statements for evidence:

[A note regarding the conversational language. I am choosing to leave it exactly as CNBC transcribed it. It sounds better than it reads.]

And so when I read the emails and so forth from customers, they’re tellin’ me how the Apple Watch has changed their life. They’re tellin’ me how it motivated them to be more fit, be more active. They’re tellin’ me that they discovered they had AFib. They’re tellin’ me they– found a problem with their heart that they didn’t know existed. And if they wouldn’t’ve reached out to a doctor, they might’ve died. And so these are life-changing things.

Translation:

Yes, I know Cook is referring to the Apple Watch and not the iPhone, but it’s the same idea. Apple is providing a health outcome, not simply a customer experience outcome. And to do that, Apple needs your personal data. The Apple Watch (or iPhone) is simply a measuring device. Ultimately, it is a commodity. The data is the value. Check that. Your data is the value.

By aggregating health data among millions of people, Apple will learn something about atrial fibrillation that even the Mayo Clinic doesn’t know. But if they don’t get our data, the sensor itself has very limited value.

We’ve got machine learning embedded in our silicon in our phone. You know, this allows us– not only the power efficiency to have an incredible performance in a very small package, but it allows us to manipulate this data on the phone, have the transactions on the phone, as opposed to letting them out in the world. And– you know, this– the whole privacy issue forum– we’ve always been on the right side of privacy. But the market is now moving. And so this is an incredible strength that we’ve built.

Translation:

In fairness, this is a technical issue that most people aren’t going to understand, but here is the simple idea: Because you can do more of the “processing work” inside the phone itself (and not need to send data through the internet to a central server location), it’s easier to protect private data. Any time you need to send data from point A to point B, you increase the risk that someone might intercept it.
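To make that concrete, here is a toy sketch in Python – with invented heart-rate numbers, and emphatically not Apple’s actual architecture – contrasting the two patterns. In the “cloud” pattern, every raw reading leaves the device; in the “on-device” pattern, only one derived result does, so there is far less private data in transit to intercept.

```python
# Hypothetical heart-rate readings captured on a device (numbers invented).
readings = [72, 75, 71, 143, 139, 74, 70, 141]

def cloud_pattern(samples):
    """Ship every raw sample to a server, then analyze there."""
    transmitted = list(samples)                     # all private data leaves the device
    irregular = sum(1 for s in transmitted if s > 120)
    return irregular, len(transmitted)

def on_device_pattern(samples):
    """Analyze locally; transmit only the derived result."""
    irregular = sum(1 for s in samples if s > 120)  # computed on the phone itself
    transmitted = [irregular]                       # one aggregate number leaves the device
    return irregular, len(transmitted)

for name, pattern in [("cloud", cloud_pattern), ("on-device", on_device_pattern)]:
    irregular, sent = pattern(readings)
    print(f"{name:10}: {irregular} irregular readings detected, {sent} values sent off-device")
```

Both patterns reach the same conclusion; the difference is how much raw private data had to travel to get there.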

The technical innovation allows Apple to make a legitimate claim as the “privacy” company. Cook sees the market shifting. Although people may not “care” about privacy today (or at least, not care enough to do anything about it), when they do, the shift will be quick and massive. This attitude tipping point will create an instant opportunity for Apple to capture market share.

I think Apple is not well understood in some of Wall Street. If you, for example, I think there are several people that believe the most important metric is how many iPhones are sold in a given 90-day period or what the revenue is. This goes – this is far, far, far down my list because the point is, if somebody decides to buy an iPhone a little later, if because of the battery – huge discount that we gave – they decide to hold on a little longer, I’m great with that. I want the customer to be happy. We work for them. And so, but the important thing is that they’re happy. Because if they’re happy, they will eventually replace that product with another. And the services and the ecosystem around that will thrive.

Translation:

Apple is playing the long game. Cook is more concerned about happiness with the product, and he knows the upgrade will happen eventually. But more to the point, happiness is simply another word for “engagement,” which is another way of saying “giving the iPhone your private data.”

Tim Cook knows your data is the key to Apple’s success.

 

Value proposition 101: What’s more valuable, the chicken (iPhone) or the egg (your private data)?

To find out why this shift in power is such a big deal, let’s unpack the “value proposition” for the iPhone. In non-business-school language, when you give Apple $1000 (or so) for a new iPhone, what do you get as the consumer, and what does Apple get as the seller? (Hint: It’s more than just the $1000.)

Let’s start with the basics:

Here is a summary of what you receive when you buy a new iPhone:

  • Use of all the features and functions built into the iPhone (the phone itself, email, music player, GPS, built-in apps, etc.)
  • Access to an “ecosystem” of apps and developers (aka the App Store) who create new functions for your iPhone according to Apple’s quality control rules
  • A “status” symbol – Apple is especially good at creating a desirable brand, and we shouldn’t discount that as part of the value you get for your money
  • Advice and suggestions based on your use of the phone (e.g. screen time analysis, health nudges, etc.)

Here is a summary of what Apple receives when you buy a new iPhone:

  • Retail price of the phone itself less any discounts
  • A cut of the earnings of any paid apps that you download and use
  • A cut of the earnings from cellular carriers (AT&T, Verizon, Vodafone, etc.)
  • A cut of the transaction any time you use Apple Pay
  • Revenue from other services such as iCloud backups
  • Aggregated usage data from millions of global users (e.g. financial data, purchase/preference data, location data, social data, voice data, image data, video data, and health data)

I am oversimplifying, of course. The value proposition for you specifically is different from the value proposition for everyone generally. In other words, the iPhone as “status symbol” may not be that important to you … but very important to someone else. But even given those conditions, the relative value of those “value exchanges” leads to some interesting insights.

Let’s dissect that exchange of value with just a little more nuance.

Here is a summary of what you really receive when you buy a new iPhone:

  • Use of all the features and functions built into the iPhone (the phone itself, email, music player, GPS, built-in apps, etc.) – As we saw in my example (and I’ll bet yours too, if you look carefully), many people use only a small fraction of the total functions of their iPhone. To be fair, that’s true of many technology products, not only the iPhone.
  • Access to an “ecosystem” of apps and developers (aka the App Store) who create new functions for your iPhone according to Apple’s quality control rules – Google has a strong ecosystem as well in Android, and many would argue that its platform offers even more. Apple counters with its “quality” argument. I’d call that a draw.
  • A “status” symbol – Apple is especially good at creating a desirable brand, and we shouldn’t discount that as part of the value you get for your money – Can most people tell the difference between the new Samsung, Apple and Xiaomi phones? They all have great screens, water resistance, responsive touch, and dual cameras. The differences are real, but they are invisible to most people.
  • Advice and suggestions based on your use of the phone (e.g. screen time analysis, health nudges, etc.) – Of all the value you receive, this is the one that could be truly unique and differentiated. But think about it for a moment: that’s not necessarily dependent on Apple’s device, but rather on your willingness to share data with it. More on that in a moment.

Here is a summary of what Apple really receives when you buy a new iPhone:

  • Retail price of the phone itself less any discounts – Apple is good at maintaining its price leadership position (high), but as similar new competitors hit the market, its share could erode.
  • A cut of the earnings of any paid apps that you download and use – As market share erodes, so does the incentive for the developer community to invest in it.
  • A cut of the earnings from cellular carriers (AT&T, Verizon, Vodafone, etc.) – See above.
  • A cut of the transaction any time you use Apple Pay – See above.
  • Revenue from other services such as iCloud backups – See above.
  • Aggregated usage data from millions of global users (e.g. financial data, purchase/preference data, location data, social data, voice data, image data, video data, and health data) – This data is far more valuable than you think. This is a treasure trove of insights that can lead to the creation of dozens of new spin-out businesses.

To summarize: Of all the exchanges of value, it is the access to your private data that creates the most potential benefit not only for you, but for Apple as well.

But it only works if we all share our data.

 

It’s time to flip the value proposition: Tech should start buying us.

In the near future, the only thing that will make an iPhone truly different in the eyes of its users will be your data and your willingness to share it. That willingness will be based on the benefits you see, personally, in the form of health recommendations, financial advice, and other usable insights.

The technology is simply a mechanism to gather and synthesize your data. And although Cook was able to list off a few examples, the amount of truly valuable “insights” you receive right now is quite low. Apple needs more of your data to build the next generation of products. Those products will include hardware and software, but only as a means to the end of delivering you a better quality of life.

But if Apple’s market share erodes, it will never make it.

Here’s the radical proposition: Start paying us.

Instead of you paying Apple to use an iPhone, why doesn’t Apple pay you according to the level of your willingness to share your data? If you share a little, you get paid a little. If you share a lot, you get paid more. Why not provide a mechanism and incentive for you to provide that data? Apple already has a leading position in privacy, so the arguments against sharing data securely will be real, but they will be muted.
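As a back-of-the-envelope sketch of how “share a little, get paid a little” could work – all category names and rates invented for illustration, not an actual Apple program – the payout could simply scale with the categories a user opts into:

```python
# Hypothetical monthly rates per shared data category (all invented).
RATES_USD = {
    "usage stats": 2.00,
    "location": 8.00,
    "purchases": 10.00,
    "health": 20.00,
}

def monthly_payout(shared_categories):
    """Pay the user in proportion to what they choose to share."""
    return sum(RATES_USD[category] for category in shared_categories)

print(monthly_payout({"usage stats"}))                        # share a little -> $2.00
print(monthly_payout({"usage stats", "location", "health"}))  # share a lot   -> $30.00
```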

There’s even a precedent for this.

When credit cards became commodities in the 1990s and 2000s, issuers began aggressively promoting cash-back and rewards programs. Why? To keep people using the card. The advertising image of the “Gold Card” didn’t matter as people did more of their shopping online (and no one could “see” that you had a Gold Card). There are plenty of other examples.

The counter argument to this “reverse consumerism” idea is simple, but it’s flawed.

Some technologists will argue that your individual data is not valuable, but rather it is the analysis of many users’ data that creates the value. While it may be true that analysis leads to value creation, DIY artificial intelligence is beginning to commoditize the process of analyzing data, not just collecting it. In other words, anyone will be able to do the analysis … if they have access to the data.

And that access is not a trivial problem. Take the example of the early versions of pothole reporting apps. The data revealed that potholes occur more often in rich neighborhoods. Or did the data reveal that potholes occur where people had the technology and time to report them? To carry forward our Apple example, the people who can afford an iPhone are not necessarily a representative sample of those with atrial fibrillation, and the resulting analysis may be deeply flawed.

Garbage (data) in. Garbage (analysis) out.
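The pothole story is a classic sampling-bias problem, and it’s easy to simulate. In this toy model (all numbers invented), every neighborhood has the same number of actual potholes, but reporting rates differ – and the raw report counts dutifully “show” that wealthy areas have the worst roads:

```python
import random

random.seed(42)  # deterministic toy run

# Every neighborhood truly has ~100 potholes; only the reporting rate differs.
neighborhoods = {
    "wealthy":    {"true_potholes": 100, "report_rate": 0.80},
    "middle":     {"true_potholes": 100, "report_rate": 0.40},
    "low-income": {"true_potholes": 100, "report_rate": 0.10},
}

for name, hood in neighborhoods.items():
    reported = sum(
        random.random() < hood["report_rate"]
        for _ in range(hood["true_potholes"])
    )
    print(f"{name:10}: {hood['true_potholes']} actual potholes, {reported} reported")

# The raw counts suggest wealthy areas have roughly 8x the potholes of
# low-income areas -- an artifact of who reports, not of the roads.
```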

Think about it for a minute: Not everyone has $1000 (or more) to buy a new iPhone X, but everyone has personal data … lots of it … of many different types. This could be a way to form a truly meaningful and human partnership between company and consumer based on the next generation of mutual benefit. Your data leads to new innovations that companies give back to you. Think of what this means for the total available market for Apple’s iPhone. You won’t need $1000. You’ll just need to be a human being.

Up to this point, Apple (and everyone else, by the way) was hoping that incremental “tech” innovation would be enough for you to give your data away for free. And up to this point, it’s worked.

But as privacy concerns mount, and technology commoditizes, people will stop sharing their data because they will no longer see the benefits as worth the risks. And if you stop sharing, the next generation of innovation cannot occur.

Here’s the more important question: What company will seize the opportunity to pick up the ball here and rehumanize its relationship with consumers?

If not Apple, who?

If not now, when?

Tim Cook, what do you say?

 

Post Script:

By the way, I walked out of the Apple store with an iPhone 8 – the first model Apple made with wireless Qi charging built in – because my car has a Qi charging pad, and because they discounted the price.

 


In America, your digital freedoms are what the tech companies say they are.

What do you really know about how organizations protect your private information?

Perhaps you don’t think about it that much. Your data has become such a commonly traded commodity that most people couldn’t make it through an average day without giving their private information to at least a dozen organizations.

Doubt me?

Let’s examine a simple daily routine. I’ll bet I can count at least 12 times you gave away your private data in return for a product or a service – often without realizing it.

  1. You told your voice-enabled Echo to set an alarm for you to wake up 15 minutes early. You just told Amazon when you’re awake (and ready to receive advertising offers).
  2. Over breakfast, you check your “work” email account. You just told your company’s IT department that you’re on the clock.
  3. You decide to take public transit into work, scanning your transit card when you board the bus. You just told the transit authorities you’re a passenger today.
  4. You use your Starbucks card to buy coffee. You told Starbucks what you ordered, and that it’s the same thing you’ve ordered each day for the past week. Perhaps you’re ready for something different?
  5. Oh, by the way, your Starbucks card is loaded on your Google Pay app. Now Google knows your coffee habit as well.
  6. You scan your work ID badge when you enter your building. Now your boss knows you’re on site…and that you’re a few minutes later than usual.
  7. You use a company credit card for lunch. You told the credit card company (and your employer because it’s a corporate card) that you ordered the fish and chips instead of the salad. (Your health benefits administrator might catch a glimpse of that choice as well.)
  8. You spent 15 minutes on your LinkedIn app scrolling through job postings. LinkedIn knows you’re open for new job opportunities…and if you used the company’s WiFi, so does your boss.
  9. You worked late (which your employer knows, by the way, because of your exit badge scan) and missed your bus. You decide to take an Uber. Now Uber knows where you live and work.
  10. At home, you log into Facebook before dinner and post a photo of you and a bottle of wine. That’s the fourth “wine photo” this week. You’ve just told Facebook’s algorithm that you might have a drinking problem. In the meantime, you’re likely to see more alcohol advertising.
  11. You decide you can’t find anything at home to eat and get in your car. Most modern cars are equipped with GPS tracking. If you happen to get into an accident because you were impaired, the car can notify authorities…and if a judge okays it, they might also look at those Facebook “wine” posts.
  12. But let’s assume you’re back home safely and launch Netflix. Now Netflix knows that you spend 2.75 hours per day (on average) watching television.

I could go on, but I think you get the idea. Most people think the only time their “private” data moves around is when they run their credit card. Perhaps they also realize that their smartphone tracks location data. But few people stop to think about the vast and complex digital trail they leave behind every day of their modern lives.

Put more crudely: the story of most people’s digital lives reads like a scandalous tale of unprotected, anonymous sex with as many partners as possible.

 

Your companion on every step of the digital trail

In the (limited) example above, we learned we share private data with many more organizations than we might have thought. When we share our data, we trust those organizations to use our private information for lawful purposes and deliver what they promised us. Trust is the key word. Let’s ask ourselves some questions:

  • Do I trust Amazon to send me advertising? Probably, yes. That’s what I signed up for when I bought the device, and even if I don’t think about it much, I know that’s part of the deal. But do I also trust Amazon with my sleep schedule?
  • Do I trust my employer with my email habits, arrival/departure times, web browsing history, and credit card expenses? Yes, I suppose I need to. Those are conditions of employment, and they seem reasonable. But do I trust them not to share my dietary choices during lunch with my healthcare insurer?
  • Do I trust Google (and Starbucks) with my financial information? They aren’t banks, although we often treat them like banks.
  • Do I trust Facebook (and Toyota) not to share private social media posts with law enforcement? How well do you know what is “legal” where you live?

Those are hard questions with few easy answers.

For one day, I invite you to write down each time you leave a “digital footprint” – as well as the organization(s) you are trusting with that information. If your situation is anything like the hypothetical example above, you might be surprised how many organizations you’re trusting to protect your interests.

Perhaps you cringed if you wrote down “Yahoo” or “Target” or “The Home Depot.” Here’s the other time people tend to think about organizational data practices: After a breach.

How many millions of Yahoo email addresses (and passwords) were stolen? What about Target? Home Depot? Data breaches have become so common that they blend into the background. Unless your personal financial data was stolen and you were the victim of identity theft, data privacy is sort of like life insurance: You don’t want to think about it, and you sure hope you don’t need to use it.

But unless you are one of the few people who work in the “information” industry (IT analysts, server administrators, data scientists, basically all of modern marketing, etc.), you need to admit that you don’t know how organizations handle your data. You may have suspicions – you may even be a bit jaded – but you don’t have the hard facts to decide for yourself whether those organizations deserve your trust.

That’s about to change.

The era of data privacy ignorance is over, and we have GDPR to thank for it. After I’m done helping you understand the European regulation, and what we’ve learned in the past seven (or so) months, you may not sleep as well.

Or to continue my crude analogy of data hygiene habits from earlier in the piece, you may start to use “protection.”

 

Now more than ever, it’s important that all of us understand what GDPR really is.

The most important consumer protection milestone since Ralph Nader’s 1965 auto industry exposé Unsafe At Any Speed came and went without much fanfare on May 25, 2018.

The formal name in the European Union is the General Data Protection Regulation, but it’s most commonly known as GDPR. Yes, it generated a blip of attention across the pond, but as with most things that aren’t born in the United States, Americans didn’t pay much attention. Nor did the rest of the world. Thousands of organizations, including Google, Facebook, Amazon, and Apple, all updated their privacy policies. Most of us simply clicked “accept.”

That was a mistake.

Without diving into the bureaucratic language, GDPR is a set of privacy protections for EU citizens. But it’s much more than that. GDPR is a new set of property rights—rights over the data created by all people as they walk through their digital lives: purchase records, locations they visit, surveillance of them, everything.

Specifically, GDPR guarantees:

  1. the right to access your personal data (companies cannot hide it from you);
  2. the right to own your personal data (you can request a copy of it and take it to another provider, a process called “data portability”);
  3. the right to restrict how your data may be used, and most importantly,
  4. the right to be forgotten (you can ask to be purged from the data gatherer’s records).

GDPR says that you are more than a collection of data.

GDPR is no less than a statement of basic human dignity.

There’s more to it than that, and the more you learn about the specifics, the easier it is to get lost in the technicalities. For our purposes, let’s see how GDPR works in practice.

Suppose you’re interested in a London production of Hamilton, and purchase tickets online from the theater’s website. On the day of the event, you leave your hotel (that you also booked online) and ride an Uber to the theater. Along the way, you are captured on no fewer than three surveillance cameras in the theater complex. You purchase a drink with your credit card, watch the show, and head back to the hotel after a thrilling performance.

If you had done that in New York, as an American citizen, you would have given no fewer than five organizations (the hotel, Uber, the theater, the concession vendor, and the credit card company) your private information. They can use it, in perpetuity, for whatever purpose they like—usually to remarket other goods and services to you.

(Have you ever escaped one of these mailing lists? I thought not.)

But under GDPR, Londoners have a choice. With one email to each vendor, they can ask to purge all of that data. It would be as if they never attended the show. I’m oversimplifying, of course, especially as it relates to the financial transactions, but let’s pause to think about what a massive change this is. For the first time since the beginning of the internet and the creation of your digital footprint, EU citizens (and to an extent, anyone an EU-based organization touches) have control over a new type of property—their data. Organizations and marketers now must inform them, respect their rights, and up their game if they want the right to use that asset. And because EU citizens cross borders, and because the EU will take action against violators outside its borders, global organizations are forced to comply. In other words, London citizens can ask the New York vendors to purge their data, and those US-based companies will need to oblige them.

(As an aside, I find it ironic that a Brit has more freedom regarding their data than an American going to see a play about a key figure in the American Revolutionary War. But I digress.)

Up to this point, privacy and “data ownership” have been a one-sided battle. Your data freedoms are what data gatherers decide they are. The EU just gave its citizens the data equivalent of the Magna Carta.

 

What does GDPR tell us about how well organizations handle our data?

Until GDPR passed, we didn’t really know how well organizations handled private data; we could only guess. Now that we can get hard data, I think it’s fair to ask how well EU (and global) organizations have implemented the changes in data practices and transparency at the heart of GDPR.

Here is the simple answer: Not well.

(Fair warning: What follows is about to get wonky. I’ll do my best John Oliver impression to make what follows interesting and relevant to all of us. But I don’t have a team of joke writers and graphic artists. You’ll have to make do.)

Let’s talk first about compliance. One of the primary enforcement vehicles you have (and by “you” I mean EU citizens) is what’s called a “Subject Access Request,” or SAR for short. Basically, it says that you can request that any organization holding your data return it to you within 30 days after they receive your formal request. The process for making that request must be easy to find on the organization’s website and easy to complete.

Because the process is formal, journalists have been able to test it, and researchers have been able to collect meaningful quantitative data. In other words, we’re not guessing any longer.

According to one study completed by 451 Research:

  • Only 35% of EU-based companies complied with SARs within the 30-day timeline (Here’s a handy tip: when you look at percentages, always read them the opposite way they are stated. You’ll likely learn something interesting. When we do it here, this means a majority of companies, some 65%, did not comply within 30 days.)
  • About 50% of non-EU based companies complied on the same test (Really? I wouldn’t have guessed that. I love it when research surprises me.)
  • Retailers perform the worst; 76% failed the test (Remember our opposite trick? Only one in four retailers takes respecting your privacy seriously enough to comply with the law.)
  • Financial service firms are some of the best; “only” 50% failed (I worked for a bank; those folks are wound tight. But remember, the “best” is still a failure rate equal to a random coin flip.)
  • The National Pharmacy Association (UK) found a huge spike in patient data breaches after GDPR implementation. In fact, one of the largest fines levied against a GDPR violator was the Portuguese hospital Centro Hospitalar Barreiro Montijo (CHBM). In two separate violations, regulators assessed €400,000 in fines. Financial identity theft will be nothing compared with genetic identity theft. I’d think twice (or three, or four times) about sending away for one of those genetic tests.

451 Research also found that while these organizations generally understand the impact of and need for GDPR, actual compliance rates are a better measure of leadership priorities. In other words, believe what they do, not what they say. From the basic statistics above, it should come as no surprise that most global firms would fail a GDPR audit.

Let’s make the point simpler: When you interact with most organizations through the course of your day, they are demonstrably not committed to your privacy. They are committed to their goals.

 

Hey wait! That’s not fair!

Large organizations are quick to point out that given the amount of data created compared to the number of violations that occur, they are doing quite well handling your data.

It’s a “reasonable” point of view.

Let’s run a simple thought experiment using our hypothetical person as a guide. This person created a sample of 12 “steps” in a digital “footprint” throughout the day. (The actual number could be much higher, but let’s keep the number conservative.) On planet Earth today, there live roughly 7 billion people, about half of whom lead “digital” lives. Let’s use another conservative number – 3 billion digitally-connected people – and multiply that by the 12 data points in each person’s digital footprint. That’s 36 billion data points per day, or over 13 trillion data points in a given year. That’s not the real number, of course (the real one is much higher), but it illustrates the scale of the data management challenge.
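If you want to check that arithmetic yourself, here is the entire thought experiment in a few lines of Python (the inputs are the same deliberately conservative guesses used above, not measured figures):

```python
# Back-of-the-envelope scale of the data management challenge.
digital_population = 3_000_000_000   # conservative: roughly half of ~7 billion people
points_per_person_per_day = 12       # the sample digital footprint from earlier

daily_points = digital_population * points_per_person_per_day
yearly_points = daily_points * 365

print(f"{daily_points:,} data points per day")    # 36,000,000,000
print(f"{yearly_points:,} data points per year")  # 13,140,000,000,000 (~13 trillion)
```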

If you consider the number of “mistakes” (breaches, mishandling of data, improper access, etc.) divided against the total number of data points, the proportion of privacy violations is vanishingly small. More than that, they argue that given enough time, organizations will adjust to the new reality of GDPR (at least in the EU), and these incidents will become even less common. C’mon. It’s only been seven months. They’ll get better, right?

I’m suspicious for three reasons.

  1. First, it’s not as if GDPR emerged from nowhere. Global organizations had months to prepare for the law’s passage. Since May 2018, they have had more than six months to make adjustments.
  2. Second, the breaches reported are only the breaches we see, not all the breaches there are. Ask any security expert, and they will tell you that the average consumer doesn’t see most of what happens. That’s by design (it’s embarrassing) and by fatigue (if they told you everything in technical detail, you’d stop listening).
  3. Third, large organizations’ data “scientists” misunderstand how people perceive the risks involved. To them, an error rate of 0.0001% is so small as to be insignificant. They call people who worry about breaches “foolish” and “irrational,” rolling their eyes at the tiny chance something might happen as a result of a breach. I would argue there is nothing irrational about fearing an outcome that may be unlikely, but would be catastrophic if it were to occur. Identity theft (and genetic theft) both fall into that category. (For more, I would encourage those “scientists” to reread The Black Swan and anything by Kahneman and Tversky.)

People worried about privacy breaches are not irrational, but we are being taken for fools.

 

How not to be taken for a fool (anymore).

If you are a modern individual, taking advantage of the bounty of technological wonders that make your life easier, your privacy is an illusion. All of your data is available. You gave it away (in most cases, for free). You are relying on the good intentions of these organizations not to take advantage of you. You’re also relying on those same organizations to protect that data from others with lesser intentions. They are clearly failing. We are clearly fools.

If the results of GDPR audits are any indication, you may not have much time to make changes in your “data hygiene” before you begin to experience negative consequences of a hack or other intrusion. Every time you engage in digital behavior, you’re rolling the dice. Snake eyes might be rare, but they happen. But it’s not realistic for most of us to go “off the grid” and completely sever our ties to the digital world.

We need a realistic answer, and we have one: Decentralization.

The saving grace (for those of us outside China and a few other countries) is that no one organization has more than a sliver of your data. Amazon may have some purchase history, but not all of it. Apple may have information about your app use. Netflix understands your television habits. Your health clinic has some biological data. Google knows where you’ve (physically) been. Toyota knows how you drive. You can’t hide your “adult movie” habit from Firefox.

Many of these organizations wish you would centralize more of your activities. They receive a “greater share of wallet” from each consumer. You (presumably) receive greater incentives and benefits. It’s like the practice of insurance bundling on steroids. But I think you now can see the risks of having all your digital eggs in one basket.

The privacy of any one aspect of your life might be a myth, but only you know the entire picture. Let’s explore some practical steps you can take to keep it that way:

  • Take steps to keep your digital life compartmentalized. If you use an Apple phone, use a Google web browser. Don’t store your health records on your Android phone. Don’t share browser data between devices.
  • Don’t use single login services (such as “login with Facebook”). Yes, it’s easier. And yes, you created a backdoor for Facebook … as well as anyone who hacks your account.
  • Take extreme care before sending away for a genetic test from anyone other than a large, established, medical institution. And if you do, pick one that is not your primary clinic.
  • Learn how to turn off location services, facial recognition, and listening services (Alexa, Siri, Cortana, etc.) when they are not in use.
  • Split your financial life across more than one institution. For example, don’t use a credit card from the same bank that holds your checking account.
  • If you live in the European Union, learn how to file a GDPR request. Here’s a link with some tips.

It seems to me organizations are in a precarious position. If they come clean with their data management practices (and show their warts), they risk a negative perception in the marketplace versus those organizations who choose to be less transparent. But those who choose to be opaque risk catastrophic breaches of trust when the inevitable occurs. It’s a lose-lose.

That’s why I am tempted to advocate for a wider adoption of GDPR-style legislation, worldwide, to level the playing field. In lieu of that, I think there is a market opportunity for white hat hackers to expose privacy violations and issue “trust ratings” alongside “consumer ratings” on every website. (Will organizations pay for that? If they’re doing well, yeah, probably.)

Until that day comes, it may seem like these efforts are an extreme form of paranoia, but for anyone who has suffered identity theft, they are sensible and reasonable. Think of decentralization the same way submarine designers think about sealable bulkheads. If one compartment springs a leak, it doesn’t sink the entire ship.

But more to the point, because you are the only one who holds all the cards, you have power. No “one” can be trusted with all of your data, but perhaps “every” one can be trusted with just a little of your data – at least until we have better safeguards.

###

A special note: Lorenza Maria Villa, an Italy-based GDPR Consultant & Data Protection Officer, was kind enough to review a draft of this article and provide feedback. I am in her debt. Grazie!

###

About Jason Voiovich

Jason’s arrival in marketing was doomed from birth. He was born into a family of artists, immigrants, and entrepreneurs. Frankly, it’s lucky he didn’t end up as a circus performer. He’s sure he would have fallen off the tightrope by now. His father was an advertising creative director. One grandfather manufactured the first disposable coffee filters in pre-Castro Cuba. Another grandfather invented the bazooka. Yet another invented Neapolitan ice cream (really!). He was destined to advertise the first disposable ice cream grenade launcher. But the ice cream just kept melting!

He took bizarre ideas like these into the University of Wisconsin, the University of Minnesota, and MIT’s Sloan School of Management. It should surprise no one that they are all embarrassed to have let him in.

These days, instead of trying to invent novelty snack dispensers, Jason has dedicated his career to finding marketing’s north star, refocusing it on building healthy relationships between consumers and businesses, between patients and clinicians, and between citizens and organizations. That’s a tall order in a data-driven world. But it’s crucial, and here’s why: As technology advances, it becomes ordinary and expected. As relationships and trust expand, they become stronger and more resilient. Our next great leaps forward are just as likely to come from advances in humanity as they are advances in technology.

If you care about that mission as well, he invites you to connect with him on LinkedIn. If you’re interested in sharing your research, please take the extra step and reach out to him personally at jasonvoiovich (at) gmail (dot) com. For even more, please visit his blog at https://jasontvoiovich.com/ and sign up for his mailing list for original research, book news, & fresh insights.

Thank you! Gracias! 谢谢!

Your fellow human.

##

Source notes for this article:

IT Pro (UK)

I’ve embedded most of the links in the article itself, but I found myself continually referring to this UK site for a comprehensive run-down of GDPR news. If you’re an IT professional, I’d keep a close eye on their aggregation. They provide helpful links to the original reporting as well as concise summaries of the implications.

Let me put it a different way: Because the “carrots” aren’t working, the EU is bringing out the data privacy “sticks.” That means violators are getting fined. Don’t think you’ll get found out? Well, tell that to the lawyers teaming up with artificial intelligence software to develop automated scanners of privacy policies on your website. I would bet money the nastygrams are on their way.

If you’re a consumer, IT Pro will give you a sense for what’s going on in non-technical language. Fair warning: You may not like it.

Categories
Audience Empowerment Information Management Long Form Articles

“Alexa, play some music” isn’t the only time Amazon is listening to you.

Amazon’s voice recognition software only listens when you say the word “Alexa,” right?

That’s what most Echo and Dot buyers think because that’s what the advertising leads you to believe. As if by magic, your Alexa-enabled device “wakes up” when you say its name. But think about that for a moment. After you say the magic word, your Alexa-enabled device must listen for your request, interpret it, and respond. Just how much does Amazon really listen to inside your home? And how much did you really know about how voice technology worked when you unboxed your Alexa-enabled device?

(Fair warning: this is about to get awkward.)

You may have assumed your Echo or Dot listened and responded using the small computer housed inside the device itself. But that doesn’t make sense. The on-board computer simply isn’t powerful enough. And besides, Amazon continues to update the device. It must do this from a centralized server location. That’s the only place where there is enough computing power not only to interpret your request, but also to update Alexa with new “skills” from third-party vendors. That’s how your device now knows how to order a pizza. Amazon needed to partner with Domino’s Pizza (in the United States) to develop that interface.
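To make that client-server round trip concrete, here is a deliberately simplified sketch in Python. The endpoint URL, payload format, and field names are all invented for illustration – Amazon’s real protocol is more complex – but the shape of the exchange is the same: audio goes up, an interpreted answer comes back, and all the heavy lifting happens on the server.

```python
import json
import urllib.request

# Hypothetical cloud endpoint -- a stand-in for a real voice assistant's API.
VOICE_API = "https://voice-api.example.com/v1/interpret"

def handle_utterance(audio_bytes: bytes) -> str:
    """Ship recorded audio to the cloud and return the text to speak back.

    The device itself does almost nothing: it records, uploads, and plays
    the reply. The natural language processing (and any storage of the
    recording) happens in the data center, not in your living room.
    """
    request = urllib.request.Request(
        VOICE_API,
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read())
    return result["reply_text"]  # e.g. "Today will be sunny with a high of 75."
```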

Now that you know that your voice recordings are being sent via the internet to a centralized location, you may have assumed Amazon would need to store that data for some period of time – for example, to use its Natural Language Processing algorithms to interpret your request for a weather report (or to buy a pizza), gather that information, and then send it back to your device for it to speak the response. The transaction happens so quickly that you assume Amazon would have no reason to keep the recording of your voice any longer than a few seconds. Besides, is that even feasible? Think of how much storage space Amazon would require for all of the audio files. Is there really a database somewhere storing all your “requests for weather reports”?

Those are good questions.

Imagine for a moment that you were curious about what, precisely, your Amazon Echo or Dot device recorded in your home. Now that you know it’s listening, you’d like to know what it heard. To satisfy that curiosity and put your mind at ease, you ask Amazon to send you a copy of the data your device has collected since you bought it.

After a few weeks, you receive your audio files from Amazon. Imagine your horror as you open the attachments and begin listening to the recordings: A discussion of what to have for dinner, two children arguing over a toy, a woman talking to her partner as she gets into the shower. You weren’t really sure if Amazon would keep recordings at all. And if they did keep recordings, you thought your Echo or Dot recorded only your explicit requests.

But it gets worse. You don’t recognize any of the voices. With equal parts relief and horror, you realize you are listening to someone else’s Echo recordings!

 

As it turns out, all of your assumptions about voice technology were wrong.

This story isn’t a thought experiment. It is precisely what happened when a German citizen requested his data files from Amazon under the European Union’s GDPR regulation. He expected to get a list of the products he had purchased, how he paid, and other commercial profile data Amazon compiled. Unlike my scenario, he wasn’t expecting audio recordings. He didn’t own an Alexa-enabled device. He shouldn’t have been getting any recordings, yet there they were.

According to the story originally reported by the German investigative magazine c’t, Amazon admitted the mistake, citing human error in sending him the wrong file.

(The statement fails to mention if the company notified the person whose data was shared. Also, Amazon was only compelled to comply with the request for data because the requestor was a European Union citizen. If you’re an American, or from anywhere outside the EU, good luck.)

In case any of the impact of the story escaped your notice, let’s take a moment to summarize what this all means in simple terms, shall we?

  1. Your Alexa-enabled device listens to you more than you think it does.
  2. Your Alexa-enabled device not only listens to you, but also records those sounds.
  3. Your Alexa-enabled device sends those recordings to an Amazon data center, where they not only use natural language processing algorithms to decode your speech and complete your request, but they also store those files in a centralized database for future use.
  4. At that data center, Amazon – one of the best data management companies on the planet – has a human process to respond to your data request.
  5. As the investigative reporting shows, this human process is prone to error.

To put it in even simpler terms, if you own an Amazon Alexa-enabled device, Jeff Bezos could be the least creepy person listening to you right now.

Are you okay trading your privacy in your home for a weather report?

Or asked a different way: Is that weather report worth someone at Amazon listening to:

  • an argument with your spouse?
  • your kids playing?
  • a “tough” visit to the bathroom?
  • you and your partner having sex?

Are you okay with a random person (who received your data file by mistake) listening to that? Are you okay with a hacker listening to that? Your health insurance company? The police?

I used to believe this was a “boogieman” issue – that worst-case scenarios like the one described didn’t really happen. I used to believe people who rang the warning bell were at best, premature fools, and at worst, fear-mongering opportunists. I used to believe those things, but I was wrong.

The European Union’s 2018 GDPR consumer protection law cast a light under the bed and showed us all that the boogieman is real. And he’s listening to you right now.

 

The tyranny of menus, and why “voice” is such a big deal.

To understand why companies are investing so much in voice recognition technology, and why they risk invading your privacy, you have to understand how objectively poor today’s “digital” experience is and how it got that way.

Voice is the natural way humans interact with others and their environment. But in the early days of the internet, interactive voice technology was neither advanced enough nor cheap enough to use outside of a few advanced laboratories. The most cost-effective voice technologies of the day were “telephone menu tree” systems that infuriated even the most patient callers.

If a “natural” interface wasn’t ready for the birth of the internet, what was the next best alternative?

Cascading menus.

Borrowed from library science, the menu structure is a software engineer’s dream. It’s logical, orderly, and hierarchical. Unfortunately, menus are not how people naturally interact with information. Menus do not mimic how our brains work. Menus are not easy to use.

Menus are terrible user interfaces for most everyday functions.

As just one example, think about this simple use case: I would like to play Prince’s “1999” on my iPhone. Here are the menu-driven steps I can take:

  1. Unlock the home screen (if I have not authorized biometrics, I need to input a passcode).
  2. Tap the iTunes app to open it.
  3. Tap the “Artists” list.
  4. Scroll to “Prince” and tap the artist name.
  5. Scroll to “1999” and tap the song name.
  6. Adjust the volume as needed.

Six steps. Multiple taps and scrolls. Complex, artificial, robotic.

Or, consider this voice-based alternative:

“Siri, play Prince’s 1999.”

Four words. One voice command step. Simple, natural, intuitive.
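If you think like a software engineer for a moment, the difference is easy to see in code. Here is a toy sketch (the menu structure and the crude “intent parser” are both invented for illustration): the menu costs the user one action per level of the hierarchy, while the voice command collapses the entire path into a single utterance.

```python
# A toy cascading menu: each level of hierarchy costs the user one step.
MENU = {
    "Artists": {
        "Prince": {"1999": "prince-1999.mp3"},
    },
}

def menu_navigate(path):
    """Walk the menu tree one tap or scroll at a time."""
    node = MENU
    for step in path:           # one user action per level
        node = node[step]
    return node

def voice_play(utterance):
    """A crude stand-in for real natural language processing."""
    # "play Prince's 1999" -> artist "Prince", song "1999"
    artist, song = utterance.replace("play ", "").split("'s ")
    return MENU["Artists"][artist][song]

print(menu_navigate(["Artists", "Prince", "1999"]))  # three separate actions
print(voice_play("play Prince's 1999"))              # one natural sentence
```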

Menus are so common, we almost forget how unnatural they are. Menus not only dominate the user interfaces of smartphones, computers, tablets, and websites; we find them everywhere – kiosks, airport terminals, medical devices, automobiles, and home appliances.

Think about it: That infuriating menu in your Toyota Camry, your CPAP machine, or your GE refrigerator is an ugly holdover from the early days of Gopher and ARPANET … just like the QWERTY keyboard is an ugly holdover from the early days of mechanical typewriters.

That’s why voice is such a big deal.

Menu interactions may be behavioral (and in many ways superior to opinion-based evidence), but they are still untethered to our true thought processes. Voice interactions are different – and not the type of robotic voice commands you give your car; those are simply audio menus, and they are terrible. No, the true potential of voice is unlocked with Natural Language Processing algorithms that learn to interpret and respond to natural human speech patterns. The best of them are learning our cadence, pitch, tone, accent, and volume – and most importantly, our intent.

In a menu-driven world, our devices aren’t listening to us, they are waiting for an input. However, when a device is listening, it doesn’t need to wait to respond. It can make suggestions to you in real time, just as another person would do in a conversation. That’s the quantum leap voice technology promises: For the first time in human history, machines can truly interact with us.

But as we’ve seen, that’s not how people think voice technology works. Because we are so used to machines waiting for our commands, we’re not conscious that many of them are now listening to us go about our daily lives.

 

I’m not sure “voice” can be trusted. Yet.

Contrary to the image created by advertising of a fully conversational human-computer interface (a la the Star Trek “computer” or “J.A.R.V.I.S.” from Marvel’s Iron Man), if you try to hold a “conversation” today with Alexa, Cortana, Siri, or Google, you will be disappointed.

Most people who use voice technology quickly learn its limitations and adjust their expectations. In fact, most people use Alexa-enabled devices to give them weather reports or to play on-demand music. That’s it.

But if voice technology is to improve, its developers need to listen to and analyze many more interactions. Their argument for listening is simple: As consumers get better at interacting with voice technology, the technology will learn and improve. As the technology improves, consumers will expand their use of it. It’s a positive feedback loop that will (eventually) give birth to a real “J.A.R.V.I.S.” And when that happens, you’ll love it.

Perhaps. But until that day comes, you’re giving up your privacy for a weather report.

At this point, it’s fair to argue that we’ve given up our “privacy” for all manner of technological benefits and services. True, but up to this point those technologies operated on your explicit command. No one forces you to use Google Maps. No one forces you to share personal details on Facebook. No one forces you to buy from Amazon.

But voice is different.

Voice is a form of biometric data – something that is uniquely yours. Additionally, voice technology invades your privacy in an insidious way, always listening, always recording, and always learning more. You can see why organizations want voice analysis so desperately. It’s finally able to break into your “inner self” versus relying on your opinions or waiting for your command.

Voice technology is the ultimate behavioral study that you didn’t realize was happening.

Here’s the bottom line: Until organizations demonstrate they can be trusted with our private data, I’m not sure they deserve to have us give it to them for free. What’s more, they are unlikely to stop collecting your data on their own. As we’ve discussed, they need that data to improve their voice technology, and you’re willingly giving it to them. Why would they stop? They simply hope you aren’t paying attention.

It’s time that changed.

Here are a few easy things you can do today to start you on the path to reasserting privacy in your own home:

  • Think hard about whether a voice-enabled device is right for you. That includes products from Amazon, Google, Apple, Microsoft, and others. Honestly, I don’t care whether you choose to use one or not. Just don’t think it’s not listening to you pee. It is.
  • If you do choose to use a voice-enabled device in your home, understand that your home conversations are no longer private. Consider that every statement you make inside the comfort of your home could have the potential to end up in the hands of advertisers, your government, the police, or on Google.
  • Think twice about connecting your voice-enabled device to home automation and security systems. “Smart home” technology is a known source for hacks and privacy intrusions.
  • Search out and read privacy statements before you purchase a voice-enabled device. I’m not saying, “don’t buy it,” I am simply saying, “know what you’re buying.”
  • If you happen to live in the European Union, learn how to request your voice data file. It’s easy. Here’s how.
  • If you are in the United States, send a message to your representative and ask for their stand on privacy issues. That’s easy too. Here’s how.
  • I could go on for many other countries. You get the idea. The notable exception is China. They think about privacy differently.
  • Last, but not least, learn how to turn off listening when you don’t want to be heard.

Sorry, but tech companies will not protect your privacy out of the goodness of their hearts. It is up to you, as the consumer, to take action.

Your voice is yours. Keep it that way.

 

About Jason Voiovich

Jason’s arrival in marketing was doomed from birth. He was born into a family of artists, immigrants, and entrepreneurs. Frankly, it’s lucky he didn’t end up as a circus performer. He’s sure he would have fallen off the tightrope by now. His father was an advertising creative director. One grandfather manufactured the first disposable coffee filters in pre-Castro Cuba. Another grandfather invented the bazooka. Yet another invented Neapolitan ice cream (really!). He was destined to advertise the first disposable ice cream grenade launcher. But the ice cream just kept melting!

He took bizarre ideas like these into the University of Wisconsin, the University of Minnesota, and MIT’s Sloan School of Management. It should surprise no one that they are all embarrassed to have let him in.

These days, instead of trying to invent novelty snack dispensers, Jason has dedicated his career to finding marketing’s north star, refocusing it on building healthy relationships between consumers and businesses, between patients and clinicians, and between citizens and organizations. That’s a tall order in a data-driven world. But it’s crucial, and here’s why: As technology advances, it becomes ordinary and expected. As relationships and trust expand, they become stronger and more resilient. Our next great leaps forward are just as likely to come from advances in humanity as they are advances in technology.

If you care about that mission as well, he invites you to connect with him on LinkedIn. If you’re interested in sharing your research, please take the extra step and reach out to him personally at jasonvoiovich (at) gmail (dot) com. For even more, please visit his blog at https://jasontvoiovich.com/ and sign up for his mailing list for original research, book news, & fresh insights.

Thank you! Gracias! 谢谢!

Your fellow human.

 

Categories
Information Management Long Form Articles Rehumanizing Consumerism

A Fun Parable About Leprechauns and Information Manipulation

Ask most people what comes to mind when they hear the words “information manipulation” and you’ll likely get only one response: Censorship. While certainly a form of information manipulation, it is hardly the only one. It’s not even the most effective technique. Censorship’s two cousins—information friction and information flooding—are much more common and vastly more effective. In this article, we’ll travel to China to learn how both information friction and information flooding help the government manage its sprawling bureaucracy. Then we’ll hop a plane back to the United States to see how both techniques are at work in our culture as well. Finally, we will examine the information professionals’ responsibility to recognize information friction and information flooding at work against (or in) their organizations.

##

Information manipulation is a provocative topic. It stirs strong emotions—closing our minds to the underlying methods before we have a chance to discover how it works. That’s unfortunate. Unless we understand information manipulation, we cannot address it. To help explore the issues at play without triggering our natural defense mechanisms, I’ll start with Linda Shute’s version of the story of Clever Tom and the Leprechaun (Scholastic, 1988).

Once upon a time…

…Clever Tom found himself walking in the meadow by his home in rural Ireland when he came across a leprechaun propped up against a fencepost fast asleep. Tom couldn’t believe his eyes! His grandparents had told him stories about the fairies, but he assumed they were just fairy tales, not actual fairies. But this was one in the flesh—an honest to goodness leprechaun!

He knew what that meant. If he could capture the leprechaun, the fairy creature would be obligated to lead him to a buried treasure. For a poor farm boy, this was the chance of a lifetime. Tom seized the opportunity…and the leprechaun. (The leprechaun was sleeping after all. It wasn’t that hard.)

Startled awake, the leprechaun immediately understood his mistake. Sighing, he agreed to lead Tom deep into the forest to the tree under which a treasure was buried. Tom was overjoyed. This was what he had always waited for! Tom could finally leave the farm and find adventure in the big city! But in his haste, Tom forgot a shovel and a wheelbarrow. There was no way he could dig up the treasure. Even if he did, there was no way to transport it back to his home.

Tom racked his brain; there had to be an answer. And then, he had it! From his pocket, Tom extracted a bright red ribbon. Tying it around the base of the tree, he knew it would guide him back to this exact spot. Before he released the leprechaun, however, Clever Tom showed why he earned his nickname: he extracted a promise from the fairy (who, being a fairy, could not tell a lie) that the leprechaun would not remove the ribbon from the tree. Satisfied with the positive response, Tom released the leprechaun and raced home to gather his supplies.

When Tom returned, his heart sank. No, the leprechaun had not removed the ribbon. He promised he wouldn’t, after all. But he did tie an identical ribbon on every other tree for miles in every direction. Clever Tom wasn’t the clever one after all.

 

Three Forms of Information Manipulation

This story has several morals, but let’s reimagine those lessons for our purposes. Clever Tom and the Leprechaun is a story about information manipulation in its three forms.

Did the leprechaun prevent people from telling their stories? No. He did not censor the information. Although church officials at the time of the original tale in the 19th century often discouraged these types of tales, the stories nonetheless got out.

Did the leprechaun make the buried treasure difficult to find? Yes! You needed to satisfy a certain set of conditions—and the first was capturing a crafty and quick leprechaun—to learn this information. In this case, Clever Tom lucked out when he found the leprechaun sleeping. This is information friction—deliberately making facts hard to find.

How did the leprechaun prevent Tom from collecting the treasure? He did not remove it. In fact, he hid it in plain sight…among thousands of other ribbons. That’s information flooding—hiding critical facts in an ocean of irrelevant ones.

As it turns out, the leprechaun might have a new career as an official in the Chinese government.

 

The People’s Republic Of China

When most people in Western countries think about the “Chinese” internet, they’ve probably heard of products and services strikingly similar to their U.S. counterparts: Alibaba (Amazon), Xiaomi (Apple or Samsung), or Sina Weibo (Twitter). There are critical differences, of course. Chinese counterparts filling the same market niches serve a far larger group of people. China has four times the number of citizens as the United States. More pointedly, those products and services operate under the aegis of the Chinese government, submitting to its guidelines regarding information monitoring and censorship.

Those who know more about the Chinese internet (often those who have traveled or worked in mainland China) criticize the government for its “crackdowns” on “dissidents” and its rampant censorship of any information unfavorable to the communist party. While there is evidence of these actions, that view is limited in scope.

Does anyone in the United States truly know what happens inside the so-called Great Firewall?

It turns out, someone does. Gary King, Weatherhead University Professor at Harvard University, and his team at the Institute for Quantitative Social Science are a prolific bunch, and they have focused their considerable research talent on answering exactly that question (gking.harvard.edu). King’s team began with the assumption that the Chinese government copies American internet and technology companies, and then controls (via censorship) their activities to keep a watchful and constant eye on each citizen.

What they discovered casts considerable doubt on our assumptions. Even how they learned it was ingenious. King’s team tracked information posted to popular Chinese social media sites and then watched what happened. It might take only a short time for a computer or human censor to act on a piece of content, but that delay was measurable. If they could reverse engineer the censorship priorities, they could better understand the government’s purpose in manipulating information.

At the risk of vast oversimplification of a sophisticated approach, here are their conclusions:

  1. Censorship is real, but it’s limited. Yes, some types of content were routinely censored. That content included posts critical of the censors themselves, certain hot-button issues, and “adult” content (yes, exactly what you’re thinking). What surprised them was what was not censored. Criticism of the government itself routinely was left alone. As was most commentary on social issues, and even foreign news. That was surprising. If censorship was not the go-to method, what was it?
  2. Information friction played a larger role. Remember, information friction refers to the process of making access to data just a little bit more difficult. King’s team found that less-desirable information proved slower to access (Westerners will understand this well: virtual private network—or VPN—services often are quite slow). Internet users value speed over most everything else; they will choose the faster source over the slower one most of the time.
  3. As did information flooding. King’s team also found evidence of the so-called 50-cent army, named for the small amount of money its members are paid for each pro-government post on social media. These posts crowd out other content, forcing all other information off the scrolling, timeline-oriented social media feeds we’re all used to. In other words, people could scroll through hundreds of posts to find the one they want, and may do so on occasion, but will not do so consistently. In this way, friction and flooding work together to drown out content the Chinese government deems undesirable…and conversely, promote content it wants people to know.

From this study, King’s team could infer the priorities of the Chinese government’s information-gathering and management machine. In a country of nearly 1.4 billion people, there is no way to proactively monitor all government officials and activities in its vast bureaucracy. It needs information, and social media posts are an excellent way to get it. Some critics counter that the idea of “Big Brother” (a Western, not a Chinese idea, by the way) encourages self-censorship. But this defeats the purpose. If people can’t talk, the government won’t know. Hence, outright censorship is rarer than we might think. If the government doesn’t like something, friction and flooding are far more effective ways to manage the situation.

However, there is one thing that will trip the censors: Collective action. King’s team discovered that you can complain all you like—in fact, that’s encouraged—but if you want to organize your friends to act for change, you are likely to be censored in some creative ways. Yes, your post might be removed, but it is more likely to be dead-ended. In other words, you may be able to publish your post…but your friends may never see it. You get to say what you like and “get it off your chest”, but not make changes. That’s the government’s job. Not yours.

Clearly, the Chinese government has a different set of priorities than U.S. or Western governments, but are they really that different? Do information friction and flooding work (or work differently) in the West as well?

 

The United States of America

The United States does censor information. The government can classify certain types of information for security purposes, but those instances are comparatively rare. However, the government does indeed make certain information harder to get (friction) and bury information in a sea of less salient data (flooding). We can see that at work at all levels of government, from local officials requiring citizens to visit their government office during business hours to request information in person, to the highest officials sending myriad news releases (or dozens of late-night Tweets) to obscure important new facts.

So yes, at a certain level, information friction and flooding are part of the Western government toolbox. However, unlike China, the U.S. government faces pushback from both ordinary citizens and organized groups (e.g. the American Civil Liberties Union) who push for open records laws and easier access to information. Many information professionals have submitted a FOIA (Freedom of Information Act) request and are familiar with the process.

If governmental data were all that was under discussion, we could end here. It is not. Unlike in China, information friction and flooding are common techniques of Western organizations. We rarely identify them as such, and therefore fail to recognize and mitigate their impact. Let’s dissect common techniques to illustrate the impact of information friction and flooding in the United States.

  1. Friction: Catch and Kill. This technique is used routinely by tabloid news organizations. When a powerful or wealthy person or organization wants the details of a story “buried,” they may approach a tabloid organization. The tabloid will then approach key subjects with knowledge of the story, offering them payment for exclusive publishing rights. Once the contract is signed (always including a strict non-disclosure clause), the tabloid will exercise its right not to publish the story. Yes, other persons or organizations might have supplemental details to the story, but the tabloids are smart. They “lock up” (or “catch and kill”) the critical sources of information, thereby making stories of embarrassment or wrongdoing much more difficult to investigate.

Other examples of information friction include:

  • Demanding a formal request submission for “free” information,
  • burying detailed webpages in confusing menu structure,
  • using “nofollow” code to stymie search engines,
  • limiting access to information in native languages,
  • and saving text documents as images to prevent easy machine-readability.
  2. Flooding: Ratings Reductions. Celebrities, restaurants, and other service professionals are often the victims of organized groups of people conspiring to “down rate” their product or service on popular social media ratings sites (Amazon, Yelp, Netflix, Uber, eBay, etc.) using a “flood” of negative, one-star reviews. There is nothing explicitly illegal here, although these services try hard to make this technique difficult to execute. However, determined groups often easily circumvent these protections.

Other examples of information flooding include:

  • Releasing large amounts of data at one time (often during a weekend or over a holiday),
  • presenting all pieces of information as equally valuable and of equal weight,
  • following the letter of the law on mandatory disclosures and releasing thousands of pages of poorly formatted documents (also an example of friction…in fact, the two often work well together.)

 

What You Can Do About Information Friction and Information Flooding 

I wrote the original version of this article for an online publication specifically targeting so-called “Information Professionals.” They include legal librarians, academics, data scientists, and research journalists.

Frankly, I was surprised by how surprised they were regarding the sophistication of information manipulation. If the professionals are confused, what hope does the average consumer of information have to sort out what’s happening?

Paradoxically, I think it is easier for consumers to find and counter information manipulation than it is for professionals working inside organizations. Think about it: are you going to risk losing your job by calling out bad behavior? Yes, whistleblowers exist, but the average worker has a mortgage to pay and health insurance to keep (this is a big deal in the United States, foreign readers).

Here are a few ways to know when you could be a victim of information friction:

  • Are you being asked to submit a formal request for information that should be publicly available by law or statute?
  • Most websites are easy and intuitive to navigate … but when you get to the “disclosures” section, does the navigation turn into a labyrinth of dead links and confusing language?
  • Does your search engine find zero results?
  • Does the information exist, but only behind a login or paywall?
  • Is information available only in one language, when the audience clearly speaks multiple languages and lives in multiple countries?
  • Is the information available, but saved as an un-tagged “picture file” (e.g. a PNG or JPG) to make it difficult for auto-translation or text-recognition tools to work? (See the short sketch after this list for how a couple of these signals can be checked automatically.)
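A couple of these friction signals are even machine-checkable. Here is a minimal sketch using only Python’s standard library; the sample HTML is a made-up stand-in for a real disclosures page, and a real-world audit would need to handle far more edge cases:

```python
from html.parser import HTMLParser

class FrictionScanner(HTMLParser):
    """Count two machine-checkable friction signals in a page's HTML:
    links marked rel="nofollow" (hidden from search engines) and images
    used where machine-readable text would normally be."""

    def __init__(self):
        super().__init__()
        self.nofollow_links = 0
        self.images = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "nofollow" in (attrs.get("rel") or ""):
            self.nofollow_links += 1
        elif tag == "img":
            self.images += 1

# Made-up disclosures page, for illustration only.
sample_html = """
<a href="/annual-report.png" rel="nofollow">Annual disclosures</a>
<img src="/report-page-1.png" alt="">
"""

scanner = FrictionScanner()
scanner.feed(sample_html)
print(f"nofollow links: {scanner.nofollow_links}, images: {scanner.images}")
```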

None of these techniques is necessarily underhanded. There could be good (and legal) reasons for putting up roadblocks to finding information. Just be careful when you see them: they are ways for organizations to claim they are providing you information while making it difficult for you to get it. They know that most people won’t try. They can have their cake and eat it too.

Perhaps even more common than information friction is its doppelganger, information flooding. Here are a few ways to know when you could be a victim of information flooding:

  • Do you need to wade through hundreds (or thousands) of pieces of information to find what you’re looking for?
  • Is information released over a weekend or holiday?
  • Is critical information buried in the middle of a larger data set, not at the front? (In other words, not in journalists’ “inverted pyramid” style?)
  • Does your information come in the form of a flood of late-night tweets or Facebook posts?

Again, organizations could argue that it is not their job to be journalists, nor is it their responsibility to cull out the most important information – potentially embarrassing themselves in the process.

If you see these techniques at work, you may or may not be manipulated. But I think it’s better to understand them, recognize them, and question them.

Caveat emptor.

 

Not A New Story

If all of this seems frustrating, take heart. We’ve been struggling with friction and flooding for a long time. Linda Shute retold an earlier story, The Field of the Boliauns, originally written as part of an anthology of Celtic fairy tales by Joseph Jacobs in 1892. (In the original story, the leprechaun hadn’t fallen asleep, he had passed out. Stories are always true to the morals of their times, and the late 19th century was the heyday of the temperance movement.) He based his work on earlier oral tradition dating back to medieval Ireland and England. Those tales made it across the English Channel by way of Roman Legionaries recounting stories of Julius Caesar and his contemporaries in the Roman Senate in the first century before the common era.

In other words, information friction and information flooding are nothing new. Recognizing and mitigating their impacts has been a game of cat and mouse we’ve been playing for the better part of two millennia. That’s not to say we should give up the struggle, but rather that we’re in good historical company.

 

About Jason Voiovich

Jason’s arrival in marketing was doomed from birth. He was born into a family of artists, immigrants, and entrepreneurs. Frankly, it’s lucky he didn’t end up as a circus performer. He’s sure he would have fallen off the tightrope by now. His father was an advertising creative director. One grandfather manufactured the first disposable coffee filters in pre-Castro Cuba. Another grandfather invented the bazooka. Yet another invented Neapolitan ice cream (really!). He was destined to advertise the first disposable ice cream grenade launcher. But the ice cream just kept melting!

He took bizarre ideas like these into the University of Wisconsin, the University of Minnesota, and MIT’s Sloan School of Management. It should surprise no one that they are all embarrassed to have let him in.

These days, instead of trying to invent novelty snack dispensers, Jason has dedicated his career to finding marketing’s north star, refocusing it on building healthy relationships between consumers and businesses, between patients and clinicians, and between citizens and organizations. That’s a tall order in a data-driven world. But it’s crucial, and here’s why: As technology advances, it becomes ordinary and expected. As relationships and trust expand, they become stronger and more resilient. Our next great leaps forward are just as likely to come from advances in humanity as they are advances in technology.

If you care about that mission as well, he invites you to connect with him on LinkedIn. If you’re interested in sharing your research, please take the extra step and reach out to him personally at jasonvoiovich (at) gmail (dot) com. For even more, please visit his blog at https://jasontvoiovich.com/ and sign up for his mailing list for original research, book news, & fresh insights.

Thank you! Gracias! 谢谢!

Your fellow human.

 

##

Note: A version of this article was originally published on Online Searcher in their September/October 2018 edition.

Categories
Long Form Articles Rehumanizing Consumerism

Favorite > Best

Caption: Vintage Dinosaurs and Cavemen Playset, Circa 1981, on eBay

##

The 1981 Dinosaurs and Cavemen Prehistoric Action Playset, by DFC Toys, was my favorite Christmas present of all time. As a six-year-old boy, it had everything I wanted: 100 individual pieces, a diorama plastic map, three movable volcanos, two dozen angry-looking cavemen in varying aggressive poses, and all manner of sizes and colors of dinosaurs. It was also expensive. (It cost $50 if memory serves; my mother told me there was no way I was getting it.) My parents had me completely fooled when I unwrapped it on Christmas morning. I’ll never forget it.

Here’s the thing: The Dinosaurs and Cavemen Prehistoric Action Playset was objectively awful. I’m no scientist, but a few things stand out in retrospect:

  1. The most obvious: Dinosaurs and cavemen did not co-exist.
  2. Presumably, if they did, there also should be cave “women.”
  3. Neither dinosaurs nor cavemen would appreciate living in the shadow of currently-erupting volcanos. That’d get toasty. But perhaps that explains why none of the cavemen wore shirts.
  4. And I’m pretty sure boiling lava isn’t green. Again, I’m no scientist, but I’ve watched the Discovery Channel. I think the product designers knew lava was red in 1981.

I could go on, but none of that mattered. Just like the little kid in “A Christmas Story” and his “official Red Ryder, carbine action, 200-shot range model air rifle, with a compass in the stock and this thing that tells time” (aka, a sundial), I was overjoyed. And, I will proudly say, I never came close to poking my eye out.

I’ll bet many of you have a similar experience: A favorite holiday or birthday gift that transcends its objective quality and touches some deeper memory. I can’t help but smile when I think about it almost 40 years later. It wasn’t the “best” physical gift I ever received – not even close. But it’s still my favorite.

If you stop reading here, just take a moment and remember that gift, whatever it was. Linger on it for a minute. I think memories are the best Christmas gifts of all, and you should give yourself one right now. I’ll wait.

##

I’ve been thinking a lot this year about the difference between “best” and “favorite” – and what it means for organizational leaders (especially of smaller organizations and startups) trying to compete in an increasingly haves-and-have-nots business world.

Here are a few different ways to ask that same question:

  • As a small, independent coffee shop, how do you compete with Starbucks?
  • As a boutique clothier, how do you compete with Levi’s?
  • As a mid-tier US-based electronics manufacturer, how do you compete with Foxconn?
  • As a taxi driver, how do you compete with Uber?
  • As just about anyone, how do you compete with Amazon?

Each of the organizations I mentioned is, from an operational excellence perspective, the “best.” Put another way, is your corner coffee shop really better than Starbucks? Does it produce a more consistent cup of coffee? Does it offer more variety? Does it offer better Wi-Fi? Are the restrooms cleaner and better-stocked? Think objectively for a moment. It’s not. Likely, it’s not even close. The reason is clear: Resources. The amount of capital major organizations can pump into Operational Excellence (OpEx) initiatives will always dwarf what their smaller competitors can invest.

But what about “traditional” differentiation? Choosing a niche customer base, or delivering a unique offering? Isn’t that the way your cute little neighborhood coffee shop can separate itself?

If you stumble on something profitable, the likelihood is high that Starbucks (or Amazon, or another large organization) will decide to put you out of business. I used to think there were limits. Now, I’m not so sure. I am reminded of a story I read last week in my local paper: Amazon will sell you a fresh-cut Christmas tree and deliver it to your front door within a couple of hours. As the Boy Scout tree farm, how do you compete with that? Is the average person really going to drive out to some poorly lit abandoned parking lot, only to overpay for a dry tree cut two weeks ago, while scratching up the top of her $40,000 SUV on the way home? This isn’t the Griswolds’ Christmas. The world is moving on. Even hyper-local business models are being disrupted by hyper-large organizations with the capital to exploit every feasible profitable niche, no matter how small.

Small, niche markets no longer offer the protection they once did to small organizations. Software allows their larger competitors to make every market profitable.

If that’s true, what are your options?

##

I had conversations in the past few weeks with smaller organizations that struggle with this dilemma. In one case, a small regional retailer managed to score as objectively “the best” in every possible category – selection, service, delivery, add-ons – everything! It hasn’t been enough to sustain growth in the face of stiffening competition. In another case, a smaller manufacturer considered investing nearly a half-million dollars in a new integrated ERP and e-commerce portal to attract one-off business that is leaking to Amazon. They were ready to pull the trigger until they spoke with others in their professional network who had made similar investments. Those colleagues saw no measurable return on their investment within three years.

How can that be?

You need to consider the context. That small manufacturer’s (much) larger regional competitor hired 100 people for the same type of OpEx project. And that competitor did it to try to compete with Amazon’s incursions into their previously-niche and highly-technical B2B market. $500,000 may seem like a lot of money (and it is to a small business), but it equals the salary of merely five of the 100 people at the larger company. It’s a rounding error to Amazon.

Put simply, the OpEx gap between the haves and have-nots in the economy has grown so vast that many small organizations have virtually no reasonable chance to catch up. Many of those businesses must content themselves with scratching out a living on the margins (plenty of independent consultants do that), or with positioning themselves for acquisition by a larger firm (essentially, refusing to play the game).

But for those who want to remain independent, why do they expend so much effort trying to compete with Fortune 500 organizations on OpEx initiatives? Why do they still try to “be the best?” A few easy reasons come to mind:

  1. In some industries – especially heavily-regulated ones – “best practices” are defined by regulatory bodies. Those practices would seem to level the playing field, but I can tell you from personal experience that the business impact of those regulations is quite different when you have a team of lawyers and regulatory experts on staff. In these cases, “best” is simply table stakes. As only one example of many, you’re not in business as a medical device manufacturer without it. Striving for “best” seems like a differentiator in a regulated market, but it is not.
  2. The cost of operational excellence technology drops consistently, making it seem attainable to the smaller organization with fewer resources. As an example, when I began my professional career, a complex e-commerce website might cost $500,000 to build and deploy. Ten years later, you could build the same website for $50,000. Today: $5,000. It’s tantalizing to think of your own $10 million organization with the advantages of using the same operational software as your $10 billion competitor. Here’s the problem: It’s not the same technology. To continue my example, the best websites (the ones that create today’s “one-click” customer expectation) still cost $500,000 (or much more) to create. It’s yesterday’s technology that’s cheap; technology that seems dated the instant you go live. I use websites only as an example because they are what customers see. Most OpEx investments are delusions that often fail to generate their expected return on investment.
  3. Despite all those rational reasons, I think the biggest driver of continued OpEx investment is simple psychology. OpEx is easy to understand, easy to measure, and easy to operationalize. Yes, OpEx is difficult, but you can see all of the moving parts. You simply need to execute. It feels good to accomplish an objective like that. I know. I’ve done plenty of them. Who doesn’t want to be the best? What CEO is going to stand up in front of a group of employees and say, We’re going to place fourth! That’s not very inspiring.

Here’s the hard truth: If your name isn’t on the Fortune 500 list, your organization’s chances of being the “best” are essentially zero.

Here’s the harder truth: You need to stop trying to be the best, so that you can be something better.

##

The quest for favorite instead of best.

Admitting that you have no chance of being the best, and worse, that you should stop trying, might seem antithetical…un-American…un-Chinese…stupid…fatalistic…wimpy…or words even a honey badger like me is not comfortable saying in mixed company. It’s really none of those things.

Let’s tear apart the underlying logic of “best” for just a moment.

My dad had a way of talking about “best” that I still use: People can tell the difference between good and bad, but they can’t tell the difference between good and best. In other words, you don’t need to be “best” to compete with the “best”; you simply need to be “good enough.” The “good enough” standard, for most organizations, is indeed attainable. The trouble is that they invest too much energy chasing every last “sigma” of operational excellence at the expense of something far more important for their future. Instead of spending that money chasing the last smidgen of incremental operational improvement, let’s aim higher.

There’s only one thing better than best: Favorite.

Operational excellence is challenging, but it is also logical, objective, easily measurable, and therefore, easily improvable and predictable. Favorite, on the other hand, is no less challenging, but it is emotional, subjective, difficult to measure, and therefore, devilishly resistant to improvement and prediction.

And yes, all large organizations want to be your favorite, and some even manage to do so … for a little while. But if you look at their public disclosures and balance sheets, you’ll realize immediately that “favorite” is not their goal; operational excellence is, usually by a factor of 10 to 1. They do that for obvious reasons: Investors understand OpEx expenses. They can measure them. They can evaluate them. They can reward (and punish) management for them.

Therein lies the opportunity for smaller organizations and startups. Without the pressure of public disclosures (for most), they can settle for “good enough” OpEx investments and redirect the surplus into FavEx investments.

Unlike “best”, the beauty of “favorite” is that it can take many different forms. There is no one way to be someone’s favorite. Let’s have a look at some examples of how to make that type of investment:

  • Quirky: Trader Joe’s – As a grocer, Trader Joe’s has a limited selection, small stores, and a bevy of operational challenges. But people drive past other “better” grocery stores to shop there. Why? Because Trader Joe’s is their favorite.
  • Funny: National Library of Scotland – Being “favorite” isn’t limited to businesses. The National Library of Scotland pokes fun at its culture’s own odd phrases and colloquialisms. For example, check out their Twitter feed to learn what “Flumgummery” means.
  • Purposeful: Love Your Melon – You also don’t need to be a “national” brand. Love Your Melon makes hats (and other cool weather accessories) and donates 50 percent of its profits to fighting pediatric cancer. I know several people irrationally attached to their beanie. And yes, they’re nice hats, but are they the “best” hats? It doesn’t matter. They’re your favorite.

All of this is not to say becoming a “favorite” is easy. It is not. In many ways, this effort will be far more challenging. Most organizational leaders feel that emotional excellence is so difficult to measure and so uncertain in its return on investment that the effort would be better spent on a new piece of equipment. Favorite is squishy. (I’ve heard that particular criticism more than once from more than one CEO.)

But tell that to the fledgling Zappos in 1999. Nick Swinmurn and Tony Hsieh operationalized “favorite” with an unyielding focus on human-centric service. They innovated new ways to hire and train friendly people, they treated them well, and they let them be themselves. Yes, they measured plenty of things. And yes, they needed to deploy OpEx initiatives, but they focused on “favorite” measurements, accepting that some of the human stuff needed to go on instinct.

In 2009, Amazon purchased Zappos for nearly $1 billion.

Not bad for “squishy,” huh?

Executing FavEx programs will take at least as much dedication and effort as your most complex OpEx programs. It will focus on:

  • articulating a vision for favorite that excites you as a leader (if you’re not committed, no one else will be)
  • modeling the behaviors necessary to achieve that vision versus relying on command and control
  • fostering cultural change, and allowing “favorite” to evolve from the inside out
  • allowing your employees to help you define what favorite means, to share the mission with you (in other words, letting go)
  • finding ways to measure customer joy, not simply customer satisfaction
  • building trust and authentic human relationships

Investing in “favorite” is not the same as investing in “trendy” (a common confusion). Trendy investments flare fast and burn out quickly. True “favorite” relationships start out modestly, building slowly over days, weeks, months, and years. They only begin to pay dividends much later. There’s no way to rush them.

But here is the payoff: OpEx investments may pay off more quickly, but they diminish in value over time as others duplicate your success for themselves. FavEx investments pay off slowly, but grow in value over time, eventually becoming inimitable.

Favorite is the ultimate competitive advantage. As a smaller organization, it may be your only one.

 

###


Source notes for this article:

For this article, I decided to embed all of the appropriate links inline. This is more of a thought experiment and a reminiscence on 25 years of OpEx and FavEx initiatives. I am also careful to shield the identities of the people I speak with … unless they tell me it’s okay. Regardless, for an article like this, I think those specifics were unnecessary.

Categories
Audience Empowerment Information Management Long Form Articles Rehumanizing Consumerism

Using Google Maps costs more than you think.

Your creepy stalker ex-boyfriend knows you just left the gym. I’m sure he’s over you.

Google Maps is free, isn’t it?

It seems like a question with an obvious answer, doesn’t it? Of course, Google Maps is free. I’ve never been asked to enter my credit card to look up a new address. There is no subscription plan. There is no pay wall.

But just because you are not exchanging money to use Google Maps does not mean you are not exchanging value. I intend to show you just how much. You might not like it.

We’ll use Google Maps to help us walk through a basic use case and better understand the value exchange, but there are plenty of other examples. Let’s begin.

  1. You’re traveling from Minneapolis to Omaha (a long drive, by the way). By the time you arrive, you’re likely to want something to eat. You open the Google Maps app, search for “Omaha, Nebraska,” and then search for “nearby restaurants.”
  2. If you haven’t given the Google Maps app on your phone the permission to use your location information, it will ask you for that. It’s obvious, isn’t it? But think about that for a moment. Google Maps doesn’t need to know where you are to show you restaurants in Omaha. There are no “terms and conditions” to read. There is only an “accept” button. You click it.
  3. Google Maps shows you a list of restaurants, reviews, and distances. Remember, you gave it permission to know where you are right now. That’s cool, huh? Assuming you find a restaurant you like, Google Maps can give you turn-by-turn driving directions with live traffic updates … and with connections to some other apps, and based on your estimated arrival time, even put your name on the wait list for a table so that you can walk right in.

Pretty amazing, isn’t it?

For many of us, this use case is so routine that it’s almost unremarkable. But for anyone used to car trips with the family as a kid in the 1980s (and the inevitable and horrifying gas station restaurant food), Google Maps delivers something close to magic.

In fact, the experience is so magical that we often don’t think beyond that simple interaction.

Let’s do that, shall we?

 

Here’s the part of the value exchange that you might not see.

  1. What restaurants did Google Maps show you? Unless you searched for a specific restaurant, you likely saw only those restaurants that paid for contextual advertising on that search. (At the very least, you saw the paid listings first, and on a small mobile screen, you may not have scrolled past them.) No, a human being didn’t make the decision to show you one restaurant versus another. An advertising algorithm did. Someone at a “top result” restaurant decided they wanted to appear when you typed in the “restaurants in Omaha” search.
  2. To run that advertising algorithm, Google needed to aggregate historical user data so that the restaurant would know how much to pay to advertise against those searches. The advertiser does not see your individual data when you run your search (nor will they at any time), but Google uses that data to judge demand for any specific search. That’s how Google makes the vast majority of its revenue: advertising. By using Google Maps, you are improving that advertising engine with both your individual and aggregate data.
  3. In a similar way, Google uses your data to plot driving/transit/footpath options to your destination. At the aggregate level, Google uses that data to generate live traffic reports. There’s no Google helicopter flying over Omaha as traffic reporters did in the 1980s. Their solution is more complicated, but it’s quite a bit safer and more effective: If Google notices a lot of users on the highway, and also notices that they are all moving slowly, it adjusts its arrival-time estimates. (A toy sketch of that logic follows this list.)
  4. All of Google’s products and services interconnect. That’s why you’ll see Google Reviews for those restaurants. (Actually, Google sometimes gets in antitrust trouble for not showing you competitors’ ratings systems.) Most people aren’t going to stop searching for a restaurant to submit a public comment to a regulator complaining that they’re not receiving Yelp reviews alongside the Google Reviews. People are busy. It’s understandable. But part of the value you’ve just exchanged is the ability for Google to lock out an alternative service and keep that revenue for itself.
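To make the traffic piece of item 3 concrete, here’s a toy sketch of the aggregate logic in Python (the function name, numbers, and thresholds are all my own invention; Google’s real models are vastly more sophisticated):

    # Toy traffic estimator: average the speeds reported by phones on a
    # road segment, then stretch the arrival estimate to match.
    def adjusted_travel_minutes(segment_miles, free_flow_mph, reported_speeds_mph):
        if not reported_speeds_mph:  # no phones reporting: assume free-flowing traffic
            return segment_miles / free_flow_mph * 60
        avg_speed = sum(reported_speeds_mph) / len(reported_speeds_mph)
        avg_speed = max(avg_speed, 5)  # floor to avoid divide-by-near-zero gridlock math
        return segment_miles / avg_speed * 60

    # 200 phones crawling at 12 mph on a 10-mile stretch posted at 65 mph:
    print(adjusted_travel_minutes(10, 65, [12] * 200))  # ~50 minutes, not ~9

The point isn’t the arithmetic; it’s that your phone is one of the 200 sensors making the estimate possible.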

Okay, so you’ve exchanged more value than you thought for the use of Google Maps, but there’s still no money out of pocket for you. You’re still winning, right?

In fact, most of you might agree that more contextual advertising is better advertising. Additionally, you might understand why Google needs to collect individualized data so that it can aggregate it and deliver useful services back to you. What’s more, someone needs to pay for all this, and you’re glad it’s not you. Advertising, especially if it’s good advertising, is a pretty small price to pay. And the anti-competitive concerns? They’re a bit beyond your pay grade. Other people will take care of that stuff. You’re hungry. And Google Maps solved your problem.

At this point, I can’t disagree. The logic holds up. But how about we take just one more step? After we’re done, I want you to ask yourself if you’re still comfortable using Google Maps.

 

There’s a bigger market for “you are here” than you thought.

Here’s the first thing to understand about most location apps: Once you give them permission to track your location, they’ve got it until you turn it off. That means when you clicked “Accept” that one time, most apps have the authority (and ability) to collect information about you while you go about other activities. In fact, that one app may have shared location data with other apps … again, all with your “permission.”

So. What happens next?

If you’re like most people (me included, until recently), the answer was “I don’t know.”

Last week, the New York Times answered that question. They certainly weren’t the first, but they absolutely have the largest reach, and their journalists know how to tell a good story. You can read the full article for yourself, but let me quote directly the crux of their findings:

At least 75 companies receive anonymous, precise location data from apps whose users enable location services to get local news and weather or other information, The Times found. Several of those businesses claim to track up to 200 million mobile devices in the United States — about half those in use last year. The database reviewed by The Times — a sample of information gathered in 2017 and held by one company — reveals people’s travels in startling detail, accurate to within a few yards and in some cases updated more than 14,000 times a day.

These companies sell, use or analyze the data to cater to advertisers, retail outlets and even hedge funds seeking insights into consumer behavior. It’s a hot market, with sales of location-targeted advertising reaching an estimated $21 billion this year. IBM has gotten into the industry, with its purchase of the Weather Channel’s apps. The social network Foursquare remade itself as a location marketing company. Prominent investors in location start-ups include Goldman Sachs and Peter Thiel, the PayPal co-founder.

Unlike some critics of the New York Times (their criticism is sometimes justified, sometimes not), I don’t see a “big business conspiracy” around every corner. Most business people are people too – they’re your colleagues, siblings, parents, and friends. They’re also customers and users of these products. Most businesses are simply trying to earn an honest profit providing a reasonable service in an increasingly competitive world.

But the fact remains: now that these apps have your permission (and there are a lot of apps that do this), and they have the location data your phone generates, they create something of value. That value creation is like crack cocaine to the average marketing VP, chief executive, or controller. In many ways, location data is some of the best data to have because it is not based on your opinion (likes, shares, comments) but rather it’s based on your behavior. As our grandparents taught us: Actions speak louder than words.

And wow are we ever speaking with our actions. You may wonder if you are “interesting enough” to warrant deeper interest from Google (I did). But when you consider the vast array of potential interested parties, you can see how you just became the most interesting person in the world. Let’s look at just a few of the reasons other parties are interested in your location data:

  • Retailers (and investors in retail operations) are interested in actual foot traffic, not “estimates” of foot traffic. By merging mobile phone data with real-time foot traffic, retailers know the quality of potential customers as well as the quantity of them.
  • Employers love location data. It helps them reconfigure building layouts to optimize placement of both individuals and teams. On the darker side, it also allows employers to know how often you use the restroom, if you and a colleague are having, ahem, a relationship, or how long you spend tethered to your desk.
  • The days of ambulance-chasing lawyers are long gone. With location data, they can send ads to any mobile phone in the emergency room of your local hospital.
  • Law enforcement is a special case. They can subpoena your mobile phone records for a variety of legal reasons, but usually with probable cause. But with the technology available to advertisers and others, law enforcement can watch known high-crime areas and merge that data with publicly-available mobile phone data – data that you freely provide.

That’s just a few. I could go on.

I’ll bet that even with those few examples, you’re getting a sense that the market for “you are here” is broader than you ever thought possible. Yes, you’re getting a “free” service, but you’re also trading away more than you bargained for.

Even at this point, I can see an argument that goes like this: Well, this is aggregated data, right? If it’s aggregated with millions of other people (or at least dozens of others), picking me out of a crowd is difficult. I can still blend in, right?

 

Russian hackers, Nigerian princes, and your stalker ex-boyfriend can pick you out of the crowd.

Read this and let it sink in: You are not anonymous.

Just because Google’s data center is secure, and its partners are bound by its terms of service, does not mean that either it or its partners are invulnerable to a coordinated hacking attempt. As you may recall, it wasn’t Target Corporation’s IT department that caused its massive 2013 data breach; it was a third-party contractor with lax controls.

Just because you’re a United States citizen doesn’t mean the rules are the same in other countries. Frequent visitors to Russia are “pretty sure” they’re being tracked. Frequent visitors to China are “absolutely sure” they’re being tracked. And once those governments have your unique device identifier, they can identify you when you return home.

Just because Google (or Facebook, or whomever) has a “policy” about data privacy doesn’t mean it will stay that way. Silicon Valley’s spinning moral compass doesn’t give me a warm fuzzy. Google might be in the public eye, but what about that concert app you downloaded? Did you read their policy? Probably not. Do you think that comparatively tiny company cares as much as Google about privacy? Probably not. Do you think they have the resources Google does to protect your data? Probably not. Put even more simply: Policies are policies, not laws.

Here’s the most important part: just because one source of data is aggregated doesn’t mean there isn’t individual data in the public record. This is the real beauty of the New York Times reporting. With a few simple steps, their journalists and technicians – with no supercomputing power or complex artificial intelligence – could link aggregated user behaviors to public databases (housing, political donations, etc.) and reverse-engineer individual people from the aggregated data.
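To see how little machinery that takes, here’s a minimal sketch of the idea (the data, names, and addresses are invented; the Times’ actual methodology is linked in the source notes below):

    from collections import Counter

    # "Anonymous" location pings: (device_id, place, hour_of_day)
    pings = [
        ("device_42", "123 Elm St", 23), ("device_42", "123 Elm St", 2),
        ("device_42", "Acme HQ", 10),    ("device_42", "123 Elm St", 1),
    ]

    # A public record (housing, voter rolls, donations): address -> name
    property_records = {"123 Elm St": "Jane Doe"}

    # Wherever a device sits most often overnight is probably home...
    overnight = [place for device, place, hour in pings
                 if device == "device_42" and (hour >= 22 or hour <= 5)]
    home = Counter(overnight).most_common(1)[0][0]

    # ...and a home address plus a public database is a name.
    print(property_records.get(home))  # -> Jane Doe

No device ID stays anonymous for long once you know where it sleeps.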

Well, fuck.

 

Let’s ask ourselves some tough questions about location services, shall we?

  1. Is using location services worth the invasion of your privacy?
  2. Is using location services worth getting fired from your job?
  3. Is using location services worth getting your private relationship details exposed?
  4. Is using location services worth getting your financial data stolen?
  5. Is using location services worth being stalked?
  6. Is using location services worth your children being followed to school?

I wish this were simply hyperbole or an academic exercise. I wish I could believe tech CEOs when they tell us that “everybody wins” when we all use these location-based services. I, for one, am tired of “winning” like that.

I wonder what happens when consumers start to think that they’re paying too much for “free” services. I wonder what happens to tech company valuations. I wonder what happens when consumers start opting out.

 

If you’re not ready to “opt out” just yet, here are a few things you can do to protect yourself:

  • Learn how to turn off location services. Here is how you do it on Apple and Android.
  • Clean up unused apps on your phone. If you haven’t opened an app in a year, delete it and all its data. That won’t prevent it from using data it has already collected, but it will prevent you from providing more. And more apps than you think collect location data.
  • Buy a “Prince box” for your phone. What’s a “Prince box,” you say? When I visited Paisley Park (the home and studio of the late artist), I could take my phone…but I needed to carry it around in a locked, RFID-proof case. You can buy one too. Here’s an option.
  • If all else fails, turn your phone off when you don’t want to be monitored. Don’t simply put it to sleep.
  • Start signing up for services that allow you to monetize your data. These services are, for the most part, not ready for prime time, but they allow you to take some control over your data and, more importantly, begin to train consumers that their data is an asset to be monetized. I like this one.

 

Worried about your customers getting wise to you? Here are some things you can do as a business to respect your consumers’ rights:

Finding a tech-workaround isn’t the answer. It will simply erode trust and postpone the day of reckoning. Forward-thinking companies (Apple, for one) are already deploying these techniques to stay on the right side of all of us:

  • Be transparent. Tell people why they are seeing an advertisement, why they need to share their location, and for how long you need it. Instead of leaving location services on, turn off tracking automatically when the explicit need is met.
  • Allow people to rate the quality of what they’re seeing and the service you provide in real time. You’ll have better data on your service that you can use to improve it.
  • Give people the option to pay you for your service easily and securely. YouTube Red (aka YouTube Premium) does this to allow consumers to opt out of ads. (Despite that, they still track you, so I call it a half-right idea. I’d pay for YouTube Platinum for them to avoid tracking me altogether.)
  • Destroy identifying individualized data as it is created. If you never have it, you’re never tempted to abuse it, and it can never fall into the wrong hands (either a hacker or an acquirer).
  • Default to an “opt in” versus “opt out” philosophy. It’s better for you anyway; you’ll know that your customers are truly interested in your service. (Bluntly, I wish this worked better than it does. CAN-SPAM for email and the National Do Not Call registry do this already, although they haven’t reduced my inbox spam nor have they reduced junk calls to my mobile phone.)
  • Use your clout to lobby for GDPR-style legislation in the United States. It’s not perfect, but it has a place, and it’s going in the right direction.

Consumers are getting angry. They may not be able to put their finger on it, but consumer advocates, journalists at the New York Times, and writers and researchers like me are ripping back the curtain and directing consumer rage where it belongs.

If you’re a smart consumer, you’ll protect yourself and take action. It is only a matter of time before the abuse of location services puts your life and livelihood at risk.

If you’re a smart organization, you’ll get in front of this. Because in the not-too-distant future, “treating people as you would like to be treated” might be your most important product.

 

###

About Jason Voiovich

I am a recovering marketing and advertising executive on a mission to rehumanize the relationship between consumers and businesses, between patients and clinicians, and between citizens and organizations. That’s a tall order in a data-driven world. But it’s crucial, and here’s why: As technology advances, it becomes ordinary and expected. As relationships and trust expand, they become stronger and more resilient. Our next great leaps forward are just as likely to come from advances in humanity as they are advances in technology.

If you care about that mission as well, I invite you to connect with me on LinkedIn. If you’re interested in sharing your research, please take the extra step and reach out to me personally at jasonvoiovich (at) gmail (dot) com. For even more, please visit my blog at https://jasontvoiovich.com/ and sign up for my mailing list for original research, book news, & fresh insights.

Thank you! Gracias! 谢谢!

Your fellow human.

Source notes for this article:

 

How The Times Analyzed Location Tracking Companies

Want to know how the New York Times crunched the data to pull out individual people among aggregated data? This article walks you through the process. It’s transparent, and more than a little creepy.

 

Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret

This is the article itself. Again, it’s not the first, but it’s the best of all those I reviewed. It’s worth the 20-30 minutes it will take you to read it carefully.

 

Finally, I think it’s only fair to provide you direct links to Google’s policies and safety tips. Here are a couple of good starting points:

https://safety.google/privacy/ads-and-data/

https://policies.google.com/privacy#infosharing

I picked on Google, but please understand, they are at least open about it. Additionally, they are highly visible. There are plenty of people (like me) who will pounce if they make a change. But the thousands of lesser-known apps from no-name developers? Good luck.

Categories
Audience Empowerment Information Management Long Form Articles Rehumanizing Consumerism

The Bullshit Algorithm

If you use Swiffer WetJet, you are a puppy murderer.

But wait, you say. How could I? P&G would never lie to me about the safety of their Swiffer® WetJet™?! But of course. All of those “chemicals.” How could I be so stupid!

Yep. You’re a cold-blooded murderer. Wasn’t it lucky that you can tell your story on Facebook? Now, no one else will need to suffer what your family has suffered. You can warn us. Why don’t you go ahead?

Well, okay. I’ll tell you…

I recently had a neighbor who had to have their 5-year old German Shepherd dog put down due to liver failure. The dog was completely healthy until a few weeks ago, so they had a necropsy done to see what the cause was. The liver levels were unbelievable, as if the dog had ingested poison of some kind. The dog is kept inside, and when he’s outside, someone’s with him, so the idea of him getting into something unknown was hard to believe. My neighbor started going through all the items in the house. When he got to the Swiffer Wetjet, he noticed, in very tiny print, a warning which stated “may be harmful to small children and animals.” He called the company to ask what the contents of the cleaning agent are and was astounded to find out that antifreeze is one of the ingredients.(actually he was told it’s a compound which is one molecule away from anitfreeze).Therefore, just by the dog walking on the floor cleaned with the solution, then licking it’s own paws, and the dog eating from its dishes which were kept on the kitchen floor cleaned with this product, it ingested enough of the solution to destroy its liver.

Soon after his dog’s death, his housekeepers’ two cats also died of liver failure. They both used the Swiffer Wetjet for quick cleanups on their floors. Necropsies weren’t done on the cats, so they couldn’t file a lawsuit, but he asked that we spread the word to as many people as possible so they don’t lose their animals.

Source: Snopes.com 

Of course, this is a hoax. You may have seen it make the rounds last year…perhaps as recently as a few months ago. But doesn’t it sound convincing? It should. As a professional persuader, I can tell you why. This story has lots of goodies (17, in fact, but more on that later). Let’s recap the top four:

  1. The helpless and innocent subject: Who is more innocent than the family dog? He doesn’t know better. It’s your job as the owner to protect him from harm, and you failed.
  2. The details: It wasn’t just “a dog”, it was a “5-year old German Shepherd”. It wasn’t just that the dog died, it was the sequence of events of walking on the floor, licking his paws, eating from dishes kept on the floor.
  3. Seemingly scientific facts: The writer was brilliant here. If he or she had given the chemical formula, most people would have buzzed right by it. But “one molecule away from antifreeze” … now that’s scary!
  4. Corroborating evidence: The neighbor’s cats also died of similar circumstances (liver failure plus Swiffer WetJet usage). Just in case you thought this might be an isolated incident, your pet is in danger too!

If I were trying to damage the sales of the Swiffer WetJet product line, I could hardly do better. Yes, stories like this one made the rounds before the rise of Facebook, but their impact was much more limited. In the time it took misinformation to spread, the product owner would have the time to craft and spread its own rebuttal. If the situation were serious enough, it could run advertisements. It could update its product packaging. It had options.

But today, stories like this one “go viral” so quickly and with such ferocity that P&G had no time to mount a defense. Yes, Snopes will (eventually) debunk the story, but that can take weeks. By then, sales suffer, and consumer trust erodes.

Isn’t it funny? Wasn’t the promise of data-driven search engines and social media algorithms that they would amplify the truth and protect us from misinformation by tapping the wisdom of crowds? The fact is that they do not. And cannot. Because that is not what they are designed to do. At the heart of every social media algorithm is a fatal flaw that values persuasion over facts.

Social media platforms (as well as search engines) are not designed for truth. They are designed for popularity. They are bullshit engines.

To understand how we got here, we need to take a step back and understand bullshit.

“You lied to me.” “It wasn’t lies. It was just bullshit.”
My dad loved this movie. This is a classic scene.

Best. Academic. Paper. Ever.

Harry G. Frankfurt, professor of philosophy at Princeton University, asked the obvious question in 2005:

“One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. Each of us contributes his share. But we tend to take the situation for granted. Most people are rather confident of their ability to recognize bullshit and to avoid being taken in by it. So the phenomenon has not aroused much deliberate concern, or attracted much sustained inquiry. In consequence, we have no clear understanding of what bullshit is, why there is so much of it, or what functions it serves.”

One of the oddest things about this paper, and I highly recommend you read the entire 20 pages, is the thorough disassembly of a topic everybody knows exists, but no one seems to understand.

Frankfurt made bullshit a technical term.

Here’s the crux of it: Most of us tend to think of the world in terms of facts and fictions, truths and lies. As we become more sophisticated, we understand people can have different perceptions (read: opinions) about the value truth brings or the harm lies cause. However, those opinions exist on a different level than the “objective foundation” of fact and fiction.

Professional persuaders know this is not the way the world works.

The purpose of much of the communication we see – between people in our private lives, our consumer relationships, and the political sphere – is not to illuminate the truth, but rather to persuade. In fact, a mix of truths, half-truths, and outright lies is a great way to do it. Real facts are messy, incomplete, and often contradict each other. Outright lies can be fact-checked and objectively disproven. On the other hand, a skilled bullshitter can weave a tidy and convincing story based on a mix of facts and fictions. Facts are indeed objective facts to the bullshitter, but their value is not their factual basis, but rather their ability to persuade. A half-truth or lie might do just as well. The entire spectrum is at the bullshitter’s disposal, where his non-bullshitting competitor only has the facts. It’s not a fair fight.

Bullshit, aka “truthiness.”

Frankfurt makes the case that bullshit has a place in everyday life. Without it, we would be paralyzed with uncertainty and unable to make the simplest decisions and tend to the most basic relationship tasks. (Are you really going to tell your husband his haircut looks stupid?) Bullshit is as natural as…well…bullshit.

So, if bullshit is natural, and perhaps even necessary, where’s the problem? We’ve been dealing with bullshit since the instant we developed culture and language. What’s different now?

The Search Engine, Social Media, Data-Driven (Bull)shit Storm

The internet generally, and social media specifically, is not a truth platform; it is a popularity platform. That might come as a major surprise to many of you, or as blindingly obvious, but it’s important to unpack how these algorithms work so that we can understand the depth of the bullshit problem.

The bullshitty foundation of the internet as we know it: Search and Social

At a high level, how does a search engine algorithm work? The basic concept is authority. In short, that means how much more credible one source of truth is than another. In some cases, that’s obvious: Your state’s department of motor vehicles website is probably a more authoritative source for driver licensing procedures than your cousin’s floral arrangement blog. But it’s not humans that make those judgments. Algorithms need to do that work, for obvious reasons of scope and scale.

Those non-human algorithms need clear rules for how to determine credibility. One of those important rules is simple: How many other websites link back to that one website for a particular search term or function? Link backs are an important proxy for credibility. Yes, it’s more complicated than that (Google, Bing and others strip out obvious gaming of that system), but at its heart, “authority” equals “popularity”, not truth, and not facts.
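As a toy illustration of that one rule (my own simplification – real systems like Google’s PageRank weight links recursively and fight manipulation), link-counting looks like this:

    # Toy "authority" ranking: a site's score is simply how many
    # other sites link to it for a given topic.
    links_to = {
        "dmv.state.example.gov": ["news-site.com", "city-portal.org", "law-blog.net"],
        "cousins-floral-blog.com": ["aunt-carol.example.net"],
    }

    ranked = sorted(links_to, key=lambda site: len(links_to[site]), reverse=True)
    print(ranked)  # the DMV outranks the floral blog -- until enough
                   # sites decide to link to the floral blog instead

Notice what’s missing from that code: any check of whether either site is telling the truth.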

In other words, your cousin’s floral blog could become a leading authority on driver licensing with enough time and effort … and others agreeing that it is an authoritative source by linking to it in the context of that search term. This is the “wisdom of crowds” idea in a nutshell – the ultimate authority rests in shared agreement of “truth,” not actual truth based on objective facts.

Let’s translate: Sometimes search engines are right. Sometimes they’re wrong. But they always represent persuasion and popularity. Search engines are bullshit engines.

Let’s translate again: That little search window on your computer that you rely on to find facts is feeding you bullshit. Remember, true bullshit has some fact and some fiction, but it’s all persuasion. So yes, you’re getting some facts, some of the time. But just as often you’re getting hoodwinked.

If a search engine is a bullshit engine, social media is a bullshit rocket.

Social media algorithms completely dispense with the idea of truth. They are designed to enhance social connections. What drives a social media algorithm is something more than authority in a search engine (although that still matters). The most important driver of the algorithm is engagement, aka social proof. That takes the form of likes, clicks, shares, comments, reposts, etc.

The higher the engagement, the more authority the post (and author) have, especially when certain posts “go viral.” All that means is that a post gains enough attention fast enough to feed on itself, bending the engagement curve exponentially upward.
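In sketch form, the ranking math looks something like this (the weights are invented for illustration; every platform’s real blend is proprietary):

    # Toy engagement score. Note what it never asks: is the post true?
    def engagement_score(likes, shares, comments):
        return 1.0 * likes + 3.0 * shares + 2.0 * comments  # shares spread furthest

    hoax = engagement_score(likes=4000, shares=2500, comments=900)
    correction = engagement_score(likes=300, shares=40, comments=25)
    print(hoax > correction)  # True: the hoax gets shown to more people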

Most of the time, what goes viral are puppy videos, prom dances, pratfalls, and pornography. Mostly harmless, but let’s ignore those for now.

Every social media algorithm – every one of them – uses some proprietary combination of those factors (along with advertising dollars) to determine what becomes “popular” consistently. It’s not hard to spot. With a little training, you can do it too.

Here’s your first lesson: What story seems more likely to go viral?

  1. Sustained wellness comes from eating a balanced diet of healthy food, lowering stress, and exercising regularly.
  2. Drinking bleach is the most effective way to stay hydrated during the summer months.
  3. You can lose up to five pounds in the first two days using a clove and pomegranate enema.

The first is obvious but boring truth. No chance for virality there. The second is just as obviously a lie. (Please don’t try that at home. You’ll die.) The third is pure bullshit, and you can see immediately why it’s so compelling. It seems like it could have some truth to it. That one has potential!

Let this sink in: The two most common ways you learn about your world, the search engine and the social media timeline, are designed from the ground up to feed you bullshit.

It gets worse. You aren’t as good as you think you are at detecting bullshit.

Sure. A clove and pomegranate enema seems like bullshit (although I can think of stranger things). If you try one, I think you deserve what you get. But for most people, when we see examples like that one, we feel pretty confident we can pick bullshit out of our social media feed and safely ignore it.

We’re wrong.

To paraphrase a more famous phrase: You may be able to catch all of the bullshit some of the time, and you will catch some of the bullshit all of the time, but you will never catch all of the bullshit all of the time.

Your social media feed scrolls by too quickly. There are too many stories. There is not enough time. No one has the energy to fact check every story that floats by or every search result that finds its way to page one. What’s worse, until today, many of you believed search engines and social media platforms somehow prioritized the truth over bullshit. They do not. They prioritize authority and popularity – a bullshitter’s two favorite foods.

The average person sees thousands of search engine results and social media posts each day. You physically cannot fact check them all. No one can. It is a virtual certainty you have been bullshitted today. And the worst part? You don’t know which ones they were.

If we’re going to be continually drenched in a bull shitstorm, we could use an umbrella.

I think it’s only fair we build our own bullshit algorithm.

To the uninitiated, an algorithm seems like some bizarre technical concept that only engineers and programmers can understand – that you need to learn special language skills or grow a thick beard. You don’t. An algorithm is super easy: It’s a set of rules. Let’s write a simple one right now, shall we?

IF the weather outside EQUALS “raining”,

THEN pack an umbrella.

Yep. That’s it. That’s all there is to it. In fact, algorithms are all around you. All recipes are algorithms. So is (essentially) all of mathematics. You are so familiar with algorithms that you write, perform, and revise them every day without thinking about them. And yes, software algorithms (like those designed to drive an autonomous car) are super complicated. But that doesn’t mean we should be scared of the basic premise.
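Here’s that same umbrella rule in Python, for instance – three lines, no beard required:

    weather = "raining"            # what's happening outside right now
    if weather == "raining":       # IF the weather EQUALS "raining"
        print("Pack an umbrella")  # THEN pack an umbrella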

Anyone can do it.

Remember the game “20 Questions”? That game was a sort of algorithm. Here’s my adaptation for detecting bullshit.

Step 1: Open your social media feed and pick out a story. It can be any story.

Step 2: Read the story and answer the following 20 questions.

Step 3: The more questions you answer “yes” to, the higher the likelihood that story is bullshit. (A code sketch of this scoring follows the questions.)

Does the story…

1. …feature a powerless, helpless, or disadvantaged victim?

2. …push a political or identity hot button?

3. …result in the most dramatic outcome possible (death versus injury)?

4. …include irrelevant details (details not directly relevant to the crux of the situation)?

5. …suggest a simplistic next step or action (get rid of X, stop eating Y)?

6. …include a “twist” in the story, a surprise, or a big reveal?

7. …feature “scientism” (little evidence with big conclusions)?

8. …include hard-to-verify evidence (no links to a reputable source, or only links to other non-authoritative sources)?

9. …use anecdotal versus statistical corroborating evidence?

10. …make grammatical or spelling errors, or use clumsy language?

11. …use over the top emotional appeals incongruent with the situation?

12. …use scientific jargon (e.g. “dihydrogen monoxide” instead of the more common “water”)?

13. …attempt to be relatable using the experience of people “like you”?

14. …make spurious correlations (seeing patterns of related items that could have other causes)?

15. …dangle dread (chemicals!) without explaining the context of risks?

16. …push for urgent, immediate action?

17. …include charts, graphs, images, or videos that don’t have anything to do with the core features of the story?

18. …hint at a conspiracy, that someone is hiding something (ideally, a “big corporation” or “big government”)?

19. …publish first in a “bullshit attractor” (TED Talk, Facebook, etc.)?

20. …include statistics touting its popularity (e.g. how many people are talking about this)?
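Because the checklist is nothing but yes/no rules, it translates directly into the kind of algorithm we wrote earlier. A minimal sketch (the question wording is abbreviated, and the 14-flag “probably bullshit” cutoff is my own rough calibration, not science):

    QUESTIONS = [
        "helpless victim", "political hot button", "most dramatic outcome",
        "irrelevant details", "simplistic next step", "twist or big reveal",
        "scientism", "hard-to-verify evidence", "anecdotal corroboration",
        "grammar or spelling errors", "over-the-top emotion", "scientific jargon",
        "relatable 'people like you'", "spurious correlations", "dangled dread",
        "urgent action", "unrelated charts or images", "conspiracy hints",
        "bullshit-attractor source", "popularity statistics",
    ]

    def bullshit_score(answers):
        """answers: one True/False per question in QUESTIONS."""
        flagged = [q for q, yes in zip(QUESTIONS, answers) if yes]
        verdict = "probably bullshit" if len(flagged) >= 14 else "maybe legit"
        return len(flagged), verdict

    # e.g. a story that trips 17 of the 20 flags:
    print(bullshit_score([True] * 17 + [False] * 3))  # (17, 'probably bullshit')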

Let’s apply our new Bullshit Detection Algorithm to our Swiffer story from earlier. How’d it score? Pretty well, actually! It received a 17 out of 20 by my count. How could we have made it even bullshittier? (Remember, you don’t have to stick with the facts.)

Item 2: Add a detail about the owners of the dog as “Trump supporters.”

Item 18: Hint that the author knew someone who worked at P&G who “had information” about these pet deaths but would be fired if she said anything.

Item 20: Include the number of “likes” or “shares” in the article, showing its popularity.

Easy, isn’t it?

Where do we go from here?

It’s not realistic to run every story you read through your new Bullshit Detection Algorithm. It’s also not realistic to stop using search engines and social media. They are too ingrained in the fabric of our daily lives. Maybe we should crowdsource a Chrome plugin to help automate the process of Bullshit Detection…to fight fire with fire? Let me know if you’ll throw in 20 bucks.

But at the very least, you can rest easy that you didn’t kill your dog by cleaning your floors with a Swiffer WetJet. And if you’re considering losing weight using a clove and pomegranate enema, you might want to try your new Bullshit Detection Algorithm first.

###


Source notes for this article:

Swiffer WetJet Pet Danger

No bullshit here. You can read the story (and the fact checking) for yourself.

Swiffer WetJet Hardwood Floor Spray Mop Starter Kit

If you’re curious, you can see the ingredients list for yourself. I’ll warn you, P&G doesn’t give you a chemistry lesson. For example, you’ll need to find a different (authoritative) site for more information on the ingredients.

Here is a link for more on PROPYLENE GLYCOL n-BUTYL ETHER.

Again, unless you have a background in chemistry, more information can get even more confusing. This is the one “chemically close” to antifreeze, or ETHYLENE GLYCOL…that’s why I picked it. It’s clearly not antifreeze, but it sure sounds like it, doesn’t it? If you look at the CDC entry for this one, however, it’s basically screaming at you to run to the hospital if you ingest too much of it. Lots of chemical names sound the same but are very different. I sort of wish I had paid more attention in chemistry in college…

On Bullshit, Harry Frankfurt, Princeton University

This is what I hoped every academic paper would be like in graduate school, and while some were quite good, and informative, and interesting, nothing was as satisfying to read as Frankfurt’s 20 pages. Thanks, Jeremy Rose, for having us read it in grad school!

Categories
Audience Empowerment Long Form Articles Rehumanizing Consumerism

Should you quit Facebook?

Take advantage of Loss Aversion and develop your own decision-making superpower using the 2X Rule.

Most consumer decision-making models fail to satisfy us because they don’t account for how our brains work – especially on decisions where we are giving up something rather than getting something. Moving to a new neighborhood, changing jobs, and quitting smoking all fall into the giving up category. I’m going to let you in on the secret persuaders (like me) use to keep you stuck, and how to take back control.

Let’s practice using the decision du jour: Quitting Facebook.

First, let’s get one thing straight. You are indeed buying Facebook, even though it claims it’s “free and always will be.” Your time and energy, along with your data, are assets whether you think of them that way or not. You are giving Facebook those assets in exchange for using their service. The exchange of actual currency is not required for value to change hands. Put a different way, Facebook could not exist without your input. With that in mind, we can apply consumer decision-making tools.

But applying those tools is difficult in today’s hyper-charged, hyper-partisan environment, isn’t it? Consider this: Ten years ago, you couldn’t throw a rock without hitting a journalist or financial analyst gushing about Facebook – its meteoric growth, its culture, Zuckerberg’s hoodie, Sandberg’s book. Today, those stories are all gone, replaced with stories about Russian hacking, voter manipulation, data privacy breaches, Zuckerberg’s ineptitude, and Sandberg’s spinning moral compass.

So, what is it? Is Facebook your salvation or is Facebook your damnation?

The reason the answer is so difficult is that we’re asking the wrong question. What’s good for society (is it time to “praise” or “punish” Facebook?) is a different question than your personal value exchange with Facebook (“should I stay or should I go?”). In other words, big-picture factors can play a role in your decision, but to make an effective and sustainable consumer decision, the choice must be on your terms, and your terms alone. You can’t outsource your decision to an editor at the New York Times.

Let’s get to work.

The 2X Rule

The 2X Rule is as simple as it looks: In order to change course, the benefits of your decision must outweigh the drawbacks by a factor of two to one. If you decide to leave Facebook, the benefits of your decision to leave, in your estimation, must overcome the drawbacks by at least double. It’s really that easy.

But it doesn’t seem that way, does it? The first time I mention the 2X Rule, it strikes most people as too much. The logical part of our brain kicks in, and we assume if the scales are balanced in a decision, only the weight of a feather on one side or the other will tip the scales. 1.0001 to 1 should be all it takes.

Your brain doesn’t work that way.

To feel good about your decision, the positives need to outweigh more than their fair share of negatives. We’re wired to avoid loss more than we are to seek pleasure. It’s a survival instinct that makes decisions to leave a situation (a job, a relationship, or a brand) much more difficult. That’s why persuaders, politicians, and marketers work so hard to get you to start something. Once you do, leaving is harder than you’ll give it credit for at the outset.

I’m not going to try to train you to set aside the emotional part of your brain and make a “logical” decision to leave Facebook. Even if you could, the nagging regret that followed would mean you’d never feel satisfied with your decision. Instead, I’m going to teach you a way to harness your brain’s natural abilities and tendencies. It will only take three steps.

Step 1: Accept that all complex consumer decisions will have multiple upsides and downsides

If making the decision to leave Facebook were as easy as choosing a place to have dinner, you would have done it already. This clearly isn’t a simple decision. With 2.27 billion users, Facebook doesn’t simply provide one clear benefit or one clear drawback. It’s complicated. The first step in using the 2X Rule is accepting this complexity as a fact of life and resolving to use it to your advantage.

(As a quick aside, all important decisions should be made on paper – virtual or physical. Most people are visually oriented, and handling more than a few factors at a time will overwhelm our short-term memory. In other words, don’t try this in your head.)

Let’s start with a list of factors that should come easily in today’s anti-Facebook zeitgeist: The downsides. Remember, these might not be your downsides. Only you can decide what they mean to you. We’ll do that next. But first, let’s write them all down.

Here’s my quick list of Facebook downsides:

  • Too much Facebook makes you both angry and depressed without a corresponding increase in positive emotions.
  • Facebook is a business, not a public service. In other words, Facebook is out to make money using you and your assets as the product.
  • Facebook’s management has been less than transparent about its practices – specifically, how it mines your data to keep you engaged with (addicted to) its service.
  • Despite a small army of smart people, Facebook is subject to outside manipulation, not only by advertisers, but also by foreign governments. The company is trying to rein that in (sort of), but the effort is a lot like a game of whack-a-mole. It is unlikely to ever stop completely.
  • Specifically, outside parties used Facebook to tamper with the outcome of the 2016 US Presidential election. The impact of that tampering is debatable, but the act itself is well established.
  • Facebook is a haven for fake news, hoaxes, and scams.
  • All social networks make it much easier to harass vulnerable people online. Harassers no longer have to say it to someone’s face or physically follow them – they never have to leave their chairs.
  • More recently, black employees at Facebook claimed the company creates a hostile work environment.
  • Facebook gives voice to (and helps connect) dangerous, racist, and hateful ideologies, reducing their perceived social isolation and shame by providing an echo-chamber of like minds.
  • Facebook use distracts from productivity at work and from family relationships at home.

Ugh.

After reading that list, you might be even more ready to leave Facebook. And I know I didn’t catch everything. You may have your own (much longer) list of negatives. That’s okay. Add them, but don’t make your decision just yet. Let’s look at the upsides.

Here’s my quick list of Facebook upsides:

  • All social networks, but Facebook in particular, connect communities across time and space, allowing disparate people to find each other, forging a sense of connection fundamental to the healthy human condition.
  • Facebook encourages social organization and political activism, increasing the number of people involved in these causes by reducing the barriers to participation.
  • Facebook provides advertisers the ability to hyper-target messaging, meaning that the advertising you see is much more relevant to your actual needs and wants.
  • Facebook allows you to find and stay connected to geographically separated family and friends – especially long-lost friends and family.
  • In many countries, Facebook is the only way to politically organize.
  • Social networks allow for a level of self-expression largely unfiltered and unmediated. That includes publishing, photography, hobbies, and art.
  • Facebook allows you to meet new people virtually, reducing the perceived risks and countering shyness.
  • Facebook is a source (and sometimes the source) of connection for older people or those people with disabilities.
  • Social networks allow small businesses to reach potential consumers without the barriers and costs that traditional media put in their way.
  • Social networks encourage an open society, wider-ranging dating options, and a broader spectrum of often-marginalized viewpoints.
  • Social networks help hold governments, corporations, and other media (and themselves!) accountable.

Huh.

Doesn’t that seem better? You might have forgotten about some of those benefits…or at least, perhaps you hadn’t thought about them in a while.

If you stop here in the decision-making process, you’ll fall victim to outside persuasion. Actually, people like me hope you stop here. For most people, a confusing list of upsides and downsides is enough for us to throw up our hands and give up. But take just two more steps and you’ll develop your own superpower.

Stay with me.

Step 2: In a state of complexity with difficult-to-evaluate criteria, it is safest to assume all criteria are weighted equally

If you looked at both lists and struggled to weigh one upside against another downside, you’re not alone. If this were an engineering exercise, and you could objectively measure the value of “connecting people” against the risk of “providing a platform for racist views” with some mathematical formula, your decision would be easy. But there is no good way to do that. Decision-making theorists (Daniel Kahneman and others) encourage us to step away from the morass of assigning weights to these factors and simply assume they all carry equal weight. That sounds counterintuitive at first. But our impressions of the weight of “connection” versus “racism” are driven by so many factors – few of them objective, all of them varying with context – that it’s safest to admit our minds are not well-equipped to handle this kind of weighing.

But we can fix that.

Here’s all you need to do: From the lists above, put a check mark next to each factor that is an upside for you. Do the same with the downside list. Don’t try to put “two check marks” next to an “important factor.” Simply mark the factors that matter. Remember, this is your decision and your evaluation, not what you believe others should do.

Here’s an example of a hypothetical person’s downsides:

  • Too much Facebook makes you both angry and depressed without an increase in positive emotions to balance the negatives.
  • Facebook gives voice to (and helps connect) dangerous, racist, and hateful ideologies, reducing their perceived social isolation and shame by providing an echo-chamber of like minds.
  • Facebook use distracts from productivity at work and from family relationships at home.

And now that person’s upsides:

  • Facebook allows you to find and stay connected to geographically separated family and friends – especially long-lost friends and family.
  • Social networks allow small businesses to reach potential consumers without the barriers and costs that traditional media put in their way.
  • Social networks encourage an open society, wider-ranging dating options, and a broader spectrum of often-marginalized viewpoints.

Have you finished your own lists? Good. Let’s take the final step.

Step 3: Decide by taking advantage of the concept of Loss Aversion, aka Prospect Theory

Loss Aversion is something you already know intuitively, and a concept we teach persuaders on their first day of class: People weigh the risk of loss more heavily than the prospect of gain. In other words, for most people, the pain of losing $100 looms larger than the pleasure of winning the same amount. The fancy name is Prospect Theory, but I think Loss Aversion is easier to remember.

Daniel Kahneman and Amos Tversky worked out the basic rule in 1979, and it has been replicated (with plenty of variations and critiques) ever since. Loss Aversion explains why people overbuy life insurance, pay for warranties, sell stocks at their low point, and do all sorts of seemingly mathematically irrational things.
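
For the mathematically curious, the value function behind Loss Aversion is simple enough to sketch in a few lines. The parameters below (an exponent of 0.88 and a loss multiplier of 2.25) are the commonly cited estimates from Tversky and Kahneman’s 1992 follow-up work – treat them as illustrative defaults, not universal constants:

```python
# A minimal sketch of the Prospect Theory value function, using the
# commonly cited Tversky & Kahneman (1992) parameter estimates.

def subjective_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Return how a gain or loss of x feels, not its dollar value."""
    if x >= 0:
        return x ** alpha            # gains are discounted a little...
    return -lam * ((-x) ** alpha)    # ...while losses loom about 2.25x larger

# A 50/50 bet to win or lose $100 feels like a losing proposition:
print(subjective_value(100) + subjective_value(-100))  # about -72
```

Perhaps not coincidentally, that loss multiplier hovers a little above two – roughly where the 2X Rule gets its number.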

I’m not going to ask you to make a “rational” decision regarding Facebook. If I did, you would regret it. In other words, let’s say you selected six negative factors and five positive factors in your personal evaluation of Facebook. A rational decision-making model might advise you to “quit”, but your emotional mind would fear the loss of those five benefits more than the corresponding gain of removing the downsides.

The question becomes: How many positives do I need to outweigh the negatives? In Kahneman and Tversky’s experiments, the actual ratio varies with a number of factors. Sometimes six upsides will counter five downsides. Sometimes it’s seven. Sometimes more. And sometimes no amount of upside will address the fear of loss. (Nassim Nicholas Taleb reminds us that this kind of extreme loss aversion is a completely rational response when the “loss” is terminal or severe enough.)

We’re going to make the decision process easier with the 2X Rule: To leave Facebook, you need twice as many upsides of leaving (the negatives you’d shed) as downsides of leaving (the positives you’d lose). In other words, if you have five reasons to stay on Facebook, you need ten reasons to leave. If you can cross that threshold, your emotional brain will let your rational brain accept your final decision.
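
Reduced to code, the rule is almost embarrassingly small. Another minimal sketch, with illustrative names and the five-versus-ten example from the paragraph above:

```python
# A minimal sketch of the 2X Rule; the names are illustrative.

def two_x_decision(reasons_to_leave: int, reasons_to_stay: int) -> str:
    """Leave only if the reasons to leave at least double the reasons to stay."""
    if reasons_to_leave >= 2 * reasons_to_stay:
        return "quit"
    return "stay, and mitigate the downsides you identified"

# Five reasons to stay means you need ten reasons to leave:
print(two_x_decision(reasons_to_leave=10, reasons_to_stay=5))  # -> quit
print(two_x_decision(reasons_to_leave=6, reasons_to_stay=5))   # -> stay, ...
```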

How should our hypothetical person decide?

Three upsides to three downsides is a one-to-one ratio and does not meet the 2X Rule. Therefore, this person likely will stay on Facebook … and mitigate the downsides.

Just because you decided to stay on Facebook (and accept the downsides you identified as important to you) does not mean you need to ignore them. The process of explicitly identifying your concerns (aka writing them down) makes them more actionable than they were before.

In this hypothetical example, how might you mitigate your downsides?

Possible mitigations:

  • Too much Facebook makes you both angry and depressed without an increase in positive emotions to balance the negatives. Limit your time on Facebook, perhaps using a third-party app or browser extension. Experiment with what works best for you, but consider starting with a 15-minute-per-day limit and adjusting from there.
  • Facebook gives voice to (and helps connect) dangerous, racist, and hateful ideologies, reducing their perceived social isolation and shame by providing an echo-chamber of like minds. Don’t simply ignore posts you consider hateful or racist – use Facebook’s tools to flag them. Your actions will teach the algorithms to better recognize that content in the future.
  • Facebook use distracts from productivity at work and from family relationships at home. Institute a “No Facebook Rule” at work (unless your role explicitly requires it), during meals, and during time with friends.

So, should you quit Facebook?

In the end, no one can decide that for you. That decision is ultimately yours and yours alone. As an empowered consumer, you have the right to decide if your upsides outweigh your downsides by a two-to-one margin. Only then will you be able to take advantage of your brain’s natural loss aversion, feel good about your decision, and be able to sustain it in the face of outside criticism.

And, by the way, you’ve learned a new superpower you can use anywhere…not only with Facebook.

Go ahead. Decide.


Source notes for this article:

Thinking, Fast and Slow

I recommend anything by Daniel Kahneman and Amos Tversky. In our field, theirs is some of the foundational work. As a professional persuader, I can attest to the effectiveness of their findings in my own practice. You may want to start with Thinking, Fast and Slow. It’s an accessible book for the average consumer, and even better as an audiobook.

Fooled by Randomness

If you can get over Taleb’s style, there is plenty to learn. I like Fooled by Randomness perhaps the best of all his books. Of his many contributions on this topic, perhaps the best is the concept that extreme loss aversion is completely rational and understandable.

Thirty Years of Prospect Theory in Economics: A Review and Assessment, Nicholas C. Barberis, Yale School of Management

Barberis addresses some of the critiques of Prospect Theory and Loss Aversion. This is an academic paper, but readable, and covers plenty of practical applications to consumer situations.