Categories
Agile Learning, Audience Empowerment, Audience Engagement, Information Management, Long Form Articles, Rehumanizing Consumerism

Big Data promised less (and better) marketing. It hasn’t worked out that way.

Consumers: So, remind me again why I need to give up oodles of my private data?

Marketing: Well, not only do you get to use our awesome products for free (or for less than their true cost), but we also will use that data to stop bombarding you with irrelevant advertising.  It’s better for us because we can be more efficient, redirecting that money into developing better products and services instead of wasteful advertising spending. And it’s better for you because you see advertising that’s much more useful to you.

You’ve heard some version of this argument from marketing for the past 20 years: If consumers allow marketers to collect ever-increasing amounts of data, marketers will use it to produce more targeted advertising. More targeted advertising is more efficient, meaning that (ideally) marketers should be producing less advertising, not more. As a consumer, you should be seeing fewer promotional messages, and the ones you do see should be much better.

Who among you thinks that is true?

I certainly don’t.

Let me walk you through just one example.

My wife and I enjoy cooking at home. We patronize several grocery stores, delis, and kitchen supply outlets to find just the right ingredients and tools to try new recipes. (A Thai coconut sweet potato soup was our latest win.) As you might guess, one of the stops on our shopping trips is Williams-Sonoma. We’ve purchased all manner of utensils and tools from them over the years, and we were among the first members of their “email list” – allowing them to collect data on our purchases at the point of sale, whether online or in store.

You would think that Williams-Sonoma would know us well enough through our extensive data trail to target advertising and offers precisely to our buying habits.

You would think that, and you would be wrong.

How do I know?

I ran an experiment.

From February 1 to March 31, 2019, I collected every email Williams-Sonoma sent to us. During that time, we made two purchases, and in both cases, provided our email address. The test is simple: Do the promotional email messages reflect our buying patterns? In other words, does Williams-Sonoma use the data we provide them to deliver better advertising?

Here is the data summary:

n=175 (number of emails)

d=59 (number of calendar days)

n/d=2.97 (emails per day)*

*This measure of central tendency isn’t hiding anything. Williams-Sonoma sent three emails per day, every day, for two months, save for a couple of exceptions.
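
For the skeptics, the arithmetic behind that summary is trivial to reproduce. A minimal sketch, using only the counts reported above:

```python
# Reproducing the summary stats: 175 emails over the 59 calendar days
# from February 1 to March 31, 2019.
n_emails = 175
n_days = 28 + 31          # February 2019 + March 2019
print(n_emails / n_days)  # -> 2.966..., i.e. ~2.97 emails per day
```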

What did the emails say? I created a word cloud to help visualize the subject lines. You can see that word cloud below.

The most immediate and obvious takeaway is the prominence of the word “Percent,” which relates to some sort of “percent off” offer, anywhere from 20 to 75 percent. This is a typical example:

LE CREUSET **Special Savings** & Recipes + up to 50% Off Spring Cookware Event

The rest of the data set is barely worth an analysis at all: Williams-Sonoma has an inventory of brands to sell us. They’re experimenting with different percentage offers, different levels of urgency (today only!), and different deadlines (Easter is coming!) to get us to bite.

We reviewed all the percentage offers, urgencies, and deadlines: We often buy at full price, because when you’re interested in a specific recipe, you don’t want to wait for a sale. (Wouldn’t you think they’d notice that we downloaded a specific recipe?) We reviewed all the brands featured. We have never bought any of them. (Wouldn’t you think they’d notice what we just bought?)

Here’s the rub: Williams-Sonoma does know all that. They have all of our purchase data, yet they have chosen not to use it.

#

It may seem like I’m picking on Williams-Sonoma, but I could just as easily have picked any number of brands. I suspect you could hunt through your inbox and find a dozen examples of bizarre, irrelevant marketing from brands you patronize as well.

But this was just one example. Other brands do better, don’t they? Perhaps the macro-trend is heading in the right direction, and brands such as Williams-Sonoma eventually will be out-competed by brands that are more efficient and can redirect that excess capital. Perhaps this is just a symptom of struggling retailers. If that were true, what might we expect the macro trends to look like?

First, we might expect marketing spend to grow at a rate no higher than population growth – and ideally lower. In other words, the ratio of marketing dollars per person on the planet should be flat or shrinking over time. Is that the case?

The chart below shows global marketing spend growing at 3.9% per year:


The next chart shows global population growth slowing over time, about 1.0% per year during the same period.

In other words, marketing is spending more per person each year, not less.

But wait, you say. Population growth is not necessarily an indicator of economic growth. It would be fairer to look at global GDP growth over the same period.

Great. Let’s do that.


Over the same period, global GDP growth averaged 3.6% per year. In other words, at 3.9% per year, marketing spend is outrunning GDP growth by roughly 8% in relative terms, year after year. And because North America and Western Europe are the largest marketing “markets,” and those regions are growing more slowly than Asian markets, the overshoot there is even higher.

In other words, for all its data, marketing is becoming less efficient over time. Put simply: Big data is making marketing worse, not better.
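
If the two growth rates sound close, remember that they compound. Here is a minimal sketch of the divergence, assuming the 3.9% and 3.6% averages cited above hold for a decade:

```python
# How a 3.9% growth rate in marketing spend compounds against 3.6% GDP
# growth. Both rates come from the averages cited above; the 10-year
# horizon is illustrative.
years = 10
spend_index = (1 + 0.039) ** years  # marketing spend, indexed to 1.0 today
gdp_index = (1 + 0.036) ** years    # GDP, indexed to 1.0 today

print(f"Marketing spend index: {spend_index:.3f}")              # ~1.466
print(f"GDP index:             {gdp_index:.3f}")                # ~1.424
print(f"Spend per unit of GDP: {spend_index / gdp_index:.3f}")  # ~1.029
# Marketing absorbs roughly 3% more of economic output every decade.
```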

#

How on earth can that be?

Let’s refute a number of possible alternative explanations.

Explanation #1: It takes a certain amount of time to realign marketing based on what it’s learning from Big Data. What’s more, that knowledge has yet to completely diffuse into the professional community.

Really? It’s been 10 years, and there is no evidence that the growth rate in marketing spend is bending downward. In fact, it’s accelerating. No, marketing knows what it should be doing, but it is not doing it for a much more obvious reason: There is no downside.

Email protection laws are barely enforced. GDPR is just finding its footing in Europe, but enforcement has been spotty. A state-by-state patchwork of privacy laws in the United States isn’t likely to do much better. Enforcement takes resources. In other words, marketing has no incentive to be efficient.

Explanation #2: We’re looking at the wrong channels. Email (in the Williams-Sonoma example above) is an “owned” channel, meaning the company does not need to follow guidelines as it would on Google or Facebook. Email might be inefficient because it’s “free,” but when marketers are paying for advertising, they do better.

Really? The shift from tough-to-measure analog media to digital, data-driven media over this 10-year period should have resulted in more efficient performance. But look at the growth pattern in marketing spending over the past 10 years and compare it to GDP. You would expect better data to lead to more efficient use of resources, as it does everywhere else in organizational operations, but that is not the case.

Explanation #3: You’re looking at average data, and averages can distort the picture. We should be examining the distribution (variance) in the data to truly determine marketing efficiency.

Really? Marketing success doesn’t follow a normal distribution (aka a “bell curve”); it follows a power law distribution. In other words, a small number of marketing operations and tactics deliver a disproportionate share of the success. The bottom line is that the vast majority of marketing operations and spending does not generate a positive return on invested capital (ROIC).
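
To see what that power-law claim implies, here is an illustrative simulation. The Pareto shape parameter is invented for demonstration, not estimated from any marketing dataset:

```python
# Illustrative only: how heavy-tailed "campaign returns" concentrate
# compared with a bell curve. The shape parameter a=1.5 is made up.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
returns = rng.pareto(a=1.5, size=n)  # heavy-tailed returns per campaign

top_1pct_share = np.sort(returns)[-n // 100:].sum() / returns.sum()
print(f"Top 1% of campaigns capture {top_1pct_share:.0%} of total return")
# Under a tight bell curve, the top 1% of campaigns would capture only a
# few percent of the total; under a power law, a tiny fraction of
# campaigns carries most of the value.
```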

Explanation #4: Of course, we know that most marketing doesn’t meet an ROIC threshold. That’s because marketing is an investment in the future of the organization. We’re building a brand, not quarterly returns. Failure is necessary to the learning process.

Really? So, when precisely will “investment” turn into “returns” on that investment? The data over 10 years shows no appreciable return on marketing investment that outstrips economic growth. You may be able to cherry pick organizations or campaigns that deliver good results, but the overall impact is a negative ROIC over the long term.

Explanation #5: Your aggregate analysis hides material differences in the performance of marketing by industry. Put simply, B2B is not B2C and doesn’t need to spend as much. Consumer marketing might be more wasteful, but business-to-business marketing is much more efficient.

Really? My B2B friends, what happens when you count all selling expenses? That includes “marketing,” but it also includes “tradeshows” and “salespeople” and “executive time selling” and a whole host of other goodies you’re probably not counting in the marketing line on the income statement. When you do that, B2B is just as out of whack as B2C.

#

Sorry, marketing. I hate to poop in your sandbox, but none of these explanations hold up. As an organizational function, marketing is not delivering a positive return on investment.

Yes, there is plenty of industry scuttlebutt about how consumers are getting pissed off and opting out. Marketing frets over Netflix and Apple end-running traditional advertising channels by switching to ad-free subscription models. But marketing, I wouldn’t be as worried about consumer anger as I would be worried about the next conversation with your CFO.

The party ends the instant the global economy goes into recession. Marketing bemoans the “short-sightedness” of financial professionals when they look at ROIC instead of “brand health” in their calculations, but what are they supposed to think? The rates of growth don’t match, meaning marketing is delivering a lower return on investment, in aggregate, with each passing year.

A shotgun approach to email – per my example above – is simply the canary in the coal mine.

Ask yourself this question: If you needed to get better results with 80% of your current budget, could you do it? If the answer is “no,” you had better start working on a plan. It might be time to actually use all that “big data” you’ve been so excited about.

Because the day of reckoning is coming.

Good luck.

#

About Jason Voiovich

Jason’s arrival in marketing was doomed from birth. He was born into a family of artists, immigrants, and entrepreneurs. Frankly, it’s lucky he didn’t end up as a circus performer. He’s sure he would have fallen off the tightrope by now. His father was an advertising creative director. One grandfather manufactured the first disposable coffee filters in pre-Castro Cuba. Another grandfather invented the bazooka. Yet another invented Neapolitan ice cream (really!). He was destined to advertise the first disposable ice cream grenade launcher, but the ice cream just kept melting!

He took bizarre ideas like these into the University of Wisconsin, the University of Minnesota, and MIT’s Sloan School of Management. It should surprise no one that they are all embarrassed to have let him in.

These days, instead of trying to invent novelty snack dispensers, Jason has dedicated his career to finding marketing’s north star, refocusing it on building healthy relationships between consumers and businesses, between patients and clinicians, and between citizens and organizations. That’s a tall order in a data-driven world. But it’s crucial, and here’s why: As technology advances, it becomes ordinary and expected. As relationships and trust expand, they become stronger and more resilient. Our next great leaps forward are just as likely to come from advances in humanity as they are advances in technology.

Thank you! Gracias! 谢谢!

Your fellow human.

Categories
Audience Empowerment, Information Management, Long Form Articles, Rehumanizing Consumerism

What if someone offered $6,495 for your private data? Would you sell?

What follows is a fictionalized vision of a possible future filled with Data Exchange Networks (DENs) designed to bring the process of private data collection out into the open.

. . .

February 5, 2029

As a fractional research scientist, Lynn Thomas uses her talents to aid a number of clients – from university labs that need an extra set of eyes on experimental design, to corporate R&D departments conducting optical glass experiments, to startups working on new protein-based sweeteners. In 2028, she managed six retainer clients (including one startup where she took equity instead of cash) and felt like she earned a good living. 2029 looks just as good.

But her experience working for an energetic founder infected her with the startup bug. Lynn has had an idea for a new type of photovoltaic paint since she first read about the concept as a graduate student.

It’s time, she thought. She needs to put up or shut up.

The problem is money.

It’s always money with startups, and that’s especially true in the hard sciences. At this early proof-of-concept stage, she doesn’t need much money – just enough to purchase the synthesizing equipment, raw materials, and lab time. She figures about $4,000 will cover it – $5,000 to be safe. She’s too early for angel or venture capital funding. She’s also too early for legit crowdfunding sites. They want a promise of a deliverable at the end. She’s doing early-stage science. She has no idea if anything will come of her work. It’s too risky. She is on her own.

How will she do it? Take on another client? No. She’s already maxed out. And if she does, she won’t have the spare time she needs. Luckily, she has another option. Thirty years ago, she might have begged friends and family for the spare cash she needed to fund her startup.

In 2029, she has the option to sell her private data.

#

Lynn Thomas prides herself on her rational mind. It got her a scholarship to a private high school, internships at the National Institutes of Health, two master’s degrees paid for by corporate sponsors, and a Ph.D. from Oxford. Still, selling private data on a Data Exchange Network (DEN) seems a bit sketchy. She had a friend who used one … that DEN ended up selling his data to a dating site, much to the chagrin of his partner. Other DENs are known for bombarding you with advertising. Most DENs don’t pay very well. It’s the last fact that’s the real problem.

But one does pay well: The MENSA DEN.

Perfect, she thought. MENSA made the decision ten years ago to begin cashing in on its membership base. However, they couldn’t simply sell member data. Not only was their data set not as detailed as they thought it might be, their average member was too smart to let them do it without getting paid. (Makes sense, huh? They are MENSA members.) So, MENSA cut a deal: You let us market your data to interested parties, and we will share the revenue with you. Members decide what to share (and what not to). A sophisticated auction market will determine the prices paid. It’s smart, fair, and rational.
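
Because DENs don’t exist (yet), a concrete sketch may help ground the idea. Below is the kind of offer/counteroffer record such a marketplace might keep; every name, field, and rule here is invented for the thought experiment, not a real API.

```python
# Purely illustrative: one data offer in a hypothetical DEN auction.
from dataclasses import dataclass

@dataclass
class DataOffer:
    buyer: str             # who wants the data
    data_category: str     # what they want access to
    bid_per_month: float   # current bid, in dollars
    status: str = "open"   # open, accepted, or declined

    def counter(self, new_bid: float) -> None:
        # Either side may counter until someone accepts or walks away.
        self.bid_per_month = new_bid

    def accept(self) -> None:
        self.status = "accepted"

# The negotiation Lynn runs later in the story, step by step:
offer = DataOffer("VR science meetup", "profile + monthly logins", 12.50)
offer.counter(30.00)  # Lynn counters the opening bid
offer.counter(25.00)  # the buyer re-counters
offer.accept()        # Lynn accepts at $25.00 per month
```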

Lynn was a MENSA member. That meant she could give the MENSA DEN a try. What did she have to lose?

#

“Siri, open the MENSA DEN,” Lynn said.

“Okay, Lynn. I found it,” the automated voice replied. “The MENSA DEN checked your records and confirmed that you have an active membership in the MENSA organization, but not a DEN account. They say you need to complete a profile before you can enter the marketplace. Do you want to proceed?”

“What kind of information do they want?”

“I’ll check. They say they want some basic demographic information, most of which you already provided in your organization membership. Specifically, they’re missing your current physical address, gender identifier, biological gender, and family status.”

Ugh, Lynn thought. That’s already more personal than she was hoping for. But she swallowed her discomfort and continued. Eye on the prize, she thought.

“Ask them what security measures are in place.”

“Good question, Lynn. It seems like they anticipated that. I have a full encryption schematic you can view on the main screen. It’s similar to the one you and I use to communicate: Two-stage blockchain with polynomial and fractal encryption. It’s not perfect, but the task of breaking it would require a dedicated government-level quantum super-computer running for 82.5 hours. The risk of a breach seems reasonable.”

“Agreed. Let’s go. But set a reminder to change our MENSA DEN credential password every 60 hours or so.”

“Smart precaution. Done. I’ll now open the secure link.”

Lynn proceeded to share her physical address, her gender identifier (her/hers), her biological gender (female), and family status (living alone, no children).

Deep breath, she thought. I’m in.

#

“Okay Lynn, the MENSA DEN found seven offers for you to consider. I’ve posted them to your mobile screen. Where would you like to start?”

Hmm, Lynn thought.

That’s more options than she imagined there might be. Siri asked a good question. Where do you start on a journey like this? You’re selling a part of yourself to the highest bidder. “Social media” seemed like the easiest place. Fewer people share personal details on those sites, especially since Facebook imploded. Today, most people use any number of “Virtual Reality” or “VR” social networks to meet up with friends around the world. You have to pay to use most of those. What could they want? Lynn thought.

“Let’s start with social media. I’m interested in what they’re offering,” Lynn finally responded.

“Good choice, Lynn. The first is a scientist-specific VR meetup group. They were founded in Kuwait and have had trouble attracting female members. Your profile fits their criteria, and they are willing to bid $12.50 per month for you to log in at least three times for 30 minutes each during the month.”

Lynn did the quick math. $12.50 for 90 minutes was less than $10.00 per hour. More to the point, it would take 33 years to make the $5,000 she needed. But perhaps there was other value to be had. Maybe she could build relationships with other scientists and collaborators along the way?

“Siri, go ahead and counteroffer with $30.00 per month, same time commitment.”

“Understood. I’m submitting the bid now.”

There’s no way they’ll…

“Response received. They countered with $25.00 per month for four sessions. They’ll pay the first month in advance.”

Better. Not great, but better. Lynn considered for a moment.

“Go ahead and accept that offer. Let’s keep looking.”

“Okay, let’s move on to an easy one,” Siri responded. “I have 15 businesses in your area that will provide discounts for dinners, events, and performances if you allow them to track your physical location whenever you get within 10 miles of their facility. I’ve added the list to your mobile screen along with a map overlay of your typical travel patterns. Only six of them overlap.”

Lynn examined the map. Siri was right. Six of the 15 were in her daily routine. She touched the screen in four places.

“Let’s go with these four,” Lynn decided.

“Confirmed. Where to next?”

Another good question. So far, Lynn realized, she had accepted only $25 (per month, yes, but only $25 today) and four dinner coupons. Not so good.

“Siri, let’s re-sort the list from largest potential revenue to smallest.”

“Okay, I finished re-sorting your list. The largest opportunities are in the health information category. I’ve taken the liberty of cross-referencing the opportunities list with your private genetic workup. The results are on the main screen.”

Lynn looked up. Ah, there we go. Here’s the bigger money. She examined the details on the screen.

The first opportunity was a breast cancer clinical study based on her unique BRCA variant gene for $3,250. She would be part of a control group, meaning she wouldn’t have to do anything other than keep doing what she was doing. And as a bonus, she would get to read the resulting research.

The second opportunity was a pharmacological study on a synthetic cannabis derivative. This one was a “double-blind” study, meaning she would not know what she was getting, and neither would the researchers. There was a link to a 32-page disclosure and waiver document. They were offering $2,750.

The third was a biofeedback device that used light therapy to lower cholesterol levels. Since she inherited a gene that correlated with high-LDL levels from her mother, the researchers would double the normal payout of $750 to $1,500. She would need to use the device as directed (and tracked via an IoT connection) for three months and complete twice-monthly blood tests.

This was a tough decision. If she said “yes” to all of them, she would have all the money she needed…and more. But they weren’t created equal, and none would accept counter offers. It was a “take it or leave it” situation.

“Okay Siri,” Lynn said after a long minute. “Let’s accept the gene study and the biofeedback device. I’m not comfortable with the risks in the cannabis study.”

“Understood. The contracts are accepted. You will receive detailed instructions via a VR-mail later this week. Should I give the cannabis study authors the reason for your rejection?”

“Sure, tell them I’m not comfortable with the risks of not knowing what I’m getting. They could have been more clear, up front, on protections.”

“Understood. Feedback submitted. If they answer your questions, are you willing to reconsider?”

“No, I don’t think so. Mute their responses.”

“Will do.”

Over the course of the next 20 minutes, Lynn walked through a number of other auctions and offers. Siri knew Lynn was a “gig worker” and removed any explicit job offers disguised as information sharing. Lynn did consider one that was essentially a beta test of new lab software … but she had enough on her plate. She instructed Siri to save that one for 30 days.

One interesting organization wanted her complete purchase history of all food and beverage products for the past 18 months. They offered $300, but Lynn negotiated the initial offer and closed the auction at $445. What the heck? It was just “food” and not “all purchases,” so the risk was low. And besides, they offered to share research findings with her that were personalized to her habits. She didn’t need to lose any weight, but she had been working on improving her muscle density. Who knows? Maybe she’ll learn something useful.

Three religious organizations wanted her to donate her information so they could build better profiles of their target members. She turned them all down.

The political organizations were a different story. The two major parties wanted free information (another “no”), but science-focused interest groups wanted her research notes to write up case studies to teach young people about the scientific method. They had a grant from the National Science Foundation, and they were offering $425 per unpublished lab book. Two of her five qualifying projects were under NDA, but she accepted offers for the other three.

#

“Ok, Siri. Where are we at?” Lynn asked.

“I calculate $6,495 in total accepted contracts, with $25 per month continuing until you cancel the VR meetup group participation with the Kuwaiti-based organization. Do you want to continue and expand your search?”

“No, that’s all for now. Go ahead and exit the MENSA DEN, but remind me to check back in 90 days.”

“Will do. Signing off.”

Lynn breathed a sigh of relief. She had more than enough capital to begin her work – comfortably above even her $5,000 safety figure. She remembered the advice of a graduate advisor: Always assume your research will take twice as long and cost twice as much. If you do, you’ll be covered. She didn’t quite get to twice her initial figure, but she felt good.

“Ok Siri, let’s go shopping for lab equipment…”
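
For the record, Siri’s arithmetic holds up. Here is a quick tally of the contracts Lynn accepted in the story (the dinner discounts are non-cash, so they don’t appear):

```python
# Checking the $6,495 total against the contracts Lynn accepted.
contracts = {
    "VR meetup, first month prepaid": 25,
    "BRCA control-group study": 3250,
    "Biofeedback device trial": 1500,
    "Food purchase history": 445,
    "Unpublished lab books (3 x $425)": 3 * 425,
}
print(sum(contracts.values()))  # -> 6495
```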

#

Obviously, this is a thought experiment. Lynn Thomas isn’t a real person (yet). It’s not 2029 (yet). Privacy isn’t explicitly for sale in this way (just yet…or is it?).

I have a message for entrepreneurs reading this and wondering how the brokerage service could earn trillions of dollars as the secure intermediary in these transactions: Why aren’t you working on it?

I have a message for consumers reading this and wishing they could finance their dreams using assets they already own…but would be willing to sell under the right circumstances: Why wouldn’t you?

And finally, I have a message for all those tech leaders who feel that consumers will continue to give away private information for free because of your “unicorn” technologies: They won’t.

Lynn’s world is coming. It’s about time we all caught up.

#


Categories
Audience Empowerment, Information Management, Long Form Articles, Rehumanizing Consumerism

You don’t have a right to privacy. You have something better.

What if there was no right to privacy?

That question triggers a surge of righteous rage in many people, especially in the Western world. We rank “privacy” right up there with “free speech” and “freedom of worship.” But as we’ve seen (especially in the past 20 years of the information revolution), the notion of privacy has morphed into something more complicated.

Those of us who lived through the transition from the pre-information era to the post-information era are caught off guard by this new reality. In the 1980s, privacy was easy. You knew when you were anonymous. You chose when to go public. But in 2019, privacy is challenging. Much of the time, you can’t tell whether your actions are public or private – surveillance cameras, GPS trackers, and web tracking are so common that the average person could spend an entire day reading privacy policies and never understand half of them.

At the root of the anger is a contradiction: We want the benefits of modern technology without the intrusions into privacy they require. We don’t want our cars to know where we are … but we want GPS navigation. We want low health insurance rates … but we don’t want to share our dietary and exercise habits. We don’t want advertisers listening in on our conversations … but we want the best deals on products and services tailored precisely to us (without having to endure all of the other advertising).

To put it more simply, privacy is like celebrity – we want the kind of each that we can switch on when we want something and switch off when we don’t. We want enough “celebrity” to get a good table at a busy restaurant … but not enough to be followed by paparazzi. We want enough “privacy” to keep our political beliefs to ourselves … but still get access to Facebook and Google.

Ask any true celebrity. You can’t have both.

It’s the same with privacy. There is no free lunch.

The issue is how we’ve defined privacy. The New Oxford American Dictionary sums it up quite well:

privacy | ˈprīvəsē |

noun

  • the state or condition of being free from being observed or disturbed by other people: she returned to the privacy of her own home.
  • the state of being free from public attention: a law to restrict newspapers’ freedom to invade people’s privacy.

That definition didn’t burst forth from the earth fully formed. It has a basis in law in the United States. Although the word “privacy” appears nowhere in the US Constitution, federal and state privacy laws cover plenty of ground. We can categorize privacy into four main groups:

  • Intrusion of solitude: physical or electronic intrusion into one’s private quarters (usually, that means your home, but it can mean other private spaces as well, such as bathrooms and your car).
  • Public disclosure of private facts: the dissemination of truthful private information which a reasonable person would find objectionable (the modern practice of doxxing falls into this category, and it is illegal in some places).
  • False light: the publication of facts which place a person in a false light, even though the facts themselves may not be defamatory (libel and slander laws fall into this general area as well, and it gets complicated).
  • Appropriation: the unauthorized use of a person’s name or likeness to obtain some benefits (aka impersonating someone else).

Many states build on federal statutes with their own, more restrictive, laws. Many of those state laws cover technological intrusions explicitly.

With GDPR, the European Union went even further, creating an entire legal framework specifically addressing a modern concept of privacy in a technologically-powered world. It’s a new set of rights and rules that apply to everyone in the EU (as well as a limited set of rights for everyone else).

In many other countries, almost the opposite situation exists: the concept of privacy is subsumed by the interests of the state. China comes to mind immediately, but it is hardly the only one. Those countries have decided that the benefits of total surveillance outweigh their populations’ desire to keep to themselves.

But beyond the legal frameworks and philosophies, the concept of privacy varies by generation. People who lived before the information revolution see privacy differently than those born after it started. Younger people tend to accept the tradeoffs more readily, or at least they don’t think about the downsides quite so much until something very negative occurs (online bullying as an obvious example).

We have to wonder: If privacy can vary so much by law, by country, by culture, and by generation, then it cannot be a “natural” right.

If that’s true, whatever gave us the idea that we have a “right” to privacy?

 

Remember taxation without representation? Today’s privacy problem is exposure without consent.

Before the privacy equivalent of the Boston Tea Party breaks out, a (very) brief (and oversimplified) history lesson is in order.

The concept of “privacy” is new in a historical context. Pre-agricultural hunter-gatherer bands never had privacy; they traded it for the security of the group. The first cities weren’t much better. The rulers of those small enclaves knew who lived there, and much of what went on, for their own survival. Only when cities became giants (in the latter half of the 19th century) did anonymity become possible – and, with it, a modern concept of privacy could develop … and then only for the privileged.

But it wouldn’t last. The beginning of the 20th century saw the emergence of the “social contract” – older workers living off the resources of younger ones, universal health care (in some countries), and shared defense and sacrifice. Even then, while you may have needed some sort of government identification, you could (for the most part) live “off the grid,” even deep in the city. In fact, that was part of the appeal of “the big city” for many people. The more people who lived in a given area, the less likely you were to be noticed (if you chose not to be).

That all changed with the advent of the internet and has been accelerating ever since. In some cases, we gave up our privacy willingly for greater social connection (Facebook comes to mind). In other cases, we gave up our privacy unwittingly for the implicit promise of better products and services (Google comes to mind). We can cite hundreds of other examples. But while there are definite downsides for this new era of interconnectedness, in most cases, we gave up our privacy for the better quality of life these technologies offered.

Here’s the catch: To function, the technologies require ever-increasing transparency. You can’t remain completely private and still retain all the benefits.

For only a brief window in recent history has there been any true concept of privacy based on the choice to remain anonymous. During that short time, we tasted privacy, we liked privacy, and now we feel that privacy is slipping away.

In other words, privacy as we have defined it and as we understand it is a myth. It’s our poor definition of privacy that sits at the root of our frustration.

It’s time we redefined it.

 

Privacy as a right versus privacy as an asset.

Let’s consider a new definition of privacy as an asset:

asset | ˈaset |

noun

  • a useful or valuable thing, person, or quality: quick reflexes were his chief asset | the school is an asset to the community.
  • (usually assets) property owned by a person or company, regarded as having value and available to meet debts, commitments, or legacies: growth in net assets | [as modifier] : debiting the asset account.

What happens when we do that? Let’s highlight the key differences:

  1. Privacy as a “right” – the state or condition of being free from being observed or disturbed
  2. Privacy as an “asset” – a useful or valuable thing, person, or quality

Do you notice something about the first definition? As a “right,” privacy is something others grant us as individuals. It is the “condition of being free from intrusion.” Do you notice something about the second definition? Privacy is a thing of value that you own. It is a “useful or valuable thing.”

That simple shift makes all the difference.

Our new definition transforms privacy from something others control (they choose not to intrude on us) to something you control (you choose to protect your asset). This may seem like a trivial distinction, but it’s not.

Privacy is still all about choice. It’s simply a matter of whose choice. Shouldn’t it be you?

It may seem odd to think of privacy that way at first: You can’t own a bushel of privacies. There is no stock market for privacy securities. You can’t pay your mortgage from your privacy account. But that’s because we’re confining the definition of an asset to something tangible, and assets are not simply physical objects. The real value of privacy in the information age is information itself. That’s all privacy is – an information asset.

When we begin to think about privacy as an information asset, we see immediately a number of benefits:

  1. Instead of an abstract right, privacy as an information asset has measurable value. In other words, we can convert privacy into information that could be sold, traded, or invested.
  2. The act of quantifying our privacy and organizing it into categories illuminates its value. In other words, privacy is a set of assets available for your personal exploitation and benefit.
  3. Because privacy is a quantified asset, it’s also divisible. That means there’s more to privacy than “all or nothing.” You can choose some information to remain private, some to share, and some to sell or invest.

What does that mean in a real situation? You can decide to give away your private information to use Google Maps or Alexa. You can weigh the pros and cons. The choice not to use one of these services may be difficult or costly, but it is your choice.
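
Here is a toy sketch of what “divisible” means in practice: each category of information carries its own keep, share, or sell decision. The categories mirror the portfolio discussed below; the values and choices are invented for illustration.

```python
# Toy model of a divisible privacy portfolio: every asset gets its own
# position, independent of the others. All entries are illustrative.
portfolio = {
    "location":  {"decision": "share", "counterparty": "maps provider"},
    "purchases": {"decision": "sell",  "counterparty": "market researcher"},
    "health":    {"decision": "keep",  "counterparty": None},
    "social":    {"decision": "share", "counterparty": "social network"},
}

for asset, position in portfolio.items():
    target = f" (to {position['counterparty']})" if position["counterparty"] else ""
    print(f"{asset:10s} -> {position['decision']}{target}")
```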

 

Your privacy information asset portfolio.

At this point, many people are confused. That’s natural. Yes, we follow the argument: (1) privacy is a modern creation; (2) privacy (as we know it) is eroding quickly in the face of technological innovation; and (3) it is more useful to think of privacy as an information asset rather than some sort of inalienable right.

The confusion isn’t with the rational argument – it’s that the implication of redefining privacy is unclear. In other words, how do we manage privacy in our day-to-day lives?

Privacy is unlike other assets. Sometimes, it is quantifiable like money (e.g. your credit score information), but often it is not (e.g. the value of your religious affiliation). Sometimes privacy exists on a spectrum (you can share a little personal information on Facebook, but not everything), but often it is a binary choice (you have shared your location data, or you haven’t).

The confusion is natural.

Information is such a new type of asset that we can be forgiven for wondering how to think about it. Each type of data becomes part of your privacy information asset portfolio. You get to choose how to invest your assets to achieve your objectives. But to invest with confidence, we need clarity on the assets in our portfolio. Let’s explore those assets and how you might decide your allocation strategy:

Social Data

Social data is an easy start. If you use Facebook (and most people at least have a profile), you’ve shared at least some social data. In return, those services provide a way for you to stay connected with family and friends. If they’re free services (and most are), your privacy assets are the product you’re selling in return for those services. If you’ve ever felt like you don’t get much in return for social networking, what you should be saying is “I am paying too much for this.” Remember: just because you’re not exchanging money doesn’t mean you’re not exchanging value. Consider switching to a paid social network such as Premo Social. I know people who’ve done it. The modest cost of those services allows you to retain additional privacy and, in effect, “pay” less.

Location Data

This is another easy one … especially in the past ten years. Most (if not all) modern cars have GPS trackers. That technology allows automakers to offer emergency services, and car rental agencies the ability to track their cars after you rent them. Many also feature built-in navigation systems. All modern smartphones have the same GPS location functions, allowing Apple, Google, and others to offer driving, transit, and walking directions to wherever you want to go (not to mention sharing that data with other apps). These functions are so common that you can be found almost anywhere you go. Consider learning how to turn off location services when you don’t want to be tracked. Practicing this habit forces providers to ask you to turn them back on, making you aware of just how often your location is being shared. If they want your information, they should make a compelling offer of value. If not, just say no.

Purchase Data

If you’re like most people, you make a lot of purchases from a lot of different providers. Who has that information? Banks, sure. Credit cards, them too. Amazon, yes, but less than you think. How about your corner market, Uber, or Amtrak? You may use a combination of credit cards, checks, online bill-pay, cash, and gift cards. Today’s reality is that no one provider knows your entire purchase history; only you do. Services such as Mint are trying to give you greater visibility into your spending by aggregating as many of these different sources as possible. Even if you don’t sign up for one of these services, it’s worth understanding how they work and the value they bring. When one of them offers to pay you for your data (instead of offering the service for “free”), you’ll be ready to decide.

Financial/Credit Data

Here’s the basic idea behind the credit rating agencies: You’re trading this aggregation of data for the ability to maintain a “credit score.” You can opt out in many cases (or pay in full, in cash, immediately, for absolutely everything), but a credit score is the inevitable consequence of living in a modern economy. (It’s also useful for borrowing money when you need it.) Do you think about your private credit history as an asset to be managed? You should. Frankly, it’s more constructive than feeling powerless when they make a mistake. You wouldn’t let your bank misplace half your paycheck without making a phone call, would you? Well, have you checked your credit report (for free)? You probably should.

Health Data and Biometrics

This is a bigger category than you may realize. Yes, health data includes your medical records (test results, family history, doctor visits, etc.), but it also includes the biometric data captured by your Fitbit, Apple Watch, or smartphone (number of steps, diet choices, blood pressure, heart rate, etc.). In the future, and in some cases today, you will be able to take advantage of your good habits to negotiate lower insurance rates or sell this information to medical innovators. That’s especially valuable if you have an odd genetic trait or family history. But until there are better protections in place, be careful about sending away for a “low-cost” or “free” genetic screening. In the meantime, you can consider signing up for paid pharmaceutical and medical device trials.

Image, Video, and Voice

Pictures of you (or pictures you take), videos of you (or videos you take), and even the sound of your voice have much more value than you realize. Instead of posting photos and videos to a free social network, why not post them to a photo/video sharing network where you could earn some money? Voice is the next generation of human-computer interface, and Silicon Valley is racing to get better at it. Companies are being coy about just how much voice data they’re collecting and analyzing because they’re hoping you’ll give it to them for free, or for the “use of their product.” Make them give you more for it.

Employment

LinkedIn gets your detailed career history and job-hunting desires for free (are you seeing a pattern here yet?). But as more people become “remote,” “virtual,” or “gig” workers, the traditional linear career path will cease to exist. Your job history is more than a series of employers. Your career successes are simply another set of information assets – the entirety of which only you know. Gig job markets may give you a better idea of your true value than a salary benchmark website such as PayScale.com.

Political and Religious Affiliations

Of all the types of private information people have, political and religious information is the type we’re most likely to give away for free. It may seem counterintuitive, or downright wrong, to think of these pieces of information as “assets,” but bear with me. Don’t think about them in terms of money; think in terms of value exchange. Is it worth it to you to support a political cause – and worth the risk of someone not being your friend because they know it? Then by all means, share that information. The same goes for your faith, although in a more complex context depending on the creed.

 

Defining privacy as an asset demands being intentional with your choices.

That word intentional is critical. When we think of “rights” we think of something we were born with – that’s where the word birthright comes from. We value rights, but mostly in an abstract sense, and often not unless we’re threatened with losing one.

By contrast, when we think of “assets” we think of something we acquire, earn, and use for our own benefit. If we don’t, we’re being wasteful. That waste can translate into actual money, yes, but we also can waste our relationships, or time, or our happiness.

In the modern world, no matter what Google or Facebook may tell you, there is no free technology. There is always an exchange of value. Most of the time, your privacy is the most valuable asset in the equation.

But now, you should realize that you are in complete control. You simply need to take it.

 


Categories
Audience Empowerment, Information Management, Long Form Articles, Marketing Ethics, Rehumanizing Consumerism

In America, your digital freedoms are what the tech companies say they are.

What do you really know about how organizations protect your private information?

Perhaps you don’t think about it that much. Your data has become such a commonly traded commodity that most people couldn’t make it through an average day without giving their private information to at least a dozen organizations.

Doubt me?

Let’s examine a simple daily routine. I’ll bet I can count at least 12 times you gave away your private data in return for a product or a service – many of them, perhaps, without realizing it.

  1. You told your voice-enabled Echo to set an alarm for you to wake up 15 minutes early. You just told Amazon when you’re awake (and ready to receive advertising offers).
  2. Over breakfast, you check your “work” email account. You just told your company’s IT department that you’re on the clock.
  3. You decide to take public transit into work, scanning your transit card when you board the bus. You just told the transit authorities you’re a passenger today.
  4. You use your Starbucks card to buy coffee. You told Starbucks what you ordered, and how that’s the same thing you ordered each day for the past week. Perhaps you’re ready for something different?
  5. Oh, by the way, your Starbucks card is loaded on your Google Pay app. Now Google knows your coffee habit as well.
  6. You scan your work ID badge when you enter your building. Now your boss knows you’re on site…and that you’re a few minutes later than usual.
  7. You use a company credit card for lunch. You told the credit card company (and your employer because it’s a corporate card) that you ordered the fish and chips instead of the salad. (Your health benefits administrator might catch a glimpse of that choice as well.)
  8. You spent 15 minutes on your LinkedIn app scrolling through job postings. LinkedIn knows you’re open for new job opportunities…and if you used the company’s WiFi, so does your boss.
  9. You worked late (which your employer knows, by the way, because of your exit badge scan) and missed your bus. You decide to take an Uber. Now Uber knows where you live and work.
  10. At home, you log into Facebook before dinner and post a photo of you and a bottle of wine. That’s the fourth “wine photo” this week. You’ve just told Facebook’s algorithm that you might have a drinking problem. In the meantime, you’re likely to see more alcohol advertising.
  11. You decide you can’t find anything at home to eat and get in your car. Most modern cars are equipped with GPS tracking. If you happen to get into an accident because you were impaired, the car can notify authorities … and if a judge okays it, they might also look at those Facebook “wine” posts.
  12. But let’s assume you’re back home safely and launch Netflix. Now Netflix knows that you spend 2.75 hours per day (on average) watching television.

I could go on, but I think you get the idea. Most people think the only time their “private” data moves around is when they run their credit card. Perhaps they also realize that their smartphone tracks location data. But few people stop to think about the vast and complex digital trail they leave behind every day of their modern lives.

Put more crudely: the story of most people’s digital lives reads like a scandalous tale of unprotected, anonymous sex with as many partners as possible.

 

Your companion on every step of the digital trail

In the (limited) example above, we learned that we share private data with many more organizations than we might have thought. When we share our data, we trust those organizations to use our private information for lawful purposes and to deliver what they promised us. Trust is the key word. Let’s ask ourselves some questions:

  • Do I trust Amazon to send me advertising? Probably, yes. That’s what I signed up for when I bought the device, and even if I don’t think about it much, I know that’s part of the deal. But do I also trust Amazon with my sleep schedule?
  • Do I trust my employer with my email habits, arrival/departure times, web browsing history, and credit card expenses? Yes, I suppose I need to. Those are conditions of employment, and they seem reasonable. But do I trust them not to share my dietary choices during lunch with my healthcare insurer?
  • Do I trust Google (and Starbucks) with my financial information? They aren’t banks, although we often treat them like one.
  • Do I trust Facebook (and Toyota) not to share private social media posts with law enforcement? How well do you know what is “legal” where you live?

Those are hard questions with few easy answers.

For one day, I invite you to write down each time you leave a “digital footprint” – as well as the organization(s) you are trusting with that information. If your situation is anything like the hypothetical example above, you might be surprised how many organizations you’re trusting to protect your interests.

Perhaps you cringed if you wrote down “Yahoo” or “Target” or “The Home Depot.” Here’s the other time people tend to think about organizational data practices: After a breach.

How many millions of Yahoo email addresses (and passwords) were stolen? What about Target? Home Depot? Data breaches have become so common that they blend into the background. Unless your personal financial data was stolen and you were the victim of identity theft, data privacy is sort of like life insurance: You don’t want to think about it, and you sure hope you don’t need to use it.

But unless you are one of the few people who work in the “information” industry (IT analysts, server administrators, data scientists, basically all of modern marketing, etc.), you have to admit that you don’t know how organizations handle your data. You may have suspicions – you may even be a bit jaded – but you don’t have hard facts to decide for yourself whether those organizations deserve your trust.

That’s about to change.

The era of data privacy ignorance is over, and we have GDPR to thank for it. After I’m done helping you understand the European regulation, and what we’ve learned in the past seven (or so) months, you may not sleep as well.

Or, to continue my crude analogy about data hygiene from earlier in the piece, you may start to use “protection.”

 

Now more than ever, it’s important that all of us understand what GDPR really is.

The most important consumer protection milestone since Ralph Nader’s 1965 auto industry exposé Unsafe at Any Speed came and went without much fanfare on May 25, 2018.

The formal name in the European Union is the General Data Protection Regulation, but it’s most commonly known as GDPR. Yes, it generated a blip of attention across the pond, but as with most things that aren’t born in the United States, Americans didn’t pay much attention. Nor did the rest of the world. Thousands of organizations, including Google, Facebook, Amazon, and Apple, all updated their privacy policies. Most of us simply clicked “accept.”

That was a mistake.

Without diving into the bureaucratic language, GDPR is a set of privacy protections for EU citizens. But it’s much more than that. GDPR is a new set of property rights—rights over the data created by all people as they walk through their digital lives: purchase records, locations they visit, surveillance of them, everything.

Specifically, GDPR guarantees:

  1. the right to access your personal data (companies cannot hide it from you);
  2. the right to own your personal data (you can request a copy of it – a process called “data portability” – and take it to some other provider);
  3. the right to restrict how your data may be used, and most importantly,
  4. the right to be forgotten (you can ask to be purged from the data gatherer’s records).

GDPR says that you are more than a collection of data.

GDPR is no less than a statement of basic human dignity.

There’s more to it than that, and the more you learn about the specifics, the easier it is to get lost in the technicalities. For our purposes, let’s see how GDPR works in practice.

Suppose you’re interested in a London production of Hamilton, and purchase tickets online from the theater’s website. On the day of the event, you leave your hotel (that you also booked online) and ride an Uber to the theater. Along the way, you are captured on no fewer than three surveillance cameras in the theater complex. You purchase a drink with your credit card, watch the show, and head back to the hotel after a thrilling performance.

If you had done that in New York, as an American citizen, you would have given no fewer than five organizations (the hotel, Uber, the theater, the concession vendor, and the credit card company) your private information. They can use it, in perpetuity, for whatever purpose they like—usually to remarket other goods and services to you.

(Have you ever escaped one of these mailing lists? I thought not.)

But under GDPR, Londoners have a choice. With one email to each vendor, they can ask to purge all of that data. It would be as if they never attended the show. I’m oversimplifying, of course, especially as it relates to the financial transactions, but let’s pause to think about what a massive change this is. For the first time since the beginning of the internet and the creation of your digital footprint, EU citizens (and to an extent, anyone an EU-based organization touches) have control over a new type of property—their data. Organizations and marketers now must inform them, respect their rights, and up their game if they want the right to use that asset. And because EU citizens cross borders, and because the EU will take action against violators outside its borders, global organizations are forced to comply. In other words, London citizens can ask the New York vendors to purge their data, and those US-based companies will need to oblige them.

(As an aside, I find it ironic that a Brit has more freedom regarding their data than an American going to see a play about a key figure in the American Revolutionary War. But I digress.)

Up to this point, privacy and “data ownership” have been a one-sided battle: your data freedoms are whatever the data gatherers decide they are. The EU just gave its citizens the data equivalent of the Magna Carta.

 

What does GDPR tell us about how well organizations handle our data?

Until GDPR passed, we didn’t really know how well organizations handled private data; we could only guess. Now that we can get hard data, I think it’s fair to ask how well EU (and global) organizations have implemented the changes in data practices and transparency at the heart of GDPR.

Here is the simple answer: Not well.

(Fair warning: What follows is about to get wonky. I’ll do my best John Oliver impression to make it interesting and relevant to all of us. But I don’t have a team of joke writers and graphic artists. You’ll have to make do.)

Let’s talk first about compliance. One of the primary enforcement vehicles you have (and by “you” I mean EU citizens) is what’s called a “Subject Access Request,” or SAR for short. Basically, any organization holding your data must return it to you within 30 days of receiving your formal request. The process for making that request must be easy to find on the organization’s website and easy to complete.

Because the process is formal, journalists have been able to test it, and researchers have been able to collect sufficient quantitative data. In other words, we’re not guessing any longer.

According to one study completed by 451 Research:

  • Only 35% of EU-based companies complied with SARs within the 30-day timeline (Here’s a handy tip: when you look at percentages, always read them the opposite way they are stated. You’ll likely learn something interesting. When we do it here, this means a majority of companies, some 65%, did not comply within 30 days.)
  • About 50% of non-EU based companies complied on the same test (Really? I wouldn’t have guessed that. I love it when research surprises me.)
  • Retailers perform the worst; 76% failed the test (Remember our opposite trick? Only one in four retailers takes respecting your privacy seriously enough to comply with the law.)
  • Financial service firms are some of the best; “only” 50% failed (I worked for a bank; those folks are wound tight. But remember, the “best” is still a failure rate equal to a random coin flip.)
  • The National Pharmacy Association (UK) found a huge spike in patient data breaches after GDPR implementation. In fact, one of the largest fines levied against a GDPR violator was the Portuguese hospital Centro Hospitalar Barreiro Montijo (CHBM). In two separate violations, regulators assessed €400,000 in fines. Financial identity theft will be nothing compared with genetic identity theft. I’d think twice (or three, or four times) about sending away for one of those genetic tests.

Their research also found that while these organizations generally understand the impact and need for GDPR, actual compliance rates are a better measure of leadership priorities. In other words, believe what they do, not what they say. From the basic statistics above, it should come as no surprise that most global firms would fail a GDPR audit.

Let’s make the point simpler: When you interact with most organizations through the course of your day, they are demonstrably not committed to your privacy. They are committed to their goals.

 

Hey wait! That’s not fair!

Large organizations are quick to point out that given the amount of data created compared to the number of violations that occur, they are doing quite well handling your data.

It’s a “reasonable” point of view.

Let’s run a simple thought experiment using our hypothetical person as a guide. This person created a sample of 12 “steps” in a digital “footprint” throughout the day. (The actual number could be much higher, but let’s keep the number conservative.) On planet Earth today, there live roughly 7 billion people, about half of whom lead “digital” lives. Let’s use another conservative number – 3 billion digitally-connected people – and multiply that by the 12 data points in each person’s digital footprint. That’s 36 billion data points per day, or over 13 trillion data points in a given year. That’s not the real number, of course (the real one is much higher), but it illustrates the scale of the data management challenge.
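If you want to check my math, here is the same back-of-the-envelope arithmetic as a few lines of Python (the population and footprint figures are the conservative assumptions above, not measurements):

```python
# Back-of-the-envelope scale of the global digital footprint.
digital_population = 3_000_000_000  # conservative: ~3 billion connected people
points_per_person_per_day = 12      # conservative: 12 "steps" per day

daily_points = digital_population * points_per_person_per_day
yearly_points = daily_points * 365

print(f"{daily_points:,} data points per day")    # 36,000,000,000
print(f"{yearly_points:,} data points per year")  # 13,140,000,000,000 (~13 trillion)
```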

If you consider the number of “mistakes” (breaches, mishandling of data, improper access, etc.) divided by the total number of data points, the proportion of privacy violations is vanishingly small. More than that, they argue that given enough time, organizations will adjust to the new reality of GDPR (at least in the EU), and these incidents will become even less common. C’mon. It’s only been seven months. They’ll get better, right?

I’m suspicious for three reasons.

  1. First, it’s not as if GDPR emerged from nowhere. The law was adopted in 2016, giving global organizations two years to prepare before enforcement began. Since May 2018, they have had more than six months to make adjustments.
  2. Second, the breaches reported are only the breaches we see, not all the breaches there are. Ask any security expert, and they will tell you that the average consumer doesn’t see most of what happens. That’s by design (it’s embarrassing) and by fatigue (if they told you everything in technical detail, you’d stop listening).
  3. Third, large organizational data “scientists” misunderstand the perception of risks involved. To them, an error rate of 0.0001% is so small as to be insignificant. They call people who worry about breaches “foolish” and “irrational,” rolling their eyes at the tiny chance something might happen as a result of a breach. I would argue there is nothing irrational about fearing an outcome that may be unlikely, but would be catastrophic if it were to occur. Identity theft (and genetic theft) both fall into that category. (For more, I would encourage those “scientists” to reread The Black Swan and anything by Kahneman and Tversky. A back-of-the-envelope sketch of the point follows this list.)
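To make the point concrete, here is a minimal expected-loss sketch in Python. The probability and dollar figures are invented for illustration; only the structure of the argument matters:

```python
# Expected loss = probability of the event x cost of the event.
# A tiny probability does not make a risk ignorable if the cost is huge.
p_breach = 0.0001 / 100        # the "0.0001%" error rate scoffed at above
cost_minor = 100               # assumption: a nuisance (spam, a reissued card)
cost_catastrophic = 250_000    # assumption: full identity (or genetic) theft

print(f"Expected loss, minor harm: ${p_breach * cost_minor:.6f}")                # $0.000100
print(f"Expected loss, catastrophic harm: ${p_breach * cost_catastrophic:.2f}")  # $0.25
# Same probability in both lines; only the magnitude of the loss changes.
# Fearing the second case is prudence, not innumeracy.
```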

People worried about privacy breaches are not irrational, but we are being taken for fools.

 

How not to be taken for a fool (anymore).

If you are a modern individual, taking advantage of the bounty of technological wonders that make your life easier, your privacy is an illusion. All of your data is available. You gave it away (in most cases, for free). You are relying on the good intentions of these organizations not to take advantage of you. You’re also relying on those same organizations to protect that data from others with lesser intentions. They are clearly failing. We are clearly fools.

If the results of GDPR audits are any indication, you may not have much time to make changes in your “data hygiene” before you begin to experience negative consequences of a hack or other intrusion. Every time you engage in digital behavior, you’re rolling the dice. Snake eyes might be rare, but they happen. But it’s not realistic for most of us to go “off the grid” and completely sever our ties to the digital world.

We need a realistic answer, and we have one: Decentralization.

The saving grace (for those of us outside China and a few other countries) is that no one organization has more than a sliver of your data. Amazon may have some purchase history, but not all of it. Apple may have information about your app use. Netflix understands your television habits. Your health clinic has some biological data. Google knows where you’ve (physically) been. Toyota knows how you drive. You can’t hide your “adult movie” habit from Firefox.

Many of these organizations wish you would centralize more of your activities. They receive a “greater share of wallet” from each consumer. You (presumably) receive greater incentives and benefits. It’s like the practice of insurance bundling on steroids. But I think you now can see the risks of having all your digital eggs in one basket.

The privacy of any one aspect of your life might be a myth, but only you know the entire picture. Let’s explore some practical steps you can take to keep it that way:

  • Take steps to keep your digital life compartmentalized. If you use an Apple phone, use a Google web browser. Don’t store your health records on your Android phone. Don’t share browser data between devices.
  • Don’t use single login services (such as “login with Facebook”). Yes, it’s easier. And yes, you created a backdoor for Facebook … as well as anyone who hacks your account.
  • Take extreme care before sending away for a genetic test from anyone other than a large, established, medical institution. And if you do, pick one that is not your primary clinic.
  • Learn how to turn off location services, facial recognition, and listening services (Alexa, Siri, Cortana, etc.) when they are not in use.
  • Split your financial life across more than one institution. For example, don’t use a credit card from the same bank that holds your checking account.
  • If you live in the European Union, learn how to file a GDPR request. Here’s a link with some tips.

It seems to me organizations are in a precarious position. If they come clean with their data management practices (and show their warts), they risk a negative perception in the marketplace versus those organizations who choose to be less transparent. But those who choose to be opaque risk catastrophic breaches of trust when the inevitable occurs. It’s a lose-lose.

That’s why I am tempted to advocate for a wider adoption of GDPR-style legislation, worldwide, to level the playing field. In lieu of that, I think there is a market opportunity for white hat hackers to expose privacy violations and issue “trust ratings” alongside “consumer ratings” on every website. (Will organizations pay for that? If they’re doing well, yeah, probably.)

Until that day comes, it may seem like these efforts are an extreme form of paranoia, but for anyone who has suffered identity theft, they are sensible and reasonable. Think of decentralization the same way submarine designers think about sealable bulkheads. If one compartment springs a leak, it doesn’t sink the entire ship.

But more to the point, because you are the only one who holds all the cards, you have power. No “one” can be trusted with all of your data, but perhaps “every” one can be trusted with just a little of it – at least until we have better safeguards.

###

A special note: Lorenza Maria Villa, an Italy-based GDPR Consultant & Data Protection Officer, was kind enough to review a draft of this article and provide feedback. I am in her debt. Grazie!

###

About Jason Voiovich

Jason’s arrival in marketing was doomed from birth. He was born into a family of artists, immigrants, and entrepreneurs. Frankly, it’s lucky he didn’t end up as a circus performer. He’s sure he would have fallen off the tightrope by now. His father was an advertising creative director. One grandfather manufactured the first disposable coffee filters in pre-Castro Cuba. Another grandfather invented the bazooka. Yet another invented Neapolitan ice cream (really!). He was destined to advertise the first disposable ice cream grenade launcher. But the ice cream just kept melting!

He took bizarre ideas like these into the University of Wisconsin, the University of Minnesota, and MIT’s Sloan School of Management. It should surprise no one that they are all embarrassed to have let him in.

These days, instead of trying to invent novelty snack dispensers, Jason has dedicated his career to finding marketing’s north star, refocusing it on building healthy relationships between consumers and businesses, between patients and clinicians, and between citizens and organizations. That’s a tall order in a data-driven world. But it’s crucial, and here’s why: As technology advances, it becomes ordinary and expected. As relationships and trust expand, they become stronger and more resilient. Our next great leaps forward are just as likely to come from advances in humanity as they are advances in technology.

If you care about that mission as well, he invites you to connect with him on LinkedIn. If you’re interested in sharing your research, please take the extra step and reach out to him personally at jasonvoiovich (at) gmail (dot) com. For even more, please visit his blog at https://jasontvoiovich.com/ and sign up for his mailing list for original research, book news, & fresh insights.

Thank you! Gracias! 谢谢!

Your fellow human.

##

Source notes for this article:

IT Pro (UK)

I’ve embedded most of the links in the article itself, but I found myself continually referring to this UK site for a comprehensive run-down of GDPR news. If you’re an IT professional, I’d keep a close eye on their aggregation. They provide helpful links to the original reporting as well as concise summaries of the implications.

Let me put it a different way: Because the “carrots” aren’t working, the EU is bringing out the data privacy “sticks.” That means violators are getting fined. Don’t think you’ll get found out? Well, tell that to the lawyers teaming up with artificial intelligence software to develop automated scanners of privacy policies on your website. I would bet money the nastygrams are on their way.

If you’re a consumer, IT Pro will give you a sense for what’s going on in non-technical language. Fair warning: You may not like it.

Categories
Audience Empowerment Information Management Long Form Articles

“Alexa, play some music” isn’t the only time Amazon is listening to you.

Amazon’s voice recognition software only listens when you say the word “Alexa,” right?

That’s what most Echo and Dot buyers think because that’s what the advertising leads you to believe. As if by magic, your Alexa-enabled device “wakes up” when you say its name. But think about that for a moment. After you say the magic word, your Alexa-enabled device must listen for your request, interpret it, and respond. Just how much does Amazon really listen to inside your home? How much did you really know about how voice technology worked when you unboxed your Alexa-enabled device?

(Fair warning: this is about to get awkward.)

You may have assumed your Echo or Dot listened and responded using the small computer housed inside the device itself. But that doesn’t make sense. The on-board computer simply isn’t powerful enough. And besides, Amazon continues to update the device. It must do this from a centralized server location. That’s the only place where there is enough computing power not only to interpret your request, but also to update Alexa with new “skills” from third-party vendors. That’s how your device now knows how to order a pizza. Amazon needed to partner with Domino’s Pizza (in the United States) to develop that interface.

Now that you know that your voice recordings are being sent via the internet to a centralized location, you may have assumed Amazon would need to store that data for some period of time – for example, to use its Natural Language Processing algorithms to interpret your request for a weather report (or to buy a pizza), gather that information, and then send it back to your device for it to speak the response. The transaction happens so quickly that you assume Amazon would have no reason to keep the recording of your voice any longer than a few seconds. Besides, is that even feasible? Think of how much storage space Amazon would require for all of the audio files. Is there really a database somewhere storing all your requests for weather reports?
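To see why the recordings must leave your home at all, here is a minimal sketch of the round trip in Python. The endpoint, payload shape, and field names are hypothetical stand-ins, not Amazon’s actual API:

```python
# A hypothetical voice-assistant round trip: the device records, the cloud
# interprets. Nothing here is Amazon's real interface; it illustrates the
# architecture described above.
import requests

SPEECH_ENDPOINT = "https://speech.example.com/v1/interpret"  # hypothetical URL

def handle_utterance(audio_bytes: bytes) -> str:
    """Ship a recorded utterance to the cloud; return the text to speak back."""
    # 1. The on-board computer is too weak to interpret speech, so the raw
    #    recording travels to a centralized server...
    response = requests.post(
        SPEECH_ENDPOINT,
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
        timeout=10,
    )
    response.raise_for_status()
    # 2. ...where NLP decodes it and fulfills the request (weather, pizza)...
    # 3. ...and where, as we are about to see, the recording may be retained.
    return response.json()["reply_text"]
```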

Those are good questions.

Imagine for a moment that you were curious about what, precisely, your Amazon Echo or Dot device recorded in your home. Now that you know it’s listening, you’d like to know what it heard. To satisfy that curiosity and put your mind at ease, you ask Amazon to send you a copy of the data your device has collected since you bought it.

After a few weeks, you receive your audio files from Amazon. Imagine your horror as you open the attachments and begin listening to the recordings: A discussion of what to have for dinner, two children arguing over a toy, a woman talking to her partner as she gets into the shower. You weren’t really sure if Amazon would keep recordings at all. And if they did keep recordings, you thought your Echo or Dot recorded only your explicit requests.

But it gets worse. You don’t recognize any of the voices. With equal parts relief and horror, you realize you are listening to someone else’s Echo recordings!

 

As it turns out, all of your assumptions about voice technology were wrong.

This story isn’t a thought experiment. It is precisely what happened when a German citizen requested his data files from Amazon under the European Union’s GDPR regulation. He expected to get a list of the products he had purchased, how he paid, and other commercial profile data Amazon compiled. Unlike my scenario, he wasn’t expecting audio recordings. He didn’t own an Alexa-enabled device. He shouldn’t have been getting any recordings, yet there they were.

According to the story originally reported by the German investigative magazine c’t, Amazon admitted the mistake, citing human error in sending him the wrong file.

(The statement fails to mention if the company notified the person whose data was shared. Also, Amazon was only compelled to comply with the request for data because the requestor was a European Union citizen. If you’re an American, or from anywhere outside the EU, good luck.)

In case any of the impact of the story escaped your notice, let’s take a moment to summarize what this all means in simple terms, shall we?

  1. Your Alexa-enabled device listens to you more than you think it does.
  2. Your Alexa-enabled device not only listens to you, it also records those sounds.
  3. Your Alexa-enabled device sends those recordings to an Amazon data center, where they not only use natural language processing algorithms to decode your speech and complete your request, but they also store those files in a centralized database for future use.
  4. At that data center, Amazon – one of the best data management companies on the planet – has a human process to respond to your data request.
  5. As the investigative reporting shows, this human process is prone to error.

To put it in even simpler terms, if you own an Amazon Alexa-enabled device, Jeff Bezos could be the least creepy person listening to you right now.

Are you okay trading your privacy in your home for a weather report?

Or asked a different way: Is that weather report worth someone at Amazon listening to:

  • an argument with your spouse?
  • your kids playing?
  • a “tough” visit to the bathroom?
  • you and your partner having sex?

Are you okay with a random person (who received your data file by mistake) listening to that? Are you okay with a hacker listening to that? Your health insurance company? The police?

I used to believe this was a “boogieman” issue – that worst-case scenarios like the one described didn’t really happen. I used to believe people who rang the warning bell were at best, premature fools, and at worst, fear-mongering opportunists. I used to believe those things, but I was wrong.

The European Union’s 2018 GDPR consumer protection law cast a light under the bed and showed us all that the boogieman is real. And he’s listening to you right now.

 

The tyranny of menus, and why “voice” is such a big deal.

To understand why companies are investing so much in voice recognition technology, and why they risk invading your privacy, you have to understand how objectively poor today’s “digital” experience is and how it got that way.

Voice is the natural way humans interact with others and their environment. But in the early days of the internet, interactive voice technology was neither advanced enough nor cheap enough to use outside of a few advanced laboratories. The most cost-effective voice technologies of the day were “telephone menu tree” systems that infuriated even the most patient callers.

If a “natural” interface wasn’t ready for the birth of the internet, what was the next best alternative?

Cascading menus.

Borrowed from library science, the menu structure is a software engineer’s dream. It’s logical, orderly, and hierarchical. Unfortunately, menus are not how people naturally interact with information. Menus do not mimic how our brains work. Menus are not easy to use.

Menus are terrible user interfaces for most everyday functions.

As just one example, think about this simple use case: I would like to play Prince’s “1999” on my iPhone. Here are the menu-driven steps I can take:

  1. Unlock the home screen (if I have not authorized biometrics, I need to input a passcode).
  2. Tap the iTunes app to open it.
  3. Tap the “Artists” list.
  4. Scroll to “Prince” and tap the artist name.
  5. Scroll to “1999” and tap the song name.
  6. Adjust the volume as needed.

Six steps. Multiple taps and scrolls. Complex, artificial, robotic.

Or, consider this voice-based alternative:

“Siri, play Prince’s 1999.”

Four words. One voice command step. Simple, natural, intuitive.
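For the programmers in the room, here is a toy rendering of the difference. The menu is a walk down a fixed hierarchy, one choice per level; the voice command is a single utterance mapped to an intent. (The structure and names are illustrative, not any vendor’s real code.)

```python
# Menu navigation vs. voice intent: same goal, very different interaction.
import re

MENU = {"Music": {"Artists": {"Prince": {"1999": "Now playing: 1999"}}}}

def via_menu(path):
    """One tap or scroll per level of the hierarchy."""
    node = MENU
    for choice in path:
        node = node[choice]
    return node

def via_voice(utterance):
    """One utterance, parsed straight to an intent."""
    artist, song = re.match(r"play (\w+)'s (\w+)", utterance, re.I).groups()
    return MENU["Music"]["Artists"][artist][song]

print(via_menu(["Music", "Artists", "Prince", "1999"]))  # four steps
print(via_voice("play Prince's 1999"))                   # one step
```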

Menus are so common, we almost forget how unnatural they are. Menus not only dominate the user interfaces of smartphones, computers, tablets, and websites; we find them everywhere – kiosks, airport terminals, medical devices, automobiles, and home appliances.

Think about it: That infuriating menu in your Toyota Camry, your CPAP machine, or your GE refrigerator is an ugly holdover from the early days of Gopher and ARPANET … just like the QWERTY keyboard is an ugly holdover from the days of mechanical typewriters.

That’s why voice is such a big deal.

Menu interactions may be behavioral (and in many ways superior to opinion-based evidence), but they are still untethered to our true thought processes. Voice interactions are different. I don’t mean the robotic voice commands you give your car; those are simply audio menus, and they are terrible. The true potential of voice is unlocked with Natural Language Processing algorithms that learn to interpret and respond to natural human speech patterns. The best of them are learning our cadence, pitch, tone, accent, and volume – and most importantly, our intent.

In a menu-driven world, our devices aren’t listening to us, they are waiting for an input. However, when a device is listening, it doesn’t need to wait to respond. It can make suggestions to you in real time, just as another person would do in a conversation. That’s the quantum leap voice technology promises: For the first time in human history, machines can truly interact with us.

But as we’ve seen, that’s not how people think voice technology works. Because we are so used to machines waiting for our commands, we’re not conscious that many of them are now listening to us go about our daily lives.

 

I’m not sure “voice” can be trusted. Yet.

Contrary to the image created by advertising of a fully conversational human-computer interface (a la the Star Trek “computer” or “J.A.R.V.I.S.” from Marvel’s Iron Man), if you try to hold a “conversation” today with Alexa, Cortana, Siri, or Google, you will be disappointed.

Most people who use voice technology quickly learn its limitations and adjust their expectations. In fact, most people use Alexa-enabled devices to get weather reports or to play on-demand music. That’s it.

But if voice technology is to improve, its developers need to listen to and analyze many more interactions. Their argument for listening is simple: As consumers get better at interacting with voice technology, the technology will learn and improve. As the technology improves, consumers will expand their use of it. It’s a positive feedback loop that will (eventually) give birth to a real “J.A.R.V.I.S.” And when that happens, you’ll love it.

Perhaps. But until that day comes, you’re giving up your privacy for a weather report.

At this point, it’s fair to argue that we’ve given up our “privacy” for all manner of technological benefits and services. True, but up to this point those technologies operated on your explicit command. No one forces you to use Google Maps. No one forces you to share personal details on Facebook. No one forces you to buy from Amazon.

But voice is different.

Voice is a form of biometric data – something that is uniquely yours. Additionally, voice technology invades your privacy in an insidious way, always listening, always recording, and always learning more. You can see why organizations want voice analysis so desperately. It’s finally able to break into your “inner self” versus relying on your opinions or waiting for your command.

Voice technology is the ultimate behavioral study that you didn’t realize was happening.

Here’s the bottom line: Until organizations demonstrate they can be trusted with our private data, I’m not sure they deserve to have us give it to them for free. What’s more, they are unlikely to stop collecting your data on their own. As we’ve discussed, they need that data to improve their voice technology, and you’re willingly giving it to them. Why would they stop? They simply hope you aren’t paying attention.

It’s time that changed.

Here are a few easy things you can do today to start you on the path to reasserting the privacy in your own home:

  • Think hard about whether a voice-enabled device is right for you. That includes products from Amazon, Google, Apple, Microsoft, and others. Honestly, I don’t care if you choose to use one or not. Just don’t think it’s not listening to you pee. It is.
  • If you do choose to use a voice-enabled device in your home, understand that your home conversations are no longer private. Consider that every statement you make inside the comfort of your home could have the potential to end up in the hands of advertisers, your government, the police, or on Google.
  • Think twice about connecting your voice-enabled device to home automation and security systems. “Smart home” technology is a known source for hacks and privacy intrusions.
  • Search out and read privacy statements before you purchase a voice-enabled device. I’m not saying, “don’t buy it,” I am simply saying, “know what you’re buying.”
  • If you happen to live in the European Union, learn how to request your voice data file. It’s easy. Here’s how.
  • If you are in the United States, send a message to your representative and ask for their stand on privacy issues. That’s easy too. Here’s how.
  • I could go on for many other countries. You get the idea. The notable exception is China. They think about privacy differently.
  • Last, but not least, learn how to turn off listening when you don’t want to be heard.

Sorry, tech companies will not protect your privacy out of the goodness of their hearts. It is up to you, as the consumer, to take action.

Your voice is yours. Keep it that way.

 


 

Categories
Information Management Long Form Articles Rehumanizing Consumerism

A Fun Parable About Leprechauns and Information Manipulation

Ask most people what comes to mind when they hear the words “information manipulation” and you’ll likely get only one response: Censorship. While certainly a form of information manipulation, it is hardly the only one. It’s not even the most effective technique. Censorship’s two cousins—information friction and information flooding—are much more common and vastly more effective. In this article, we’ll travel to China to learn how both information friction and information flooding help the government manage its sprawling bureaucracy. Then we’ll hop a plane back to the United States to see how both techniques are at work in our culture as well. Finally, we will examine the information professionals’ responsibility to recognize information friction and information flooding at work against (or in) their organizations.

##

Information manipulation is a provocative topic. It stirs strong emotions—closing our minds to the underlying methods before we have a chance to discover how it works. That’s unfortunate. Unless we understand information manipulation, we cannot address it. To help explore the issues at play without triggering our natural defense mechanisms, I’ll start with Linda Shute’s version of the story of Clever Tom and the Leprechaun (Scholastic, 1988).

Once upon a time…

…Clever Tom found himself walking in the meadow by his home in rural Ireland when he came across a leprechaun propped up against a fencepost fast asleep. Tom couldn’t believe his eyes! His grandparents had told him stories about the fairies, but he assumed they were just fairy tales, not actual fairies. But this was one in the flesh—an honest to goodness leprechaun!

He knew what that meant. If he could capture the leprechaun, the fairy creature would be obligated to lead him to a buried treasure. For a poor farm boy, this was the chance of a lifetime. Tom seized the opportunity…and the leprechaun. (The leprechaun was sleeping after all. It wasn’t that hard.)

Startled awake, the leprechaun immediately understood his mistake. Sighing, he agreed to lead Tom deep into the forest to the tree, under which, a treasure was buried. Tom was overjoyed. This is what he had always waited for! Tom could finally leave the farm and find adventure in the big city! But in his haste, Tom forgot a shovel and a wheelbarrow. There was no way he could dig up the treasure. Even if he did, there was no way to transport it back to his home.

Tom racked his brain; there had to be an answer. And then, he had it! From his pocket, Tom extracted a bright red ribbon. Tying it around the base of the tree, he knew it would guide him back to this exact spot. Before he released the leprechaun, however, Clever Tom showed why he earned his nickname: he extracted a promise from the fairy (who, being a fairy, could not tell a lie) that the leprechaun would not remove the ribbon from the tree. Satisfied with the positive response, Tom released the leprechaun and raced home to gather his supplies.

When Tom returned, his heart sank. No, the leprechaun had not removed the ribbon. He promised he wouldn’t, after all. But he did tie an identical ribbon on every other tree for miles in every direction. Clever Tom wasn’t the clever one after all.

 

Three Forms of Information Manipulation

This story has several morals, but let’s reimagine those lessons for our purposes. Clever Tom and the Leprechaun is a story about information manipulation in its three forms.

Did the leprechaun prevent people from telling their stories? No. He did not censor the information. Although church officials at the time of the original tale in the 19th century often discouraged these types of tales, the stories nonetheless got out.

Did the leprechaun make the buried treasure difficult to find? Yes! You needed to satisfy a certain set of conditions—and the first was capturing a crafty and quick leprechaun—to learn this information. In this case, Clever Tom lucked out when he found the leprechaun sleeping. This is information friction—deliberately making facts hard to find.

How did the leprechaun prevent Tom from collecting the treasure? He did not remove the ribbon. Instead, he hid the marked tree in plain sight…among thousands of identical ribbons. That’s information flooding—hiding critical facts in an ocean of irrelevant ones.

As it turns out, the leprechaun might have a new career as an official in the Chinese government.

 

The People’s Republic Of China

When most people in Western countries think about the “Chinese” internet, they’ve probably heard of products and services strikingly similar to their U.S. counterparts: Alibaba (Amazon), Xiaomi (Apple or Samsung), or Sina Weibo (Twitter). There are critical differences, of course. Chinese counterparts filling the same market niches serve a far larger group of people. China has four times the number of citizens as the United States. More pointedly, those products and services operate under the aegis of the Chinese government, submitting to its guidelines regarding information monitoring and censorship.

Those who know more about the Chinese internet (often those who have traveled or worked in mainland China) criticize the government for its “crackdowns” on “dissidents” and its rampant censorship of any information unfavorable to the communist party. While there is evidence of these actions, that view is limited in scope.

Does anyone in the United States truly know what happens inside the so-called Great Firewall?

It turns out, someone does. Gary King, Weatherhead University Professor at Harvard University, and his team at the Institute for Quantitative Social Science, are a prolific bunch, focusing their considerable research talent on discovering exactly the answer to that question (gking.harvard.edu). King’s team began with the assumption that the Chinese government copies American internet and technology companies, and then controls (via censorship) their activities to keep a watchful and constant eye on each citizen.

What they discovered casts considerable doubt on our assumptions. Even how they learned it was ingenious. King’s team tracked information posted to popular Chinese social media sites and then watched what happened. The window before a computer or human censor acted on a piece of content might be small, but it was measurable. If they could reverse engineer the censorship priorities, they could better understand the government’s purpose in manipulating information.
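In spirit, the measurement works something like the following sketch. The probabilities are simulated stand-ins, not King’s data; the point is that time-to-removal, observed across many posts, reveals what the censors prioritize:

```python
# Simulated time-to-removal measurement (illustrative only).
import random
from typing import Optional

REMOVAL_ODDS = {            # assumed probabilities, for demonstration
    "criticism of censors": 0.90,
    "collective action": 0.95,
    "ordinary complaint": 0.10,
}

def hours_until_removal(topic: str) -> Optional[float]:
    """Return hours until a post vanished, or None if it survived."""
    if random.random() < REMOVAL_ODDS[topic]:
        return random.uniform(0.5, 24.0)  # fast, but measurably nonzero
    return None

for topic in REMOVAL_ODDS:
    removed = [h for h in (hours_until_removal(topic) for _ in range(1000))
               if h is not None]
    print(f"{topic}: removed {len(removed) / 10:.0f}% of the time")
```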

At the risk of vast oversimplification of a sophisticated approach, here are their conclusions:

  1. Censorship is real, but it’s limited. Yes, some types of content were routinely censored. That content included posts critical of the censors themselves, certain hot-button issues, and “adult” content (yes, exactly what you’re thinking). What surprised them was what was not censored. Criticism of the government itself routinely was left alone. As was most commentary on social issues, and even foreign news. That was surprising. If censorship was not the go-to method, what was it?
  2. Information friction played a larger role. Remember, information friction refers to the process of making access to data just a little bit more difficult. King’s team found that less-desirable information proved slower to access (Westerners will understand this well: virtual private network—or VPN—services often are quite slow). Internet users value speed over most everything else; they will choose the faster source over the slower one most of the time.
  3. As did information flooding. King’s team also found evidence of the so-called 50-cent army, named for the small amount of money its members are paid for each pro-government post they publish on social media. These posts crowd out other content, forcing all other information off the scrolling, timeline-oriented social media feeds we’re all used to. In other words, people could scroll through hundreds of posts to find the one they want, and may do so on occasion, but will not do so consistently. In this way, friction and flooding work together to drown out content the Chinese government deems undesirable…and conversely, promote content it wants people to know.

From this study, King’s team could determine the priorities of the Chinese government for its information-gathering and management machine. In a country of nearly 1.4 billion people, there is no way to proactively monitor all government officials and activities in its vast bureaucracy. It needs information, and social media posts are an excellent way to get it. Some critics counter that the idea of “Big Brother” (a British, not Chinese, idea, by the way) encourages self-censorship. But this defeats the purpose. If people can’t talk, the government won’t know. Hence, outright censorship is rarer than we might think. If the government doesn’t like something, friction and flooding are far more effective ways to manage the situation.

However, there is one thing that will trip the censors: Collective action. King’s team discovered that you can complain all you like—in fact, that’s encouraged—but if you want to organize your friends to act for change, you are likely to be censored in some creative ways. Yes, your post might be removed, but it is more likely to be dead-ended. In other words, you may be able to publish your post…but your friends may never see it. You get to say what you like and “get it off your chest”, but not make changes. That’s the government’s job. Not yours.

Clearly, the Chinese government has a different set of priorities than U.S. or Western governments, but are they really that different? Do information friction and flooding work (or work differently) in the West as well?

 

The United States of America

The United States does censor information. The government can classify certain types of information for security purposes, but those instances are comparatively rare. However, the government does indeed make certain information harder to get (friction) and bury information in a sea of less salient data (flooding). We can see that at work at all levels of government, from local officials requiring citizens to visit their government office during business hours to request information in person, to the highest officials sending myriad news releases (or dozens of late-night Tweets) to obscure important new facts.

So yes, at a certain level, information friction and flooding are part of the Western government toolbox. However, unlike China, the U.S. government faces pushback from both ordinary citizens and organized groups (e.g. the American Civil Liberties Union) who push for open records laws and easier access to information. Many information professionals have submitted a FOIA (Freedom of Information Act) request and are familiar with the process.

If governmental data were all that was in discussion, we could end here. It is not. Unlike in China, information friction and flooding in the West are common techniques of private organizations. We rarely label them as such, and therefore fail to recognize and mitigate their impact. Let’s dissect common techniques to illustrate the impact of information friction and flooding in the United States.

  1. Friction: Catch and Kill. This is a common technique used routinely by tabloid news organizations. When a powerful/wealthy person or organization wants the details of a story “buried,” they may approach a tabloid organization. The tabloid will then approach key subjects with knowledge of the story, offering them payment for exclusive publishing rights. Once the contract is signed (always including a strict non-disclosure clause), the tabloid will exercise its right not to publish the story. Yes, other persons or organizations might have supplemental details to the story, but the tabloids are smart. They “lock up” (or “catch and kill”) the critical sources of information, thereby making stories of embarrassment or wrongdoing much more difficult to investigate.

Other examples of information friction include:

  • Demanding a formal request submission for “free” information,
  • burying detailed webpages in confusing menu structure,
  • using “nofollow” code to stymie search engines,
  • limiting access to information in native languages,
  • and saving text documents as images to prevent easy machine-readability.
  2. Flooding: Ratings Reductions. Celebrities, restaurants, and other service professionals are often the victims of organized groups of people conspiring to “down rate” their product or service on popular social media ratings sites (Amazon, Yelp, Netflix, Uber, eBay, etc.) using a “flood” of negative/one-star reviews. There is nothing explicitly illegal here, although these services try hard to make this technique difficult to execute. However, determined groups often easily circumvent these protections.

Other examples of information flooding include:

  • Releasing large amounts of data at one time (often during a weekend or over a holiday),
  • presenting all pieces of information as equally valuable and of equal weight,
  • following the letter of the law on mandatory disclosures and releasing thousands of pages of poorly formatted documents (also an example of friction…in fact, the two often work well together.)

 

What You Can Do About Information Friction and Information Flooding 

I wrote the original version of this article for an online publication specifically targeting so-called “Information Professionals.” They include legal librarians, academics, data scientists, and research journalists.

Frankly, I was surprised by how surprised they were regarding the sophistication of information manipulation. If the professionals are confused, what hope does the average consumer of information have to sort out what’s happening?

Paradoxically, I think it is easier for consumers to find and counter information manipulation than it is for professionals working inside organizations. Think about it: are you going to risk losing your job by calling out bad behavior? Yes, whistleblowers exist, but the average worker has a mortgage to pay and health insurance to keep (this is a big deal in the United States, foreign readers).

Here are a few ways to know when you could be a victim of information friction (a rough detection sketch follows the list):

  • Are you being asked to submit a formal request for information that should be publicly available by law or statute?
  • Most websites are easy and intuitive to navigate … but when you get to the “disclosures” section, does the navigation turn into a labyrinth of dead links and confusing language?
  • Does your search engine find zero results?
  • Does the information exist, but only behind a login or paywall?
  • Is information available only in one language, when the audience clearly speaks multiple languages and lives in multiple countries?
  • Is the information available, but saved as an un-tagged “picture file” (e.g. a PNG or JPG) to make it difficult for auto-translation or text-recognition tools to work?
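Several of these signals can be checked mechanically. Here is a rough sketch, assuming a hypothetical disclosures URL; it counts “nofollow” directives and compares image count against machine-readable text, two of the friction smells listed above:

```python
# Rough friction scan: nofollow links and image-heavy, text-light pages.
import requests
from html.parser import HTMLParser

class FrictionScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.nofollow_links = 0
        self.images = 0
        self.text_chars = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "nofollow" in (attrs.get("rel") or ""):
            self.nofollow_links += 1
        elif tag == "img":
            self.images += 1

    def handle_data(self, data):
        self.text_chars += len(data.strip())

page = requests.get("https://example.com/disclosures", timeout=10)  # hypothetical
scanner = FrictionScanner()
scanner.feed(page.text)
print(f"nofollow links: {scanner.nofollow_links}")
print(f"images: {scanner.images}, machine-readable characters: {scanner.text_chars}")
# Lots of images and little text is a friction smell, not proof of bad intent.
```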

None of these techniques are necessarily underhanded. There could be good (and legal) reasons for putting up roadblocks to finding information. Just know that when you see them, be careful. They are ways that organizations can claim to be providing you information, but also making it difficult for you to get it. They know that most people won’t try. They can have their cake and eat it too.

Perhaps even more common than information friction is its doppelganger. Here are a few ways to know when you could be a victim of information flooding:

  • Do you need to wade through hundreds (or thousands) of pieces of information to find what you’re looking for?
  • Is information released over a weekend or holiday?
  • Is critical information buried in the middle of a larger data set, not at the front? (In other words, not in journalism’s “inverted pyramid” style?)
  • Does your information come in the form of a flood of late-night tweets or Facebook posts?

Again, organizations could argue that it is not their job to be journalists, nor is it their responsibility to cull out the most important information – potentially embarrassing themselves in the process.

If you see these techniques at work, you may or may not be the target of manipulation. But I think it’s better to understand them, recognize them, and question them.

Caveat emptor.

 

Not A New Story

If all of this seems frustrating, take heart. We’ve been struggling with friction and flooding for a long time. Linda Shute retold an earlier story, The Field of the Boliauns, originally written as part of an anthology of Celtic fairy tales by Joseph Jacobs in 1892. (In the original story, the leprechaun hadn’t fallen asleep, he had passed out. Stories are always true to the morals of their times, and the late 19th century was the heyday of the temperance movement.) Jacobs based his work on earlier oral tradition dating back to medieval Ireland and England. Those tales made it across the English Channel by way of Roman legionaries recounting stories of Julius Caesar and his contemporaries in the Roman Senate in the first century before the common era.

In other words, information friction and information flooding are nothing new. Recognizing and mitigating their impacts has been a game of cat and mouse we’ve been playing for the better part of two millennia. That’s not to say we should give up the struggle, but rather that we’re in good historical company.

 


 

##

Note: A version of this article was originally published on Online Searcher in their September/October 2018 edition.

Categories
Audience Empowerment Information Management Long Form Articles Rehumanizing Consumerism

Using Google Maps costs more than you think.

Your creepy stalker ex-boyfriend knows you just left the gym. I’m sure he’s over you.

Google Maps is free, isn’t it?

It seems like a question with an obvious answer, doesn’t it? Of course, Google Maps is free. I’ve never been asked to enter my credit card to look up a new address. There is no subscription plan. There is no pay wall.

But just because you are not exchanging money to use Google Maps does not mean you are not exchanging value. I intend to show you just how much. You might not like it.

We’ll use Google Maps to help us walk through a basic use case and better understand the value exchange, but there are plenty of other examples. Let’s begin.

  1. You’re traveling from Minneapolis to Omaha (a long drive, by the way). By the time you arrive, you’re likely to want something to eat. You open the Google Maps app, search for “Omaha, Nebraska,” and then search for “nearby restaurants.”
  2. If you haven’t given the Google Maps app on your phone permission to use your location information, it will ask you for it. It’s obvious, isn’t it? But think about that for a moment. Google Maps doesn’t need to know where you are to show you restaurants in Omaha. There are no “terms and conditions” to read. There is only an “accept” button. You click it.
  3. Google Maps shows you a list of restaurants, reviews, and distances. Remember, you gave it permission to know where you are right now. That’s cool, huh? Assuming you find a restaurant you like, Google Maps can give you turn-by-turn driving directions with live traffic updates … and with connections to some other apps, and based on your estimated arrival time, even put your name on the wait list for a table so that you can walk right in.

Pretty amazing, isn’t it?

For many of us, this use case is so routine that it’s almost unremarkable. But for anyone used to car trips with the family as a kid in the 1980s (and the inevitable and horrifying gas station restaurant food), Google Maps delivers something close to magic.

In fact, the experience is so magical that we often don’t think beyond that simple interaction.

Let’s do that, shall we?

 

Here’s the part of the value exchange that you might not see.

  1. What restaurants did Google Maps show you? Unless you searched for a specific restaurant, you likely saw only those restaurants that paid for contextual advertising on that search. (At the very least, you saw the paid listings first, and on a small mobile screen, you may not have scrolled past them.) No, a human being didn’t make the decision to show you one restaurant versus another. An advertising algorithm did. Someone at a “top result” restaurant decided they wanted to appear when you typed in the “restaurants in Omaha” search.
  2. To run that advertising algorithm, Google needed to aggregate historical user data so that the restaurant would know how much to pay to advertise against those searches. The advertiser does not see your individual data when you run your search (nor will they at any time), but Google uses that data to judge demand for any specific search. That’s how Google makes the vast majority of its revenue: advertising. By using Google Maps, you are improving that advertising engine with both your individual and aggregate data.
  3. In a similar way, Google uses your data to plot driving/transit/footpath options to your destination. At the aggregate level, Google uses that data to generate live traffic reports. There’s no Google Helicopter flying over Omaha as traffic reporters did in the 1980s. Their solution is more complicated, but it’s quite a bit safer and more effective: If Google notices a lot of users on the highway, and also notices that they are all moving slowly, it adjusts its arrival time estimates. (A toy sketch of that idea follows this list.)
  4. All of Google’s products and services interconnect. That’s why you’ll see Google Reviews for those restaurants. (Actually, Google sometimes gets into antitrust trouble for not showing you competitors’ ratings systems.) Most people aren’t going to stop searching for a restaurant to submit a public comment to a regulator complaining that they’re not receiving Yelp reviews alongside the Google Reviews. People are busy. It’s understandable. But part of the value you’ve just exchanged is the ability for Google to lock out an alternative service and keep that revenue for itself.
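Here is that crowdsourced-traffic idea as a toy Python sketch. The function and figures are illustrative assumptions; Google’s real pipeline is vastly more sophisticated:

```python
# Toy crowdsourced ETA: slow pings on a road segment stretch the estimate.
def eta_minutes(segment_km: float, free_flow_kmh: float,
                ping_speeds_kmh: list) -> float:
    """Estimate travel time from observed user speeds on one segment."""
    if ping_speeds_kmh:
        observed = sum(ping_speeds_kmh) / len(ping_speeds_kmh)
        speed = min(free_flow_kmh, observed)  # congestion only slows you down
    else:
        speed = free_flow_kmh                 # no pings: assume free flow
    return segment_km / speed * 60

# Ten users crawling at ~20 km/h on a 5 km stretch of highway:
print(eta_minutes(5, 110, [18, 22, 19, 21, 20, 23, 17, 20, 19, 21]))  # ~15 min
print(eta_minutes(5, 110, []))                                        # ~2.7 min
```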

Okay, so you’ve exchanged more value than you thought for the use of Google Maps, but there’s still no money out of pocket for you. You’re still winning, right?

In fact, most of you might agree that more contextual advertising is better advertising. Additionally, you might understand why Google needs to collect individualized data so that it can aggregate it and deliver useful services back to you. What’s more, someone needs to pay for all this, and you’re glad it’s not you. Advertising, especially if it’s good advertising, is a pretty small price to pay. And the anti-competitive concerns? They’re a bit beyond your pay grade. Other people will take care of that stuff. You’re hungry. And Google Maps solved your problem.

At this point, I can’t disagree. The logic holds up. But how about we take just one more step? After we’re done, I want you to ask yourself if you’re still comfortable using Google Maps.

 

There’s a bigger market for “you are here” than you thought.

Here’s the first thing to understand about most location apps: Once you give them permission to track your location, they’ve got it until you turn it off. That means when you clicked “Accept” that one time, most apps have the authority (and ability) to collect information about you while you go about other activities. In fact, that one app may have shared location data with other apps … again, all with your “permission.”

So. What happens next?

If you’re like most people (me included, until recently) the answer was I don’t know.

Last week, the New York Times answered that question. They certainly weren’t the first, but they absolutely have the largest reach, and their journalists know how to tell a good story. You can read the full article for yourself, but let me quote directly the crux of their findings:

At least 75 companies receive anonymous, precise location data from apps whose users enable location services to get local news and weather or other information, The Times found. Several of those businesses claim to track up to 200 million mobile devices in the United States — about half those in use last year. The database reviewed by The Times — a sample of information gathered in 2017 and held by one company — reveals people’s travels in startling detail, accurate to within a few yards and in some cases updated more than 14,000 times a day.

These companies sell, use or analyze the data to cater to advertisers, retail outlets and even hedge funds seeking insights into consumer behavior. It’s a hot market, with sales of location-targeted advertising reaching an estimated $21 billion this year. IBM has gotten into the industry, with its purchase of the Weather Channel’s apps. The social network Foursquare remade itself as a location marketing company. Prominent investors in location start-ups include Goldman Sachs and Peter Thiel, the PayPal co-founder.
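It’s worth pausing on “updated more than 14,000 times a day.” Assuming the updates are spread evenly across a day, that’s a location fix roughly every six seconds:

```python
# 14,000 location updates per day, expressed as an interval.
seconds_per_day = 24 * 60 * 60    # 86,400
print(seconds_per_day / 14_000)   # ≈ 6.2 seconds between fixes
```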

Unlike the New York Times’ sometimes-justified, sometimes-not criticism of big business, I don’t see a “big business conspiracy” around every corner. Most business people are people too – they’re your colleagues, siblings, parents, and friends. They’re also customers and users of these products. Most businesses simply are trying to earn an honest profit providing a reasonable service in an increasingly competitive world.

But the fact remains: now that these apps have your permission (and there are a lot of apps that do this), and they have the location data your phone generates, they create something of value. That value creation is like crack cocaine to the average marketing VP, chief executive, or controller. In many ways, location data is some of the best data to have because it is not based on your opinions (likes, shares, comments) but on your behavior. As our grandparents taught us: Actions speak louder than words.

And wow are we ever speaking with our actions. You may wonder if you are “interesting enough” to warrant deeper interest from Google (I did). But when you consider the vast array of potential interested parties, you can see how you just became the most interesting person in the world. Let’s look at just a few of the reasons other parties are interested in your location data:

  • Retailers (and investors in retail operations) are interested in actual foot traffic, not “estimates” of foot traffic. By merging mobile phone data with real-time foot traffic, retailers know the quality of potential customers as well as the quantity of them.
  • Employers love location data. It helps them reconfigure building layouts to optimize placement of both individuals and teams. On the darker side, it also allows employers to know how often you use the restroom, if you and a colleague are having, ahem, a relationship, or how long you spend tethered to your desk.
  • The days of ambulance-chasing lawyers are long gone. With location data, they can send ads to any mobile phone in the emergency room of your local hospital.
  • Law enforcement is a special case. Police can subpoena your mobile phone records for a variety of legal reasons, but usually only with probable cause. With the technology available to advertisers and others, however, law enforcement can watch known high-crime areas and merge that data with publicly available mobile phone data – data that you freely provide.

That’s just a few. I could go on.

I’ll bet even with those few examples, you’re getting a sense that the market for “you are here” is broader than you ever thought possible. Yes, you’re getting a “free” service, but you’re also trading away more than you bargained for.

Even at this point, I can see an argument that goes like this: Well, this is aggregated data, right? If it’s aggregated with millions of other people (or at least dozens of others), picking me out of a crowd is difficult. I can still blend in, right?

 

Russian hackers, Nigerian princes, and your stalker ex-boyfriend can pick you out of the crowd.

Read this and let it sink in: You are not anonymous.

Just because Google’s data center is secure, and its partners are bound by its terms of service, does not mean that either it or its partners are invulnerable to a coordinated hacking attempt. As you may recall, it wasn’t Target Corporation’s IT department that caused its massive 2013 data breach; it was a third-party contractor with lax controls.

Just because you’re a United States citizen doesn’t mean the rules are the same in other countries. Frequent visitors to Russia are “pretty sure” they’re being tracked. Frequent visitors to China are “absolutely sure” they’re being tracked. And once those governments have your unique device identifier, they can identify you when you return home.

Just because Google (or Facebook, or whomever) has a “policy” about data privacy doesn’t mean it will stay that way. Silicon Valley’s spinning moral compass doesn’t give me a warm fuzzy. Google might be in the public eye, but what about that concert app you downloaded? Did you read its policy? Probably not. Do you think that comparatively tiny company cares as much as Google about privacy? Probably not. Do you think it has the resources Google does to protect your data? Probably not. Put even more simply: Policies are policies, not laws.

Here’s the most important part: just because one source of data is aggregated doesn’t mean there isn’t individual data in the public record. This is the real beauty of the New York Times reporting. With a few simple steps, their journalists and technicians – with no supercomputing power or complex artificial intelligence – could link aggregated user behaviors to public databases (housing, political donations, etc.) and reverse engineer individual people from the aggregated data.
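To make the trick concrete, here’s a toy sketch in Python. Every coordinate, device ID, and record in it is invented; the point is how little machinery it takes. Find where an “anonymous” device spends its nights, then look that spot up in a public record:

    from collections import Counter

    # Hypothetical pings for "anonymous" device a91f: (lat, lon, hour-of-day)
    pings = [
        (44.9537, -93.0900, 23), (44.9537, -93.0900, 2),
        (44.9537, -93.0900, 3),  (44.9778, -93.2650, 10),
        (44.9778, -93.2650, 14),
    ]

    # Hypothetical public property records (county assessor data, say).
    property_records = {(44.9537, -93.0900): "J. Smith, 123 Elm St"}

    # Where does the device sleep? Take the most common overnight location.
    overnight = Counter(
        (lat, lon) for lat, lon, hour in pings if hour >= 22 or hour <= 5
    )
    home, _ = overnight.most_common(1)[0]

    print("Device a91f probably belongs to:", property_records.get(home))

No supercomputer required. It’s just a join between two datasets.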

Well, fuck.

 

Let’s ask ourselves some tough questions about location services, shall we?

  1. Is using location services worth the invasion of your privacy?
  2. Is using location services worth you getting fired from your job?
  3. Is using location services worth getting your private relationship details exposed?
  4. Is using location services worth getting financial data stolen?
  5. Is using location services worth being stalked?
  6. Is using location services worth your children being followed to school?

I wish this were simply hyperbole or an academic exercise. I wish I could believe tech CEOs when they tell us that “everybody wins” when we all use these location-based services. I, for one, am tired of “winning” like that.

I wonder what happens when consumers start to think that they’re paying too much for “free” services. I wonder what happens to tech company valuations. I wonder what happens when consumers start opting out.

 

If you’re not ready to “opt out” just yet, here are a few things you can do to protect yourself:

  • Learn how to turn off location services. Here is how you do it on Apple and Android.
  • Clean up unused apps on your phone. If you haven’t opened an app in a year, delete it and all its data. That won’t prevent it from using data it has already collected, but it will prevent you from providing more. And more apps than you think collect location data.
  • Buy a “Prince box” for your phone. What’s a “Prince box,” you say? When I visited Paisley Park (the home and studio of the late artist), I could take my phone…but I needed to carry it around in a locked, RFID-proof case. You can buy one too. Here’s an option.
  • If all else fails, turn your phone off when you don’t want to be monitored. Don’t simply put it to sleep.
  • Start signing up for services that allow you to monetize your data. Most of these services are not yet ready for prime time, but they allow you to take some control over your data and, more importantly, begin to train consumers to treat their data as an asset to be monetized. I like this one.

 

Worried about your customers getting wise to you? Here are some things you can do as a business to respect your consumers’ rights.

Finding a tech workaround isn’t the answer; it will simply erode trust and postpone the day of reckoning. Forward-thinking companies (Apple, for one) are already deploying the techniques below to stay on the right side of all of us:

  • Be transparent. Tell people why they are seeing an advertisement, why they need to share their location, and for how long you need it. Instead of leaving location services on, turn tracking off automatically once that explicit need is met.
  • Allow people to rate, in real time, the quality of what they’re seeing and the service you provide. You’ll have better data on your service that you can use to improve it.
  • Give people the option to pay you for your service easily and securely. YouTube Red (aka YouTube Premium) does this to allow consumers to opt out of ads. (Despite that, they still track you, so I call it a half-right idea. I’d pay for YouTube Platinum for them to avoid tracking me altogether.)
  • Destroy identifying individualized data as it is created (see the sketch after this list). If you never have it, you’re never tempted to abuse it, and it can never fall into the wrong hands (either a hacker’s or an acquirer’s).
  • Default to an “opt in” versus “opt out” philosophy. It’s better for you anyway; you’ll know that your customers are truly interested in your service. (Bluntly, I wish this worked better than it does. Email (CAN-SPAM) and the National Do Not Call Registry do this already, although they haven’t reduced my inbox spam, nor have they reduced junk calls to my mobile phone.)
  • Use your clout to lobby for GDPR-style legislation in the United States. It’s not perfect, but it has a place, and it’s going in the right direction.
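On the “destroy it as it is created” point, here’s a minimal sketch of what that can look like in code. The event fields and helper are hypothetical; the idea is simply that raw identifiers get stripped or one-way hashed at ingestion, so they never land in storage.

    import hashlib
    import secrets

    # Rotate this salt periodically and never store it with the data;
    # without it, the hashed tokens below can't be joined back to real IDs.
    SESSION_SALT = secrets.token_bytes(16)

    def sanitize_event(event: dict) -> dict:
        """Return an analytics event with identifying fields destroyed."""
        device_id = event.pop("device_id")   # remove the raw identifier
        event.pop("precise_lat", None)       # drop precise coordinates
        event.pop("precise_lon", None)
        token = hashlib.sha256(SESSION_SALT + device_id.encode()).hexdigest()
        # Keep only a short, salted, one-way token for de-duplication.
        return {**event, "device_token": token[:12]}

    raw = {"device_id": "A1B2-C3D4", "precise_lat": 44.95,
           "precise_lon": -93.09, "event": "store_visit"}
    print(sanitize_event(raw))  # no device_id, no precise location left

If you only ever store the salted token, there is nothing for a hacker (or an acquirer) to steal.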

Consumers are getting angry. They may not be able to put their finger on it, but consumer advocates, journalists at the New York Times, and writers and researchers like me are ripping back the curtain and directing consumer rage where it belongs.

If you’re a smart consumer, you’ll protect yourself and take action. It is only a matter of time before the abuse of location services puts your life and livelihood at risk.

If you’re a smart organization, you’ll get in front of this. Because in the not-too-distant future, “treating people as you would like to be treated” might be your most important product.

 

###

About Jason Voiovich

I am a recovering marketing and advertising executive on a mission to rehumanize the relationship between consumers and businesses, between patients and clinicians, and between citizens and organizations. That’s a tall order in a data-driven world. But it’s crucial, and here’s why: As technology advances, it becomes ordinary and expected. As relationships and trust expand, they become stronger and more resilient. Our next great leaps forward are just as likely to come from advances in humanity as they are advances in technology.

If you care about that mission as well, I invite you to connect with me on LinkedIn. If you’re interested in sharing your research, please take the extra step and reach out to me personally at jasonvoiovich (at) gmail (dot) com. For even more, please visit my blog at https://jasontvoiovich.com/ and sign up for my mailing list for original research, book news, & fresh insights.

Thank you! Gracias! 谢谢!

Your fellow human.

Source notes for this article:

 

How The Times Analyzed Location Tracking Companies

Want to know how the New York Times crunched the data to pull individual people out of aggregated data? This article walks you through the process. It’s transparent, and more than a little creepy.

 

Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret

This is the article itself. Again, it’s not the first, but it’s the best of all the coverage I reviewed. It’s worth the 20-30 minutes it will take you to read it carefully.

 

Finally, I think it’s only fair to provide you direct links to Google’s policies and safety tips. Here are a couple of good starting points:

https://safety.google/privacy/ads-and-data/

https://policies.google.com/privacy#infosharing

I picked on Google, but please understand, they are at least open about it. Additionally, they are highly visible. There are plenty of people (like me) who will pounce if they make a change. But the thousands of lesser-known apps from no-name developers? Good luck.

Categories
Audience Empowerment Information Management Long Form Articles Rehumanizing Consumerism

The Bullshit Algorithm

If you use Swiffer WetJet, you are a puppy murderer.

But wait, you say. How could I? P&G would never lie to me about the safety of their Swiffer® WetJet™?! But of course. All of those “chemicals.” How could I be so stupid!

Yep. You’re a cold-blooded murderer. Isn’t it lucky that you can tell your story on Facebook? Now, no one else will need to suffer what your family has suffered. You can warn us. Why don’t you go ahead?

Well, okay. I’ll tell you…

I recently had a neighbor who had to have their 5-year old German Shepherd dog put down due to liver failure. The dog was completely healthy until a few weeks ago, so they had a necropsy done to see what the cause was. The liver levels were unbelievable, as if the dog had ingested poison of some kind. The dog is kept inside, and when he’s outside, someone’s with him, so the idea of him getting into something unknown was hard to believe. My neighbor started going through all the items in the house. When he got to the Swiffer Wetjet, he noticed, in very tiny print, a warning which stated “may be harmful to small children and animals.” He called the company to ask what the contents of the cleaning agent are and was astounded to find out that antifreeze is one of the ingredients.(actually he was told it’s a compound which is one molecule away from anitfreeze).Therefore, just by the dog walking on the floor cleaned with the solution, then licking it’s own paws, and the dog eating from its dishes which were kept on the kitchen floor cleaned with this product, it ingested enough of the solution to destroy its liver.

Soon after his dog’s death, his housekeepers’ two cats also died of liver failure. They both used the Swiffer Wetjet for quick cleanups on their floors. Necropsies weren’t done on the cats, so they couldn’t file a lawsuit, but he asked that we spread the word to as many people as possible so they don’t lose their animals.

Source: Snopes.com 

Of course, this is a hoax. You may have seen it make the rounds last year…perhaps as recently as a few months ago. But doesn’t it sound convincing? It should. As a professional persuader, I can tell you why. This story has lots of goodies (17 in fact, but more on that later). Let’s recap the top four:

  1. The helpless and innocent subject: Who is more innocent than the family dog? He doesn’t know better. It’s your job as the owner to protect him from harm, and you failed.
  2. The details: It wasn’t just “a dog,” it was a “5-year-old German Shepherd.” It wasn’t just that the dog died, it was the sequence of events: walking on the floor, licking his paws, eating from dishes kept on the floor.
  3. Seemingly scientific facts: The writer was brilliant here. If he or she had given the chemical formula, most people would have buzzed right by it. But “one molecule away from antifreeze” … now that’s scary!
  4. Corroborating evidence: The housekeeper’s cats also died under similar circumstances (liver failure plus Swiffer WetJet usage). Just in case you thought this might be an isolated incident, your pet is in danger too!

If I were trying to damage the sales of the Swiffer WetJet product line, I could hardly do better. Yes, stories like this one made the rounds before the rise of Facebook, but their impact was much more limited. In the time it took misinformation to spread, the product owner had time to craft and spread its own rebuttal. If the situation were serious enough, it could run advertisements. It could update its product packaging. It had options.

But today, stories like this one “go viral” so quickly and with such ferocity that P&G had no time to mount a defense. Yes, Snopes will (eventually) debunk the story, but that can take weeks. By then, sales suffer, and consumer trust erodes.

Isn’t it funny? Wasn’t the promise of data-driven search engine and social media algorithms that they would amplify the truth and protect us from misinformation by tapping the wisdom of crowds? The fact is that they do not. And cannot. Because that is not what they are designed to do. At the heart of every social media algorithm is a fatal flaw that values persuasion over facts.

Social media platforms (as well as search engines) are not designed for truth. They are designed for popularity. They are bullshit engines.

To understand how we got here, we need to take a step back and understand bullshit.

“You lied to me.” “It wasn’t lies. It was just bullshit.”
My dad loved this movie. This is a classic scene.

Best. Academic. Paper. Ever.

Harry G. Frankfurt, professor of philosophy at Princeton University, asked the obvious question in 2005:

“One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. Each of us contributes his share. But we tend to take the situation for granted. Most people are rather confident of their ability to recognize bullshit and to avoid being taken in by it. So the phenomenon has not aroused much deliberate concern, or attracted much sustained inquiry. In consequence, we have no clear understanding of what bullshit is, why there is so much of it, or what functions it serves.”

One of the oddest things about this paper (and I highly recommend you read the entire 20 pages) is its thorough disassembly of a topic everybody knows exists but no one seems to understand.

Frankfurt made bullshit a technical term.

Here’s the crux of it: Most of us tend to think of the world in terms of facts and fictions, truths and lies. As we become more sophisticated, we understand people can have different perceptions (read: opinions) about the value truth brings or the harm lies cause. However, those opinions exist on a different level than the “objective foundation” of fact and fiction.

Professional persuaders know this is not the way the world works.

The purpose of much of the communication we see – between people in our private lives, in our consumer relationships, and in the political sphere – is not to illuminate the truth, but rather to persuade. In fact, a mix of truths, half-truths, and outright lies is a great way to do it. Real facts are messy, incomplete, and often contradict each other. Outright lies can be fact-checked and objectively disproven. On the other hand, a skilled bullshitter can weave a tidy and convincing story from a mix of facts and fictions. To the bullshitter, facts are still facts, but their value lies not in their factual basis but in their ability to persuade. A half-truth or a lie might do just as well. The entire spectrum is at the bullshitter’s disposal, where his non-bullshitting competitor has only the facts. It’s not a fair fight.

Bullshit, aka “truthiness.”

Frankfurt makes the case that bullshit has a place in everyday life. Without it, we would be paralyzed with uncertainty, unable to make the simplest decisions or tend to the most basic relationship tasks. (Are you really going to tell your husband his haircut looks stupid?) Bullshit is as natural as…well…bullshit.

So, if bullshit is natural, and perhaps even necessary, where’s the problem? We’ve been dealing with bullshit since the instant we developed culture and language. What’s different now?

The Search Engine, Social Media, Data-Driven (Bull)shit Storm

The internet generally, and social media specifically, is not a truth platform; it is a popularity platform. That might come as a major surprise to many of you, or as blindingly obvious, but it’s important to unpack how these algorithms work so that we can understand the depth of the bullshit problem.

The bullshitty foundation of the internet as we know it: Search and Social

At a high level, how does a search engine algorithm work? The basic concept is authority. In short, that means how much more credible one source is than another. In some cases, that’s obvious: Your state’s department of motor vehicles website is probably a more authoritative source for driver licensing procedures than your cousin’s floral arrangement blog. But it’s not humans who make those judgments. Algorithms need to do that work, for obvious reasons of scope and scale.

Those non-human algorithms need clear rules for determining credibility. One of the most important rules is simple: How many other websites link back to a given website for a particular search term or function? Backlinks are an important proxy for credibility. Yes, it’s more complicated than that (Google, Bing, and others strip out obvious gaming of the system), but at its heart, “authority” equals “popularity” – not truth, and not facts.

In other words, your cousin’s floral blog could become a leading authority on driver licensing with enough time and effort … and enough others agreeing that it is an authoritative source by linking to it in the context of that search term. This is the “wisdom of crowds” idea in a nutshell – the ultimate authority rests in shared agreement about “truth,” not actual truth based on objective facts.
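To see just how mechanical “authority” is, here’s a toy, PageRank-style scoring loop in Python. The pages, links, and constants are all invented, and real engines layer hundreds of signals on top, but the skeleton is the same:

    # A radically simplified link-based authority loop. Note what's
    # missing from every line: any notion of truth.
    links = {
        "dmv.example.gov":    ["floralblog.example"],
        "floralblog.example": [],
        "newsportal.example": ["floralblog.example", "dmv.example.gov"],
        "forum.example":      ["floralblog.example"],
    }

    DAMPING = 0.85
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}

    for _ in range(50):  # iterate until the scores settle
        new_rank = {}
        for page in pages:
            # Authority flows in from every page that links here.
            incoming = sum(rank[src] / len(out)
                           for src, out in links.items() if page in out)
            new_rank[page] = (1 - DAMPING) / len(pages) + DAMPING * incoming
        rank = new_rank

    for page in sorted(rank, key=rank.get, reverse=True):
        print(f"{rank[page]:.3f}  {page}")

The floral blog tops the list because three sites point at it, not because it knows anything about driver’s licenses.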

Let’s translate: Sometimes search engines are right. Sometimes they’re wrong. But they always represent persuasion and popularity. Search engines are bullshit engines.

Let’s translate again: That little search window on your computer that you rely on to find facts is feeding you bullshit. Remember, true bullshit has some fact and some fiction, but it’s all persuasion. So yes, you’re getting some facts, some of the time. But just as often you’re getting hoodwinked.

If a search engine is a bullshit engine, social media is a bullshit rocket.

Social media algorithms completely dispense with the idea of truth. They are designed to enhance social connections. What drives a social media algorithm is something more than the authority that drives a search engine (although that still matters). The most important driver is engagement, aka social proof: likes, clicks, shares, comments, reposts, and so on.

The higher the engagement, the more authority the post (and its author) have, especially when a post “goes viral.” All that means is that engagement gains enough attention, fast enough, to feed on itself, bending the curve exponentially.

Most of the time, what goes viral are puppy videos, prom dances, pratfalls, and pornography. Mostly harmless, but let’s ignore those for now.

Every social media algorithm – every one of them – uses some proprietary combination of those factors (along with advertising dollars) to determine what becomes “popular” consistently. It’s not hard to spot. With a little training, you can do it too.
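As a sketch of the principle (and emphatically not any platform’s actual, proprietary formula), an engagement-driven ranker can be this simple. The weights are invented:

    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        likes: int
        comments: int
        shares: int

    def engagement_score(post: Post) -> float:
        # Shares spread content to new feeds, so weight them most
        # heavily. These weights are illustrative, not any platform's.
        return post.likes * 1.0 + post.comments * 2.0 + post.shares * 5.0

    feed = [
        Post("Eat well, sleep, exercise", likes=40, comments=5, shares=2),
        Post("Swiffer WetJet killed my neighbor's dog!", likes=900,
             comments=250, shares=400),
    ]

    # Sort the feed. Nothing here asks whether a post is true.
    for post in sorted(feed, key=engagement_score, reverse=True):
        print(f"{engagement_score(post):7.0f}  {post.title}")

Run it and the hoax outranks the health advice more than fiftyfold. Truth never enters the calculation.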

Here’s your first lesson: What story seems more likely to go viral?

  1. Sustained wellness comes from eating a balanced diet of healthy food, lowering stress, and exercising regularly.
  2. Drinking bleach is the most effective way to stay hydrated during the summer months.
  3. You can lose up to five pounds in the first two days using a clove and pomegranate enema.

The first is obviously true, but boring. No chance for virality there. The second is just as obviously a lie. (Please don’t try that at home. You’ll die.) The third is pure bullshit, and you can see immediately why it’s so compelling. It seems like it could have some truth to it. That one has potential!

Let this sink in: The two most common ways you learn about your world, the search engine and the social media timeline, are designed from the ground up to feed you bullshit.

It gets worse. You aren’t as good as you think you are at detecting bullshit.

Sure. A clove and pomegranate enema seems like bullshit (although I can think of stranger things). If you try one, I think you deserve what you get. But for most of us, examples like that one make us feel pretty confident we can pick bullshit out of our social media feed and safely ignore it.

We’re wrong.

To paraphrase a more famous phrase: You may be able to catch all of the bullshit some of the time, and you will catch some of the bullshit all of the time, but you will never catch all of the bullshit all of the time.

Your social media feed scrolls by too quickly. There are too many stories. There is not enough time. No one has the energy to fact check every story that floats by or every search result that finds its way to page one. What’s worse, until today, many of you believed search engines and social media platforms somehow prioritized the truth over bullshit. They do not. They prioritize authority and popularity – a bullshitter’s two favorite foods.

The average person sees thousands of search engine results and social media posts each day. You physically cannot fact check them all. No one can. It is a virtual certainty you have been bullshitted today. And the worst part? You don’t know which ones they were.

If we’re going to be continually drenched in a bull shitstorm, we could use an umbrella.

I think it’s only fair we build our own bullshit algorithm.

To the uninitiated, an algorithm seems like some bizarre technical concept that only engineers and programmers can understand – that you need to learn special language skills or grow a thick beard. You don’t. An algorithm is super easy: It’s a set of rules. Let’s write a simple one right now, shall we?

IF the weather outside EQUALS “raining”,

THEN pack an umbrella.

Yep. That’s it. That’s all there is to it. In fact, algorithms are all around you. All recipes are algorithms. So is (essentially) all of mathematics. You are so familiar with algorithms that you write, perform, and revise them every day without thinking about them. And yes, software algorithms (like those designed to drive an autonomous car) are super complicated. But that doesn’t mean we should be scared of the basic premise.
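If you’d rather see the umbrella rule as real code, here it is in Python. Same algorithm, different notation:

    weather = "raining"  # pretend we just looked out the window

    # IF the weather outside EQUALS "raining", THEN pack an umbrella.
    if weather == "raining":
        print("Pack an umbrella.")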

Anyone can do it.

Remember the game “20 Questions”? That game was a sort of algorithm. Here’s my adaptation for detecting bullshit.

Step 1: Open your social media feed and pick out a story. It can be any story.

Step 2: Read the story and answer the following 20 questions.

Step 3: The more questions you answer “yes” to, the higher the likelihood that story is bullshit.

Does the story…

1. …feature a powerless, helpless, or disadvantaged victim?

2. …push a political or identity hot button?

3. …result in the most dramatic outcome possible (death versus injury)?

4. …include irrelevant details (details not directly relevant to the crux of the situation)?

5. …suggest a simplistic next step or action (get rid of X, stop eating Y)?

6. …include a “twist” in the story, a surprise, or a big reveal?

7. …feature “scientism” (little evidence with big conclusions)?

8. …include hard-to-verify evidence (no links to a reputable source, or only links to other non-authoritative sources)?

9. …use anecdotal versus statistical corroborating evidence?

10. …make grammatical or spelling errors, or use clumsy language?

11. …use over-the-top emotional appeals incongruent with the situation?

12. …use scientific jargon (e.g. “dihydrogen monoxide” instead of the more common “water”)?

13. …attempt to be relatable using the experience of people “like you”?

14. …make spurious correlations (seeing patterns of related items that could have other causes)?

15. …dangle dread (chemicals!) without explaining the context of risks?

16. …push for urgent, immediate action?

17. …include charts, graphs, images, or videos that don’t have anything to do with the core features of the story?

18. …hint at a conspiracy, that someone is hiding something (ideally, a “big corporation” or “big government”)?

19. …publish first in a “bullshit attractor” (TED Talk, Facebook, etc.)?

20. …include statistics touting its popularity (e.g. how many people are talking about this)?

Let’s apply our new Bullshit Detection Algorithm to our Swiffer story from earlier. How’d it score? Pretty well, actually! It received a 17 out of 20 by my count. How could we have made it even bullshittier? (Remember, you don’t have to stick with the facts.)

Item 2: Add a detail about the owners of the dog as “Trump supporters.”

Item 18: Hint that the author knew someone who worked at P&G who “had information” about these pet deaths, but would be fired if she said anything.

Item 20: Include the number of “likes” or “shares” in the article, showing its popularity.

Easy, isn’t it?
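And because the Bullshit Detection Algorithm really is just an algorithm, it fits in a few lines of Python. The thresholds below are my own illustrative choices, not a validated model:

    def bullshit_score(answers: list) -> str:
        """answers[i] is True if question i+1 of the 20 got a 'yes'."""
        score = sum(answers)
        likelihood = score / len(answers)
        if likelihood >= 0.6:
            verdict = "almost certainly bullshit"
        elif likelihood >= 0.3:
            verdict = "smells like bullshit"
        else:
            verdict = "might be on the level"
        return f"{score}/{len(answers)}: {verdict}"

    # The Swiffer hoax scored 17 "yes" answers out of 20 by my count:
    print(bullshit_score([True] * 17 + [False] * 3))
    # -> 17/20: almost certainly bullshit

Score your next suspicious story the same way: one yes/no answer per question.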

Where do we go from here?

It’s not realistic to run every story you read through your new Bullshit Detection Algorithm. It’s also not realistic to stop using search engines and social media. They are too ingrained in the fabric of our daily lives. Maybe we should crowdsource a Chrome plugin to help automate the process of Bullshit Detection…to fight fire with fire? Let me know if you’ll throw in 20 bucks.

But at the very least, you can rest easy that you didn’t kill your dog by cleaning your floors with a Swiffer WetJet. And if you’re considering losing weight using a clove and pomegranate enema, you might want to try your new Bullshit Detection Algorithm first.

###


Source notes for this article:

Swiffer WetJet Pet Danger

No bullshit here. You can read the story (and the fact checking) for yourself.

Swiffer WetJet Hardwood Floor Spray Mop Starter Kit

If you’re curious, you can see the ingredients list for yourself. I’ll warn you, though: P&G doesn’t give you a chemistry lesson. You’ll need to find a different (authoritative) site for more information on the ingredients.

Here is a link for more on PROPYLENE GLYCOL n-BUTYL ETHER.

Again, unless you have a background in chemistry, more information can get even more confusing. This is the one “chemically close” to antifreeze, or ETHYLENE GLYCOL…that’s why I picked it. It’s clearly not antifreeze, but it sure sounds like it, doesn’t it? If you look at the CDC entry for this one, however, it’s basically screaming at you to run to the hospital if you ingest too much of it. Lots of chemical names sound the same but are very different. I sort of wish I had paid more attention in chemistry in college…

On Bullshit, Harry Frankfurt, Princeton University

This is what I hoped every academic paper would be like in graduate school, and while some were quite good, and informative, and interesting, nothing was as satisfying to read as Frankfurt’s 20 pages. Thanks, Jeremy Rose, for having us read it in grad school!

Categories
Information Management Rehumanizing Consumerism

Sorry, Mr. Musk. Advertising speech is not free speech.

This is a post for the non-marketers out there, so if you’re in marketing, take a knee on this one. Last week, Elon Musk (of Tesla and SpaceX fame) tweeted that he had plans to take Tesla private and that he had the funding secured to do so. Let’s leave aside the SEC issues for a moment (and whoa boy, there are issues) and concentrate on Twitter as a public platform.

Many people conflate political speech, free speech, libel/slander, and advertising speech. We won’t get into a legal definition of each one, but suffice it to say, just because a certain politician tweets marginally true (or outright false) stuff doesn’t mean *you* can. That is especially true of the CEO of a publicly-traded company. What Mr. Musk says on Twitter is not simply his “personal opinion” and could be interpreted as “advertising speech.”

Advertising speech is one of the only types of speech that *must be true* by law. Most people don’t realize that…until it’s too late. Puffery is generally okay (Tesla has the best cars and you should buy them!), but falsehoods are not (Tesla has completely autonomous driving options). The FTC takes a dim view of false advertising in general, and specifically as it relates to certain categories of products.

It’s fascinating stuff. Elon Musk, you might want to read it.

#marketing #freespeech #advertising

Categories
Agile Learning Audience Engagement Information Management Rehumanizing Consumerism

Never underestimate the allure of free beer in your crisis management strategy.

It seems that SeaWorld is on its way back. In the aftermath of the “Blackfish” documentary in 2013, park attendance dropped between 20 and 25 percent. Initially reluctant to admit anything was wrong (seem familiar?), SeaWorld management did a 180 (seem unfamiliar?). It announced an end to its captive Orca breeding program. It began to wind down its animal shows. It redirected its resources to conservation in partnership with the Humane Society of the United States. Even the staunchly anti-SeaWorld PETA had to grudgingly admit that it was a step in the right direction.

But despite all of those actions, SeaWorld languished. It’s easier to cancel a vacation than to book one, and there are plenty of options in Orlando (and elsewhere). SeaWorld management then quietly began reinvesting in the “product” offering – the park itself. It repriced admission to offer more value for the money than ever-increasing Disney tickets. It redesigned its promotional campaigns. It established new channel partnerships.

Oh. And they started giving away free beer this summer.

It may seem like a gimmick, but when you were founded by Anheuser-Busch, it makes sense. When you’re planning a vacation with kids, every little bit counts. Giving parents free beer is enough to tip the scales for many people – to the tune of a 5 percent increase in attendance this year. At that pace, SeaWorld will fully recover the attendance it lost in three more years.

Hmm. The formula becomes clear: Crisis occurs. Admit you were wrong. Make the necessary business changes. Then make the necessary marketing changes. Add alcohol. Win.

#marketing #blackfish #seaworld

Read a more detailed summary of the financial picture on MarketWatch.
