Hacking Happiness

Author: John Havens

As people get used to seeing their conversations visualized in this way, the meeting mediator could become a powerful tool to combat the well-documented tendency for meetings to be dominated by certain personality types.

Accountability-based influence will look very different in the coming years as these types of methodologies are adopted. But privacy concerns aside, let’s focus on the positive for a moment. Think about the meeting mediator being adopted where you work. How many conversations have you not contributed to because a domineering colleague wouldn’t stop talking? Or if you’re like me (passionate and verbose), how would your work benefit from letting others contribute more to a meeting? If a visual aid encouraged quieter colleagues to speak, they’d feel more appreciated and be more likely to find meaning in their jobs. Taken to scale, employees with this type of tool may feel more loyal toward an organization. This could result in lower attrition rates, providing a measurable return on investment (ROI) for an organization.

Similarly, I wonder how a device like Neumitra’s bandu stress bracelet could be used in the workplace. Imagine if a particularly nasty manager’s team of employees all wore the bandu for three months. Combined with proximity and voice-analysis sensors, the data could show when and how the manager interacted with his employees. Voice sampling might indicate that a majority of meetings involved raised or angry tones. Stress sensors might indicate that a majority of employees had stress levels directly correlated with those raised tones.

It’s not rocket science: the data simply visualizes the manager’s negative management style. More important, the quantified data provides a record that the manager is adversely affecting the health of his employees. In the near future, measures of ROI and quarterly reporting will need to take into account whether a manager is helping or harming the health of their employees. Ongoing high stress levels could increase a company’s health insurance rates. In a world with sensors, manager accountability gets quantified.

Let’s take the opposite situation. A manager who gives empowering feedback to their team uses the same technology, perhaps coupled with a Cardiio heart monitor for additional data. In direct contrast to the other example, insights from time-stamped data could show that an employee who exhibited high stress levels had their anxiety lowered by meeting with their manager. If employees measured their resting heart rates before and after the meeting, the data might show lower numbers correlating with better health. In this scenario, our positive manager could be rewarded for improving morale, improving employee health, and saving the company money on its premiums.

I learned about the mobile technology agency Citizen after reading about them in the Wired article “What if Your Boss Tracked Your Sleep, Diet, and Exercise?”7 The company has begun utilizing various sensor tools to measure employee health as a way to improve productivity. I interviewed Quinn Simpson, user experience director at Citizen, to discuss how he maintains a balance between privacy and innovation in his work. The team testing the implementation of sensors with Simpson is voluntarily providing data about their health, recognizing that it will take time for some colleagues to feel comfortable sharing various data. The company benefits from a strong corporate culture and a young demographic that is comfortable utilizing social media.

I asked Simpson about the idea of using sensors with managers who might be overly negative (this is not a technology Citizen currently provides, but it might in the future). He saw potential value in measuring performance via multiple data points as an indicator of employee burnout. Pushing staff too hard on an extended basis, especially in a creative setting like Citizen, could lead to high turnover and lost productivity. As Simpson noted:

Because we keep track of what projects people are working on, we want to be able to look at data regarding their output and see when they risk burning out. Correlating relevant information points like this means you know when someone is not being as productive as they could be. So if I’m working too long on a project, both my manager and I want to know.8

What’s encouraging about this example is how sensors and data promote unity among the staff. If a manager can quantify when a star creative is heading toward burnout, they can ask the employee to go home early or head to the gym. They will have data supporting actions that better sustain their organization for the long term. Likewise, if identity or reputation models exist in their organization, they may earn more trust from employees for making the smarter choice of long-term gain over short-term profits.

The New Reputation Economy

While I’ve focused on measuring accountability at work, it’s easy to see how the technology explained in my earlier examples can live outside the enterprise. In a world with finite resources, we may soon enter a time when we’ll see “sensible governments” utilizing technology that measures citizen behavior in an effort to improve their lives. While it’s easy to consider this to be a Big Brother situation, where we’ll be spied on at all times, let me describe a more supportive scenario that could come to pass in our Connected World.

There’s a one-hundred-million-ton collection of plastic particles eddying about in the ocean known as the Great Pacific Garbage Patch.9 While the particles are very small and the ocean has considerable powers of self-restoration, we’d still be well advised to increase our focus on recycling. In the same way that Work.com lets employees rate their peers, what would happen if citizens began evaluating one another based on their recycling efforts? Or what if the sensor environment around citizens could contribute to a person’s accountability rating as well?

Let’s say you buy a bottle of water at the convenience store. Mobile payment technology charges your debit card but also reads from the bar code that the bottle is made of plastic and should be recycled. A time and date stamp with that information is sent to your town’s local recycling facility. Let’s say the facility has gathered enough data to know that your town’s average length of time between buying a bottle of water and consuming it is one week. After seven days, if you haven’t recycled your bottle, you might get a text reminding you to do so. If you’re storing the water, you could indicate that in your response.
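As a thought experiment, the reminder rule above could be as simple as the following sketch. The seven-day window, the `needs_reminder` name, and the “storing” flag are all assumptions drawn from the scenario, not any real facility’s system:

```python
from datetime import datetime, timedelta

# Assumed from the scenario: the facility's study found the town's average
# time between buying a bottle of water and consuming it is one week.
AVERAGE_CONSUMPTION_TIME = timedelta(days=7)

def needs_reminder(purchased_at, recycled_at=None, *, storing=False, now=None):
    """Return True if a recycling-reminder text should be sent."""
    now = now or datetime.now()
    if storing or recycled_at is not None:
        return False  # bottle is accounted for; no nudge needed
    return now - purchased_at >= AVERAGE_CONSUMPTION_TIME

bought = datetime(2014, 5, 1, 9, 0)
print(needs_reminder(bought, now=datetime(2014, 5, 9)))  # True: past the week
print(needs_reminder(bought, now=datetime(2014, 5, 3)))  # False: still within it
```

The point of the sketch is how little logic is required: the hard part of such a system is the sensor and payment plumbing, not the reminder rule itself.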

Citizens who recycled on a regular basis might receive a tax break at the end of the year because their efforts meant the town would receive money paid for bottles returned in bulk. But if you opted to chug your water and chuck the bottle in the parking lot, you might get a small fine for not recycling properly. If you left the bottle in a wildlife preserve as indicated via your GPS, you might get a larger fine. If your “recycling reputation” dropped low enough, retailers might be banned from selling you certain products.

It’s not fun thinking negatively. But it’s also not realistic to think our actions in the Connected World won’t have negative consequences depending on the context. Where a community agrees on what to measure and how privacy will be respected in data collection, positive results will outweigh negative ones.

This notion of communal benefit is reflected in a focus on sharing with the rising trend of collaborative consumption, a term introduced in 1978. Rather than drive individual consumerism, collaborative consumption encourages distribution of goods, skills, or money in peer-to-peer networks to preserve natural resources and lower individuals’ costs for items they can share. Companies like Airbnb encourage home swapping, and Zilok enables rentals between individuals for everything from power tools to game consoles. Sharing and rating provide a robust platform for accountability and reputation models to emerge, buoyed by the advent of the Web.

As the Economist notes in “The Rise of the Sharing Economy”: “Before the Internet, renting a surfboard, a power tool, or a parking space from someone else was feasible, but was usually more trouble than it was worth. Now websites such as Airbnb, RelayRides, and SnapGoods match up owners and renters; smartphones with GPS let people see where the nearest rentable car is parked; social networks provide a way to check up on people and build trust; and online payment systems handle the billing.”10

If personal data can remain protected in these systems of open innovation, the sharing economy is a powerful move toward fulfillment in the Connected World and a positive example of how accountability-based influence can foster community versus self-focused gain.

3

PERSONAL IDENTITY MANAGEMENT

I’m excited about where technology will take us. My biggest goal is to make sure that our privacy laws keep up with our technology. I want to make sure that all of the benefits that we see from new technology don’t come at the expense of our privacy and personal freedom.1

SENATOR AL FRANKEN

Senator Al Franken is chairman of the Senate Subcommittee on Privacy, Technology, and the Law, a bipartisan part of the larger Senate Judiciary Committee. It’s a complex job to encourage the growth of technology while honoring the nuances of consumer privacy. From the technology side, it’s easy to dwell on how privacy advocates may hinder innovation and growth. From the privacy side, a loss of trust from previous violations, combined with a lack of understanding about technology, slows adoption.

Both sides have merit and need to be heard. But the issues need some context:

  • People’s right to privacy is different from a person’s preference about privacy.
  • Just because a certain technology can be built doesn’t mean it should be.

Let’s unpack these ideas a bit.

Privacy is tough to both define and measure. Depending on the context, an activity that’s fine for one person may not be condoned for another. For instance, most adults don’t have a problem with the idea that websites collecting information from children under the age of thirteen should comply with the Federal Trade Commission’s Children’s Online Privacy Protection Act (COPPA), which requires verifiable parental consent before PII (personally identifiable information) can be collected about children. Gathering this PII from younger kids outside the parameters of parental consent is typically seen as creepy or worse. COPPA also addresses how cookies and other tracking mechanisms should or shouldn’t be used to collect behavioral data on kids.

But manipulating online systems of age recognition can be easier than you think, especially when parents help kids under the age of thirteen get onto sites. As the Huffington Post reported in its article “Under 13 Year Olds on Facebook: Why Do 5 Million Kids Log In if Facebook Doesn’t Want Them To?,” a Consumer Reports study conducted in June 2012 revealed that “an estimated 5.6 million Facebook clients—about 3.5 percent of its U.S. users—are children who the company says are banned from the site.”2 Surprisingly, many of the kids creating accounts are also getting help from their parents, according to the study.

Here’s where things get tricky: If a site can’t collect PII data about a user, it’s very hard to identify their age. And Facebook does regularly eliminate the younger users it identifies. The article also notes, however, that Facebook could lose upwards of 3.5 percent of its U.S. market if it were more vigilant about keeping kids off the site.

In light of this article, I’d like to restate my second issue from above with a little tweak:

  • Just because a certain technology hasn’t been built doesn’t mean it shouldn’t be.

Facebook has bigger priorities than creating technology that can accurately identify whether a person is genuinely under the age of thirteen. That’s not in question: if 5.6 million users might be under the age of thirteen and Facebook isn’t actively creating a technology to ban them as mandated by the Federal Trade Commission, then by definition its priorities are clear. The fact that it stands to lose 3.5 percent of its U.S. market if kids were bumped from the site also speaks to those priorities.

Parents helping underage kids to game the system are acting on their personal preferences. The fact remains, however, that parents are breaking the spirit of COPPA when they help kids under thirteen get on Facebook and that the company stands to benefit when these kids join the site. And now those kids will start getting tracked earlier, with their data being utilized or sold in ways they don’t realize.

The fact that Facebook hasn’t built a technology to accurately identify if someone is under thirteen doesn’t mean it can’t. And since the company has created the largest pool of photographic data in the world identifiable by facial recognition technology, I think it could block kids more effectively if it wanted to. Its facial recognition feature launched as opt-out only (rather than having users take the extra step to opt in), implying the company doesn’t want users to opt out because that would interfere with its ability to monetize.

It’s this lack of clarity around privacy that is fostering distrust from users and helping to create the personal identity management industry.

The Context of Data

Data is like your health. You don’t really appreciate the way that data is being handled until something bad happens to you.3
