Our opinion: Something for nothing?


The wires that carry the world wide web were all abuzz last week over the news that Facebook had manipulated news feeds as part of an academic study into users' emotions. Along with all of the sound and fury, the Electronic Privacy Information Center filed formal legal documents with the Federal Trade Commission, alleging that Facebook engaged in deceptive trade practices and violated a 2012 Consent Order it entered into with the Federal Trade Commission.

The complaint asks the FTC to begin an investigation, claiming that Facebook deceived its users by failing to inform them their data would be shared with researchers and that they were being subjected to behavioral testing.

"The data-scientists were trying to collect evidence to prove their thesis that people's moods could spread like an 'emotional contagion' depending on the type of the content that they were reading," notes Jennifer Newton, for The Daily Mail. "The study concluded that people were more likely to post negative updates about their lives after the volume of positive information appearing in their Facebook feeds had been purposefully reduced by the researchers. The opposite reaction occurred when the number of negative posts appeared in people's news feeds."

During a conference in India, Facebook COO Sheryl Sandberg kind of sort of apologized for altering the news feeds of up to 700,000 of its users.

"This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated. And for that communication we apologize. We never meant to upset you."

"Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone," posted Adam D.I. Kramer, one of the study's authors and a Facebook employee, on, well, where else but Facebook? "In hindsight, the research benefits of the paper may not have justified all of this anxiety."

Newton contends it might be hard to prove Facebook broke the law, but what it did was definitely unethical.

"None of the participants in the Facebook experiments were explicitly asked for their permission, though the social network's terms of use appears to allow for the company to manipulate what appears in users' news feeds however it sees fits. Facebook's data-use policy says the California- based company can deploy user information for 'internal operations, including troubleshooting, data analysis, testing, research and service improvement.'" But Kashmir Hill, writing for Forbes, notes the word "research" wasn't inserted into Facebook's user agreement until May 2012, five months after the one-week long manipulation of the news feeds was conducted. It will be up to the FTC to determine whether Facebook did, in fact, break the law.

"A stronger reason is that even when Facebook manipulates our News Feeds to sell us things, it is supposed - legally and ethically - to meet certain minimal standards," James Grimmelmann, a law professor at the University of Maryland, told The Atlantic. "Anything on Facebook that is actually an ad is labeled as such (even if not always clearly). This study failed even that test, and for a particularly unappealing research goal: We wanted to see if we could make you feel bad without you noticing. We succeeded."

Needless to say (but we will anyway), any social network or website that offers "free" services has an ulterior motive. The Internet is not a wide-open buffet, free to indulge our appetites. Those upset over having their content manipulated are living in a dream world. This has been going on for years and years. Why do you think giant search engines such as Google, Bing and Yahoo offer "free" email?

Facebook is valued at anywhere between $40 billion and $100 billion. It didn't reach that valuation by giving everyone a free page to post baby or cat pictures or to share conspiracy theories. It did it by using "suggested" posts, which are nothing more than advertisements, and out-and-out ads such as those for Dollar Shave Club and Zillow. Monetizing its services means Facebook needs to determine whether its software adequately interprets users' interests and emotions, and that's part of the reason it conducted the study.

Joe Deville, of the University of London, notes that what Facebook did is not really new.

"Corporations have long sought to understand and exploit consumers' emotions, and it's not just advertisers," he writes for The Conversation. Deville studies debt collectors and payday lenders. "Many of us, then, are unwittingly part of experiments, in which our reactions to a range of emotional stimuli are being tested. But so too were previous generations of consumers. And like us, they often didn't know about it. Given that the Facebook furor has likely highlighted to companies the dangers of making such experiments public, it is probable that future generations will know even less."

Maria Konnikova, writing for The New Yorker, is less concerned about ethics and more concerned about the way Facebook went about conducting its test. She notes that understanding emotion is far more complex than looking for certain words, especially if those words are taken completely out of context, as they were in this study.

"The same positive or negative word can carry a range of meanings, many of which are divorced from actually experiencing an emotion. Writing 'I'm sorry you're sad,' for instance, doesn't actually mean that you are sad yourself - only that you have become aware of someone else's emotion and are acknowledging it. One way to view the increase in positive posts when people see positive content isn't that they've become happier. Instead, it's that they're trying to one-up the positive posts themselves - or at least show that they're on the same page. And when positive posts decrease, people might not actually feel less happy - they might just feel less of a need to selfaggrandize in order to keep up with their friends."

But Elliot Berkman, of the University of Oregon, told Konnikova that Facebook and other social media sites could use properly designed studies to mine data on mental illness or depression.

"While the cost of a false positive - telling someone that she is at greater risk of depression when she's not - is high, the cost of missing the signs of mental illness is potentially even bigger."

We can't muster much consternation over Facebook's manipulation of its users' news feeds, because, simply put, we never expected Facebook and its administrators to have any sort of integrity or scruples, and surely no allegiance to anything other than its bottom line. But the thought that social media could be used to ferret out mental illness is both unsettling and reassuring at the same time. Could some highly developed algorithm help prevent a suicide? Or maybe even stop a gun-toting maniac from shooting up a school? That would be a welcome innovation, but at the same time, it's disconcerting to think that every word we post on Facebook and Twitter or any message sent via Gmail is being analyzed by a computer somewhere in the bowels of corporate America.

But, as long as we insist on utilizing "free" services on the Internet, we will always be subject to such intrusiveness. To think otherwise is self-deception.
