Entries Tagged "social media"


Hacking Instagram to Get Free Meals in Exchange for Positive Reviews

This is a fascinating hack:

In today’s digital age, a large Instagram audience is considered a valuable currency. I had also heard through the grapevine that I could monetize a large following—or in my desired case—use it to have my meals paid for. So I did just that.

I created an Instagram page that showcased pictures of New York City’s skylines, iconic spots, elegant skyscrapers—you name it. The page has amassed a following of over 25,000 users in the NYC area and it’s still rapidly growing.

I reach out to restaurants in the area either via Instagram’s direct messaging or email and offer to post a positive review in return for a free entree or at least a discount. Almost every restaurant I’ve messaged came back at me with a compensated meal or a gift card. Most places have an allocated marketing budget for these types of things so they were happy to offer me a free dining experience in exchange for a promotion. I’ve ended up giving some of these meals away to my friends and family because at times I had too many queued up to use myself.

The beauty of this all is that I automated the whole thing. And I mean 100% of it. I wrote code that finds these pictures or videos, makes a caption, adds hashtags, credits where the picture or video comes from, weeds out bad or spammy posts, posts them, follows and unfollows users, likes pictures, monitors my inbox, and most importantly—both direct messages and emails restaurants about a potential promotion. Since its inception, I haven’t even really logged into the account. I spend zero time on it. It’s essentially a robot that operates like a human, but the average viewer can’t tell the difference. And as the programmer, I get to sit back and admire its (and my) work.

So much going on in this project.
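The author doesn’t share his code, but the workflow he describes maps onto a simple scheduled pipeline: collect content, filter it, caption and credit it, post it, and send out promotion pitches. Below is a minimal, hypothetical Python sketch of that shape; every function, name, and value in it is an invented placeholder rather than anything from the actual project.

```python
# A minimal, hypothetical sketch of the automation described above. Every helper
# here is a placeholder stub (no real Instagram or email API is called); the
# point is only the shape of the pipeline: collect, filter, caption, post, and
# pitch restaurants, all without a human in the loop.

def find_candidate_posts():
    """Placeholder: collect NYC skyline photos/videos along with their source."""
    return [{"image": "skyline.jpg", "source": "@some_photographer"}]

def is_spammy(post):
    """Placeholder: weed out low-quality or spammy content."""
    return False

def build_caption(post):
    """Compose a caption that credits the source and appends hashtags."""
    hashtags = "#nyc #newyork #skyline #manhattan"
    return f"Photo by {post['source']} {hashtags}"

def publish(post, caption):
    """Placeholder: upload the post through whatever automation library is used."""
    print(f"Posting {post['image']}: {caption}")

def pitch_restaurants(restaurants):
    """Placeholder: DM or email each restaurant offering a review for a comped meal."""
    for name in restaurants:
        print(f"Messaging {name}: positive review in exchange for a free entree?")

def run_once():
    """One pass of the bot: post curated content, then send promotion pitches."""
    for post in find_candidate_posts():
        if not is_spammy(post):
            publish(post, build_caption(post))
    pitch_restaurants(["Restaurant A", "Restaurant B"])

if __name__ == "__main__":
    run_once()  # in practice this would run on a schedule, alongside follow/like/inbox tasks
```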

Posted on April 2, 2019 at 6:16 AM

Attacking Soldiers on Social Media

A research group at NATO’s Strategic Communications Center of Excellence catfished soldiers involved in a European military exercise—we don’t know what country they were from—to demonstrate the power of the attack technique.

Over four weeks, the researchers developed fake pages and closed groups on Facebook that looked like they were associated with the military exercise, as well as profiles impersonating service members both real and imagined.

To recruit soldiers to the pages, they used targeted Facebook advertising. Those pages then promoted the closed groups the researchers had created. Inside the groups, the researchers used their phony accounts to ask the real service members questions about their battalions and their work. They also used these accounts to “friend” service members. According to the report, Facebook’s Suggested Friends feature proved helpful in surfacing additional targets.

The researchers also tracked down service members’ Instagram and Twitter accounts and searched for other information available online, some of which a bad actor might be able to exploit. “We managed to find quite a lot of data on individual people, which would include sensitive information,” Biteniece says. “Like a serviceman having a wife and also being on dating apps.”

By the end of the exercise, the researchers identified 150 soldiers, found the locations of several battalions, tracked troop movements, and compelled service members to engage in “undesirable behavior,” including leaving their positions against orders.

“Every person has a button. For somebody there’s a financial issue, for somebody it’s a very appealing date, for somebody it’s a family thing,” Sarts says. “It’s varied, but everybody has a button. The point is, what’s openly available online is sufficient to know what that is.”

This is the future of warfare. It’s one of the reasons China stole all of that data from the Office of Personnel Management. If indeed a country’s intelligence service was behind the Equifax attack, this is why they did it.

Go back and read this scenario from the Center for Strategic and International Studies. Why wouldn’t a country intent on starting a war do it that way?

Posted on February 26, 2019 at 6:10 AM

Alex Stamos on Content Moderation and Security

Former Facebook CISO Alex Stamos argues that increasing political pressure on social media platforms to moderate content will give them a pretext to turn all end-to-end crypto off—which would be more profitable for them and bad for society.

If we ask tech companies to fix ancient societal ills that are now reflected online with moderation, then we will end up with huge, democratically-unaccountable organizations controlling our lives in ways we never intended. And those ills will still exist below the surface.

Posted on January 15, 2019 at 5:55 AM

How Surveillance Inhibits Freedom of Expression

In my book Data and Goliath, I write about the value of privacy. I talk about how it is essential for political liberty and justice, and for commercial fairness and equality. I talk about how it increases personal freedom and individual autonomy, and how the lack of it makes us all less secure. But this is probably the most important argument as to why society as a whole must protect privacy: it allows society to progress.

We know that surveillance has a chilling effect on freedom. People change their behavior when they live their lives under surveillance. They are less likely to speak freely and act individually. They self-censor. They become conformist. This is obviously true for government surveillance, but is true for corporate surveillance as well. We simply aren’t as willing to be our individual selves when others are watching.

Let’s take an example: hearing that parents and children are being separated as they cross the US border, you want to learn more. You visit the website of an international immigrants’ rights group, a fact that is available to the government through mass Internet surveillance. You sign up for the group’s mailing list, another fact that is potentially available to the government. The group then calls or e-mails to invite you to a local meeting. Same. Your license plates can be collected as you drive to the meeting; your face can be scanned and identified as you walk into and out of the meeting. If, instead of visiting the website, you visit the group’s Facebook page, Facebook knows that you did and that feeds into its profile of you, available to advertisers and political activists alike. Ditto if you like their page, share a link with your friends, or just post about the issue.

Maybe you are an immigrant yourself, documented or not. Or maybe some of your family is. Or maybe you have friends or coworkers who are. How likely are you to get involved if you know that your interest and concern can be gathered and used by government and corporate actors? What if the issue you are interested in is pro- or anti-gun control, anti-police violence or in support of the police? Does that make a difference?

Maybe the issue doesn’t matter, and you would never be afraid to be identified and tracked based on your political or social interests. But even if you are so fearless, you probably know someone who has more to lose, and thus more to fear, from their personal, sexual, or political beliefs being exposed.

This isn’t just hypothetical. In the months and years after the 9/11 terrorist attacks, many of us censored what we spoke about on social media or what we searched on the Internet. We know from a 2013 PEN study that writers in the United States self-censored their browsing habits out of fear the government was watching. And this isn’t exclusively an American event; Internet self-censorship is prevalent across the globe, China being a prime example.

Ultimately, this fear stagnates society in two ways. The first is that the presence of surveillance means society cannot experiment with new things without fear of reprisal, and that means those experiments—if found to be inoffensive or even essential to society—cannot slowly become commonplace, moral, and then legal. If surveillance nips that process in the bud, change never happens. All social progress—from ending slavery to fighting for women’s rights—began as ideas that were, quite literally, dangerous to assert. Yet without the ability to safely develop, discuss, and eventually act on those assertions, our society would not have been able to further its democratic values in the way that it has.

Consider the decades-long fight for gay rights around the world. Within our lifetimes we have made enormous strides to combat homophobia and increase acceptance of queer folks’ right to marry. Queer relationships slowly progressed from being viewed as immoral and illegal, to being viewed as somewhat moral and tolerated, to finally being accepted as moral and legal.

In the end, it was the public nature of those activities that eventually slayed the bigoted beast, but the ability to act in private was essential in the beginning for the early experimentation, community building, and organizing.

Marijuana legalization is going through the same process: it’s currently sitting between somewhat moral, and—depending on the state or country in question—tolerated and legal. But, again, for this to have happened, someone decades ago had to try pot and realize that it wasn’t really harmful, either to themselves or to those around them. Then it had to become a counterculture, and finally a social and political movement. If pervasive surveillance meant that those early pot smokers would have been arrested for doing something illegal, the movement would have been squashed before inception. Of course the story is more complicated than that, but the ability for members of society to privately smoke weed was essential for putting it on the path to legalization.

We don’t yet know which subversive ideas and illegal acts of today will become political causes and positive social change tomorrow, but they’re around. And they require privacy to germinate. Take away that privacy, and we’ll have a much harder time breaking down our inherited moral assumptions.

The second way surveillance hurts our democratic values is that it encourages society to make more things illegal. Consider the things you do—the different things each of us does—that portions of society find immoral. Not just recreational drugs and gay sex, but gambling, dancing, public displays of affection. All of us do things that are deemed immoral by some groups, but are not illegal because they don’t harm anyone. But it’s important that these things can be done out of the disapproving gaze of those who would otherwise rally against such practices.

If there is no privacy, there will be pressure to change. Some people will recognize that their morality isn’t necessarily the morality of everyone—and that that’s okay. But others will start demanding legislative change, or using less legal and more violent means, to force others to match their idea of morality.

It’s easy to imagine the more conservative (in the small-c sense, not in the sense of the named political party) among us getting enough power to make illegal what they would otherwise be forced to witness. In this way, privacy helps protect the rights of the minority from the tyranny of the majority.

This is how we got Prohibition in the 1920s, and if we had had today’s surveillance capabilities in the 1920s, it would have been far more effectively enforced. Recipes for making your own spirits would have been much harder to distribute. Speakeasies would have been impossible to keep secret. The criminal trade in illegal alcohol would also have been more effectively suppressed. There would have been less discussion about the harms of Prohibition, less “what if we didn’t?” thinking. Political organizing might have been difficult. In that world, the law might have stuck to this day.

China serves as a cautionary tale. The country has long been a world leader in the ubiquitous surveillance of its citizens, with the goal not of crime prevention but of social control. They are about to further enhance their system, giving every citizen a “social credit” rating. The details are yet unclear, but the general concept is that people will be rated based on their activities, both online and off. Their political comments, their friends and associates, and everything else will be assessed and scored. Those who are conforming, obedient, and apolitical will be given high scores. People without those scores will be denied privileges like access to certain schools and foreign travel. If the program is half as far-reaching as early reports indicate, the subsequent pressure to conform will be enormous. This social surveillance system is precisely the sort of surveillance designed to maintain the status quo.

For social norms to change, people need to deviate from these inherited norms. People need the space to try alternate ways of living without risking arrest or social ostracization. People need to be able to read critiques of those norms without anyone’s knowledge, discuss them without their opinions being recorded, and write about their experiences without their names attached to their words. People need to be able to do things that others find distasteful, or even immoral. The minority needs protection from the tyranny of the majority.

Privacy makes all of this possible. Privacy encourages social progress by giving the few room to experiment free from the watchful eye of the many. Even if you are not personally chilled by ubiquitous surveillance, the society you live in is, and the personal costs are unequivocal.

This essay originally appeared in McSweeney’s issue #54: “The End of Trust.” It was reprinted on Wired.com.

Posted on November 26, 2018 at 6:54 AM

Facebook Is Using Your Two-Factor Authentication Phone Number to Target Advertising

From Kashmir Hill:

Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn’t hand over at all, but that was collected from other people’s contact books, a hidden layer of details Facebook has about you that I’ve come to call “shadow contact information.” I managed to place an ad in front of Alan Mislove by targeting his shadow profile. This means that the junk email address that you hand over for discounts or for shady online shopping is likely associated with your account and being used to target you with ads.

Here’s the research paper. Hill again:

They found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks. So users who want their accounts to be more secure are forced to make a privacy trade-off and allow advertisers to more easily find them on the social network.
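Here is why a number handed over for two-factor authentication is usable this way: advertiser contact-list targeting generally works by normalizing an identifier and hashing it (typically with SHA-256) before upload, and the platform matches those hashes against whatever contact data it already holds. The sketch below illustrates only that matching step; it is an assumption-level illustration, not code or data from the paper.

```python
# Illustrative sketch of contact-list ("custom audience") matching in general:
# the advertiser normalizes and hashes an identifier before upload, and the
# platform can match it against any phone number it has on file, including one
# supplied only for two-factor authentication. Not taken from the paper.

import hashlib

def normalize_phone(phone: str, default_country_code: str = "1") -> str:
    """Strip formatting and prepend a country code so both sides hash the same string."""
    digits = "".join(ch for ch in phone if ch.isdigit())
    if len(digits) == 10:  # assume a US number given without its country code
        digits = default_country_code + digits
    return digits

def hash_identifier(value: str) -> str:
    """SHA-256 of the normalized identifier, the usual form for contact-list uploads."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

if __name__ == "__main__":
    # However the user formatted the number, the normalized hash is identical,
    # which is what lets an uploaded advertiser list match the account's 2FA number.
    for raw in ["(212) 555-0147", "212-555-0147", "+1 212 555 0147"]:
        print(hash_identifier(normalize_phone(raw)))
```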

Posted on October 2, 2018 at 5:53 AM

Manipulative Social Media Practices

The Norwegian Consumer Council just published an excellent report on the deceptive practices tech companies use to trick people into giving up their privacy.

From the executive summary:

Facebook and Google have privacy intrusive defaults, where users who want the privacy friendly option have to go through a significantly longer process. They even obscure some of these settings so that the user cannot know that the more privacy intrusive option was preselected.

The popups from Facebook, Google and Windows 10 have design, symbols and wording that nudge users away from the privacy friendly choices. Choices are worded to compel users to make certain choices, while key information is omitted or downplayed. None of them lets the user freely postpone decisions. Also, Facebook and Google threaten users with loss of functionality or deletion of the user account if the user does not choose the privacy intrusive option.

[…]

The combination of privacy intrusive defaults and the use of dark patterns, nudge users of Facebook and Google, and to a lesser degree Windows 10, toward the least privacy friendly options to a degree that we consider unethical. We question whether this is in accordance with the principles of data protection by default and data protection by design, and if consent given under these circumstances can be said to be explicit, informed and freely given.

I am a big fan of the Norwegian Consumer Council. They’ve published some excellent research.

Posted on June 28, 2018 at 6:29 AM

Intimate Partner Threat

Princeton’s Karen Levy has a good article on computer security and the intimate partner threat:

When you learn that your privacy has been compromised, the common advice is to prevent additional access—delete your insecure account, open a new one, change your password. This advice is such standard protocol for personal security that it’s almost a no-brainer. But in abusive romantic relationships, disconnection can be extremely fraught. For one, it can put the victim at risk of physical harm: If abusers expect digital access and that access is suddenly closed off, it can lead them to become more violent or intrusive in other ways. It may seem cathartic to delete abusive material, like alarming text messages—but if you don’t preserve that kind of evidence, it can make prosecution more difficult. And closing some kinds of accounts, like social networks, to hide from a determined abuser can cut off social support that survivors desperately need. In some cases, maintaining a digital connection to the abuser may even be legally required (for instance, if the abuser and survivor share joint custody of children).

Threats from intimate partners also change the nature of what it means to be authenticated online. In most contexts, access credentials—like passwords and security questions—are intended to insulate your accounts against access from an adversary. But those mechanisms are often completely ineffective for security in intimate contexts: The abuser can compel disclosure of your password through threats of violence and has access to your devices because you’re in the same physical space. In many cases, the abuser might even own your phone—or might have access to your communications data because you share a family plan. Things like security questions are unlikely to be effective tools for protecting your security, because the abuser knows or can guess at intimate details about your life—where you were born, what your first job was, the name of your pet.

Posted on March 5, 2018 at 11:13 AM

Facebook Will Verify the Physical Location of Ad Buyers with Paper Postcards

It’s not a great solution, but it’s something:

The process of using postcards containing a specific code will be required for advertising that mentions a specific candidate running for a federal office, Katie Harbath, Facebook’s global director of policy programs, said. The requirement will not apply to issue-based political ads, she said.

“If you run an ad mentioning a candidate, we are going to mail you a postcard and you will have to use that code to prove you are in the United States,” Harbath said at a weekend conference of the National Association of Secretaries of State, where executives from Twitter Inc and Alphabet Inc’s Google also spoke.

“It won’t solve everything,” Harbath said in a brief interview with Reuters following her remarks.

But sending codes through old-fashioned mail was the most effective method the tech company could come up with to prevent Russians and other bad actors from purchasing ads while posing as someone else, Harbath said.

It does mean a delay of several days between purchasing an ad and seeing it run.
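Reuters doesn’t describe how the codes work internally, but the pattern is a familiar out-of-band verification flow: generate a one-time code tied to the advertiser account, deliver it through a channel that proves a US mailing address, and only unlock candidate ads once the code is entered. A hypothetical sketch, with invented names throughout:

```python
# Hypothetical sketch of the general postcard-verification pattern: a one-time
# code is bound to the advertiser, mailed to a US address, and candidate ads are
# gated on entering it. This is not Facebook's actual system.

import secrets

class PostcardVerifier:
    def __init__(self):
        self._pending = {}      # advertiser_id -> outstanding one-time code
        self._verified = set()  # advertiser_ids that have proven a US address

    def request_code(self, advertiser_id: str, us_mailing_address: str) -> str:
        """Generate a code and hand it off to the (offline) postal fulfillment step."""
        code = secrets.token_hex(4).upper()  # e.g. 'A3F91C0B'
        self._pending[advertiser_id] = code
        print(f"Mail postcard with code {code} to: {us_mailing_address}")
        return code

    def submit_code(self, advertiser_id: str, code: str) -> bool:
        """The advertiser types in the code from the postcard; verify it matches."""
        if self._pending.get(advertiser_id) == code:
            self._verified.add(advertiser_id)
            del self._pending[advertiser_id]
            return True
        return False

    def may_run_candidate_ads(self, advertiser_id: str) -> bool:
        return advertiser_id in self._verified

if __name__ == "__main__":
    verifier = PostcardVerifier()
    code = verifier.request_code("acct-42", "123 Main St, Springfield, US")
    # ...several days later, once the postcard arrives...
    print(verifier.submit_code("acct-42", code))       # True
    print(verifier.may_run_candidate_ads("acct-42"))   # True
```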

Posted on February 20, 2018 at 6:34 AM

Department of Homeland Security to Collect Social Media of Immigrants and Citizens

New rules give the DHS permission to collect “social media handles, aliases, associated identifiable information, and search results” as part of people’s immigration files. The Federal Register has the details, which seem to also include US citizens who communicate with immigrants.

This is part of the general trend to scrutinize people coming into the US more, but it’s hard to get too worked up about the DHS accessing publicly available information. More disturbing is the trend of occasionally asking for social media passwords at the border.

Posted on September 28, 2017 at 7:43 AM
