Entries Tagged "Facebook"


The Effects of Social Media on Undercover Policing

Social networking sites make it very difficult, if not impossible, to have undercover police officers:

“The results found that 90 per cent of female officers were using social media compared with 81 per cent of males.”

The most popular site was Facebook, followed by Twitter. Forty-seven per cent of those surveyed used social networking sites daily, while another 24 per cent used them weekly. All respondents aged 26 years or younger had uploaded photos of themselves onto the internet.

“The thinking we had with this result means that the 16-year-olds of today who might become officers in the future have already been exposed.

“It’s too late [for them to take it down] because once it’s uploaded, it’s there forever.”

There’s another side to this issue as well. Social networking sites can help undercover officers with their backstories by building a fictional history. Some of this might require help from the company that owns the social networking site, but that seems like a reasonable request for the police to make.

I am in the middle of reading Diego Gambetta’s book Codes of the Underworld: How Criminals Communicate. He talks about the lengthy process organized crime uses to vet new members—often relying on people who have known the person since birth, or people who served time with him in jail—to protect against police informants. I agree that social networking sites can make undercover work even harder, but it had gotten pretty hard even without them.

Posted on August 31, 2011 at 6:21 AM

Identifying People by their Writing Style

The article is in the context of the big Facebook lawsuit, but the part about identifying people by their writing style is interesting:

Recently, a team of computer scientists at Concordia University in Montreal took advantage of an unusual set of data to test another method of determining e-mail authorship. In 2003, the Federal Energy Regulatory Commission, as part of its investigation into Enron, released into the public domain hundreds of thousands of employee e-mails, which have become an important resource for forensic research. (Unlike novels, newspapers or blogs, e-mails are a private form of communication and aren’t usually available as a sizable corpus for analysis.)

Using this data, Benjamin C. M. Fung, who specializes in data mining, and Mourad Debbabi, a cyber-forensics expert, collaborated on a program that can look at an anonymous e-mail message and predict who wrote it out of a pool of known authors, with an accuracy of 80 to 90 percent. (Ms. Chaski claims 95 percent accuracy with her syntactic method.) The team identifies bundles of linguistic features, hundreds in all. They catalog everything from the position of greetings and farewells in e-mails to the preference of a writer for using symbols (say, “$” or “%”) or words (“dollars” or “percent”). Combining all of those features, they contend, allows them to determine what they call a person’s “write-print.”

It seems reasonable that we each have a linguistic fingerprint, although 1) there is far less variation among them than among actual fingerprints, and 2) they’re easier to fake. It’s probably not much of a stretch to take software that “identifies bundles of linguistic features, hundreds in all” and use the data to automatically modify my writing to look like someone else’s.
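To make the idea concrete, here’s a toy sketch of how such a system might work. This is purely illustrative, not the Concordia team’s actual method; the handful of features and the cosine-similarity matching are my own simplifications:

```typescript
// Toy stylometry sketch. Not the Concordia "write-print" system; just
// an illustration of feature-vector authorship matching.

type FeatureVector = number[];

// A handful of illustrative stylistic markers. A real system would use
// hundreds of features: greeting/farewell positions, symbol-vs-word
// preferences ("$" vs. "dollars"), function-word rates, and so on.
const MARKERS = ["$", "%", "dollars", "percent", "regards", "thanks"];

// Count each marker, normalized by the length of the text.
function extractFeatures(text: string): FeatureVector {
  const lower = text.toLowerCase();
  const wordCount = lower.split(/\s+/).filter(Boolean).length || 1;
  return MARKERS.map((m) => (lower.split(m).length - 1) / wordCount);
}

// Cosine similarity between two feature vectors.
function cosine(a: FeatureVector, b: FeatureVector): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: FeatureVector) =>
    Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
}

// Attribute an anonymous e-mail to whichever known author's profile
// (a feature vector built from that author's past e-mails) it most
// closely resembles.
function attribute(
  anonymous: string,
  profiles: Map<string, FeatureVector>
): string {
  const target = extractFeatures(anonymous);
  let best = "unknown";
  let bestScore = -Infinity;
  for (const [author, profile] of profiles) {
    const score = cosine(target, profile);
    if (score > bestScore) {
      bestScore = score;
      best = author;
    }
  }
  return best;
}
```

Note that the same machinery points at the evasion I describe above: compute the feature vector of the author you want to imitate, then nudge your own markers until the similarity score favors him instead of you.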

EDITED TO ADD (8/3): A good criticism of the science behind author recognition, and a paper on how to evade these systems.

Posted on August 3, 2011 at 6:08 AM

Assisting a Hostage Taker via Facebook

It’s a new world:

An armed Valdez, 36, held a woman hostage at a motel in a tense 16-hour, overnight standoff with SWAT teams, all while finding time to keep his family and friends updated on Facebook.

[…]

In all, Valdez made six posts and added at least a dozen new friends.

His family and friends responded with 100 comments. Some people offered words of support, and others pleaded for him to “do the right thing.”

[…]

“I’m currently in a standoff … kinda ugly, but ready for whatever,” Valdez wrote in his first post at 11.23pm. “I love u guyz and if I don’t make it out of here alive that I’m in a better place and u were all great friends.”

[…]

At 2.04am, Valdez posted two pictures of himself and the woman. “Got a cute ‘Hostage’ huh,” Valdez wrote of the photographs.

At 3.48am, one of Valdez’ friends posted that police had a “gunner in the bushes stay low.” Valdez thanked him in a reply.

[…]

Police believe that responses from Valdez’s friend gave him an advantage.

Authorities are now discussing whether some of Valdez’ friends should be arrested and charged with obstruction of justice for hampering a police investigation. “We’re not sure yet how to deal with it,” said Croyle.

Posted on June 24, 2011 at 11:40 AM

Get Your Terrorist Alerts on Facebook and Twitter

Colors are so last decade:

The U.S. government’s new system to replace the five color-coded terror alerts will have two levels of warnings, elevated and imminent, that will be relayed to the public only under certain circumstances for limited periods of time, sometimes using Facebook and Twitter, according to a draft Homeland Security Department plan obtained by The Associated Press.

Some terror warnings could be withheld from the public entirely if announcing a threat would risk exposing an intelligence operation or a current investigation, according to the government’s confidential plan.

Like a carton of milk, the new terror warnings will each come with a stamped expiration date.

Specific and limited are good. Twitter and Facebook: I’m not so sure.

But what could go wrong?

An errant keystroke touched off a brief panic Thursday at the University of Illinois at Urbana-Champaign when an emergency message accidentally was sent out saying an “active shooter” was on campus.

The first message was sent on the university’s emergency alert system at 10:40 a.m., reaching 87,000 cellphones and email addresses, according to the university.

The university corrected the false alarm about 12 minutes later and said the alert was caused when a worker updating the emergency messaging system inadvertently sent the message rather than saving it.

The emails are designed to go out quickly in the event of an emergency, so the false alarm could not be canceled before it went out, the university said.

Posted on April 8, 2011 at 1:23 PM

Hacking HTTP Status Codes

One website can learn if you’re logged into other websites.

When you visit my website, I can automatically and silently determine if you’re logged into Facebook, Twitter, Gmail and Digg. There are almost certainly thousands of other sites with this issue too, but I picked a few vulnerable well known ones to get your attention. You may not care that I can tell you’re logged into Gmail, but would you care if I could tell you’re logged into one or more porn or warez sites? Perhaps http://oppressive-regime.example.org/ would like to collect a list of their users who are logged into http://controversial-website.example.com/?
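The underlying trick is simple enough to sketch in a few lines of browser-side code. The endpoint below is hypothetical; the real attack uses site-specific URLs that respond differently (a redirect, a 404, an image vs. an HTML error page) depending on whether the visitor’s cookies carry a valid session:

```typescript
// Browser-side sketch of login detection via cross-site resource
// loading. The probe URL is hypothetical.

function probeLogin(probeUrl: string): Promise<boolean> {
  return new Promise((resolve) => {
    const img = new Image();
    // The browser attaches the visitor's cookies for the target site
    // to this request. If the endpoint serves an image only to
    // logged-in users, onload fires for them and onerror for everyone
    // else, leaking one bit of login state to my page.
    img.onload = () => resolve(true);
    img.onerror = () => resolve(false);
    img.src = probeUrl;
  });
}

// Hypothetical usage against an endpoint that returns an image for
// authenticated sessions and an error page otherwise.
probeLogin("https://social-site.example.com/avatar/me.png").then((loggedIn) =>
  console.log(loggedIn ? "visitor is logged in" : "visitor is not logged in")
);
```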

Posted on February 2, 2011 at 2:26 PM

Whitelisting vs. Blacklisting

The whitelist/blacklist debate is far older than computers, and it’s instructive to recall what works where. Physical security works generally on a whitelist model: if you have a key, you can open the door; if you know the combination, you can open the lock. We do it this way not because it’s easier—although it is generally much easier to make a list of people who should be allowed through your office door than a list of people who shouldn’t—but because it’s a security system that can be implemented automatically, without people.

To find blacklists in the real world, you have to start looking at environments where almost everyone is allowed. Casinos are a good example: everyone can come in and gamble except those few specifically listed in the casino’s black book or the more general Griffin book. Some retail stores have the same model—a Google search on “banned from Wal-Mart” results in 1.5 million hits, including Megan Fox—although you have to wonder about enforcement. Does Wal-Mart have the same sort of security manpower as casinos?

National borders certainly have that kind of manpower, and Marcus is correct to point to passport control as a system with both a whitelist and a blacklist. There are people who are allowed in with minimal fuss, people who are summarily arrested with as minimal a fuss as possible, and people in the middle who receive some amount of fussing. Airport security works the same way: the no-fly list is a blacklist, and people with redress numbers are on the whitelist.

Computer networks share characteristics with your office and Wal-Mart: sometimes you only want a few people to have access, and sometimes you want almost everybody to have access. And you see whitelists and blacklists at work in computer networks. Access control is whitelisting: if you know the password, or have the token or biometric, you get access. Antivirus is blacklisting: everything coming into your computer from the Internet is assumed to be safe unless it appears on a list of bad stuff. On computers, unlike the real world, it takes no extra manpower to implement a blacklist—the software can do it largely for free.
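The structural difference is worth spelling out. Here’s a minimal sketch (the names are illustrative):

```typescript
// Whitelist vs. blacklist, reduced to their defaults.

const allowedUsers = new Set(["alice", "bob"]); // whitelist
const knownBadHashes = new Set(["deadbeef"]);   // blacklist

// Whitelist (access control): default deny. Only listed identities
// get through the door.
function mayOpenDoor(user: string): boolean {
  return allowedUsers.has(user);
}

// Blacklist (antivirus): default allow. Everything runs unless it
// matches a known-bad entry.
function mayExecute(fileHash: string): boolean {
  return !knownBadHashes.has(fileHash);
}
```

The security difference is entirely in the default: a whitelist fails closed, and a blacklist fails open.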

Traditionally, execution control has been based on a blacklist. Computers are so complicated and applications so varied that it just doesn’t make sense to limit users to a specific set of applications. The exception is constrained environments, such as computers in hotel lobbies and airline club lounges. On those, you’re often limited to an Internet browser and a few common business applications.

Lately, we’re seeing more whitelisting on closed computing platforms. The iPhone works on a whitelist: if you want a program to run on the phone, you need to get it approved by Apple and put in the iPhone store. Your Wii game machine works the same way. This is done primarily because the manufacturers want to control the economic environment, but it’s being sold partly as a security measure. But in this case, more security equals less liberty; do you really want your computing options limited by Apple, Microsoft, Google, Facebook, or whoever controls the particular system you’re using?

Turns out that many people do. Apple’s control over its apps hasn’t seemed to hurt iPhone sales, and Facebook’s control over its apps hasn’t seemed to affect Facebook’s user numbers. And honestly, quite a few of us would have had an easier time over the Christmas holidays if we could have implemented a whitelist on the computers of our less-technical relatives.

For these two reasons, I think the whitelist model will continue to make inroads into our general purpose computers. And those of us who want control over our own environments will fight back—perhaps with a whitelist we maintain personally, but more probably with a blacklist.

This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. You can read Marcus’s half there as well.

Posted on January 28, 2011 at 5:02 AM

Risk Reduction Strategies on Social Networking Sites

By two teenagers:

Mikalah uses Facebook but when she goes to log out, she deactivates her Facebook account. She knows that this doesn’t delete the account; that’s the point. She knows that when she logs back in, she’ll be able to reactivate the account and have all of her friend connections back. But when she’s not logged in, no one can post messages on her wall or send her messages privately or browse her content. But when she’s logged in, they can do all of that. And she can delete anything that she doesn’t like. Michael Ducker calls this practice “super-logoff” when he noticed a group of gay male adults doing the exact same thing.

And:

Shamika doesn’t deactivate her Facebook profile but she does delete every wall message, status update, and Like shortly after it’s posted. She’ll post a status update and leave it there until she’s ready to post the next one or until she’s done with it. Then she’ll delete it from her profile. When she’s done reading a friend’s comment on her page, she’ll delete it. She’ll leave a Like up for a few days for her friends to see and then delete it.

I’ve heard this practice called wall scrubbing.

In any reasonably competitive market economy, sites would offer these as options to better serve their customers. But in the give-it-away user-as-product economy we so often have on the Internet, the social networking sites have a different agenda.

Posted on December 1, 2010 at 1:27 PM
