Entries Tagged "medicine"


Mapping Drug Use by Testing Sewer Water

I wrote about this in 2007, but there’s new research:

Scientists from Oregon State University, the University of Washington and McGill University partnered with city workers in 96 communities, including Pendleton, Hermiston and Umatilla, to gather samples on one day, March 4, 2008. The scientists then tested the samples for evidence of methamphetamine, cocaine and ecstasy, or MDMA.

Addiction specialists were not surprised by the researchers’ central discovery, that every one of the 96 cities—representing 65 percent of Oregon’s population—had a quantifiable level of methamphetamine in its wastewater.

“This validates what we suspected about methamphetamine use in Oregon,” said Geralyn Brennan, addiction prevention epidemiologist for the Department of Human Services.

Drug researchers previously determined the extent of illicit drug use through mortality records and random surveys, which are not considered entirely reliable. Survey respondents may not accurately recall how much or how often they use illicit drugs and they may not be willing to tell the truth. Surveys also gathered information about large regions of the state, not individual cities.

The data gathered from municipal wastewater, however, are concrete and reveal a detailed snapshot of drug use for that day. Researchers placed cities into ranks based on a drug’s “index load” – average milligrams per person per day.
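
The arithmetic behind that index is simple enough to sketch. Here is a minimal, hypothetical calculation (the function name and all numbers are illustrative, not the study’s actual data): multiply the measured concentration by the daily wastewater flow, convert to milligrams, and divide by the population served.

    # "Index load": average milligrams of drug per person per day,
    # estimated from a city's wastewater. All numbers are hypothetical.

    def index_load_mg_per_person(concentration_ng_per_l: float,
                                 flow_l_per_day: float,
                                 population: int) -> float:
        """Convert a measured concentration into mg per person per day."""
        total_mg_per_day = concentration_ng_per_l * flow_l_per_day / 1e6  # ng -> mg
        return total_mg_per_day / population

    # e.g., 200 ng/L of methamphetamine in 40 million liters of daily flow
    # for a city of 50,000 people:
    print(index_load_mg_per_person(200, 40e6, 50_000))  # 0.16

Cities can then be ranked by that per-capita figure regardless of their size.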

These techniques can detect drug usage at individual houses. It’s just a matter of where you take your samples.

Posted on July 23, 2009 at 6:09 AM

Second SHB Workshop Liveblogging (8)

The penultimate session of the conference was “Privacy,” moderated by Tyler Moore.

Alessandro Acquisti, Carnegie Mellon University (suggested reading: What Can Behavioral Economics Teach Us About Privacy?; Privacy in Electronic Commerce and the Economics of Immediate Gratification), presented research on how people value their privacy. He started by listing a variety of cognitive biases that affect privacy decisions: illusion of control, overconfidence, optimism bias, endowment effect, and so on. He discussed two experiments. The first demonstrated a “herding effect”: if a subject believes that others reveal sensitive behavior, the subject is more likely to also reveal sensitive behavior. The second examined the “frog effect”: do privacy intrusions alert or desensitize people to revealing personal information? What he found is that people tend to set their privacy level at the beginning of a survey, and don’t respond well to being asked easy questions at first and then sensitive questions at the end. In the discussion, Joe Bonneau asked him about the notion that people’s privacy protections tend to ratchet up over time; he didn’t have conclusive evidence, but gave several possible explanations for the phenomenon.

Adam Joinson, University of Bath (suggested reading: Privacy, Trust and Self-Disclosure Online; Privacy concerns and privacy actions), also studies how people value their privacy. He talked about expressive privacy—privacy that allows people to express themselves and form interpersonal relationships. His research showed that differences between how people use Facebook in different countries depend on how much people trust Facebook as a company, rather than how much people trust other Facebook users. Another study looked at posts from Secret Tweet and Twitter. He found 16 markers that allowed him to automatically determine which tweets contain sensitive personal information and which do not, with high probability. Then he tried to determine if people with large Twitter followings post fewer secrets than people who are only twittering to a few people. He found absolutely no difference.
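
The tweet study is essentially marker-based classification. The post doesn’t list the 16 markers, so the patterns below are invented stand-ins; this is only a sketch of the shape of the approach, not Joinson’s actual method:

    # Hypothetical marker-based detector for sensitive tweets. The real
    # study's 16 markers are not listed here; these are invented examples.
    import re

    MARKERS = [
        r"\bdon'?t tell\b",
        r"\bnobody knows\b",
        r"\bnever told anyone\b",
    ]

    def looks_sensitive(tweet: str) -> bool:
        """Flag a tweet if any disclosure marker matches."""
        text = tweet.lower()
        return any(re.search(pattern, text) for pattern in MARKERS)

    print(looks_sensitive("Nobody knows I hate my job."))  # True
    print(looks_sensitive("Great coffee this morning."))   # False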

Peter Neumann, SRI (suggested reading: Holistic systems; Risks; Identity and Trust in Context), talked about the lack of medical privacy (too many people have access to your data), about voting (the privacy problem makes the voting problem a lot harder, and the end-to-end voting security/privacy problem is much harder than just securing voting machines), and about privacy in China (the government is requiring all computers sold in China to include software that allows it to eavesdrop on users). Any would-be solution needs to reflect the ubiquity of the threat. When we design systems, we need to anticipate what the privacy problems will be. Privacy problems are everywhere you look, and ordinary people have no idea of the depth of the problem.

Eric Johnson, Dartmouth College (suggested reading: Access Flexibility with Escalation and Audit; Security through Information Risk Management), studies the information access problem from a business perspective. He’s been doing field studies in companies like retail banks and investment banks, and found that role-based access control fails because companies can’t determine who has what role. Even worse, roles change quickly, especially in large complex organizations. For example, one business group of 3,000 people experienced 1,000 role changes within three months. The result is that organizations do access control badly, either over-entitling or under-entitling people. But since getting the job done is the most important thing, organizations tend to over-entitle: give people more access than they need. His current work is to find the right set of incentives and controls to set access more properly. The challenge is to do this without making people risk-averse. In the discussion, he agreed that a perfect access control system is not possible, and that organizations should probably allow a certain amount of access control violations—similar to the idea of posting a 55 mph speed limit but not ticketing people unless they go over 70 mph.
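
The speed-limit analogy maps naturally onto a grant-and-audit policy rather than grant-or-deny. A minimal sketch, with entirely hypothetical roles and threshold: out-of-role requests succeed but are logged, and a user is flagged for review only after exceeding the tolerance.

    # "55 mph limit, ticket at 70" access control, sketched. The role
    # table and threshold are hypothetical.
    from collections import defaultdict

    ROLE_PERMISSIONS = {
        "teller":  {"account_lookup", "deposit"},
        "analyst": {"account_lookup", "run_report"},
    }
    AUDIT_THRESHOLD = 5  # tolerated out-of-role accesses per user

    violation_counts = defaultdict(int)

    def request_access(user: str, role: str, resource: str) -> bool:
        """Grant the request; record rather than refuse out-of-role use."""
        if resource in ROLE_PERMISSIONS.get(role, set()):
            return True
        violation_counts[user] += 1  # over-entitlement, but on the record
        if violation_counts[user] > AUDIT_THRESHOLD:
            print(f"AUDIT: {user} is over the out-of-role tolerance")
        return True  # getting the job done still comes first

    request_access("alice", "teller", "run_report")  # granted, but logged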

Christine Jolls, Yale Law School (suggested reading: Rationality and Consent in Privacy Law, Employee Privacy), made the point that people regularly share their most private information with their intimates—so privacy is not about secrecy, it’s more about control. There are moments when people make pretty big privacy decisions. For example, they grant employers the rights to monitor their e-mail, or test their urine without notice. In general, courts hold that a blanket signing away of privacy rights—“you can test my urine on any day in the future”—is not valid, but an immediate signing away of privacy rights—“you can test my urine today”—is. Jolls believes that this is reasonable for several reasons, such as optimism bias and an overfocus on the present at the expense of the future. Without realizing it, the courts have implemented the system that behavioral economics would find optimal. During the discussion, she talked about how coercion figures into this; the U.S. legal system tends not to be concerned with it.

Andrew Adams, University of Reading (suggested reading: Regulating CCTV), also looks at attitudes toward privacy on social networking services. His results are preliminary, based on interviews with university students in Canada, Japan, and the UK, and are very concordant with what danah boyd and Joe Bonneau said earlier. From the UK: People join social networking sites to increase their level of interaction with people they already know in real life. Revealing personal information is okay, but revealing too much is bad. Even more interestingly, it’s not okay to reveal more about others than they reveal themselves. From Japan: People are more open to making friends online. There’s more anonymity. It’s not okay to reveal information about others, but “the fault of this lies as much with the person whose data was revealed in not choosing friends wisely.” This victim responsibility is a common theme with other privacy and security elements in Japan. Data from Canada is still being compiled.

Great phrase: the “laundry belt”—close enough for students to go home on weekends with their laundry, but far enough away so they don’t feel as if their parents are looking over their shoulder—typically two hours by public transportation (in the UK).

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 3:01 PM

Virginia Data Ransom

This is bad:

On Thursday, April 30, the secure site for the Virginia Prescription Monitoring Program (PMP) was replaced with a $US10M ransom demand:

“I have your shit! In *my* possession, right now, are 8,257,378 patient records and a total of 35,548,087 prescriptions. Also, I made an encrypted backup and deleted the original. Unfortunately for Virginia, their backups seem to have gone missing, too. Uhoh :(

For $10 million, I will gladly send along the password.”

More details:

Hackers last week broke into a Virginia state Web site used by pharmacists to track prescription drug abuse. They deleted records on more than 8 million patients and replaced the site’s homepage with a ransom note demanding $10 million for the return of the records, according to a posting on Wikileaks.org, an online clearinghouse for leaked documents.

[…]

Whitley Ryals said the state discovered the intrusion on April 30, after which time it shut down Web site access to dozens of pages serving the Department of Health Professions. The state also has temporarily discontinued e-mail to and from the department pending the outcome of a security audit, Whitley Ryals said.

More. This doesn’t seem like a professional extortion/ransom demand, but still….

EDITED TO ADD (5/13): There are backups, and here’s a Q&A with details on exactly what they were storing.

Posted on May 7, 2009 at 7:10 AM

Unfair and Deceptive Data Trade Practices

Do you know what your data did last night? Almost none of the more than 27 million people who took the RealAge quiz realized that their personal health data was being used by drug companies to develop targeted e-mail marketing campaigns.

There’s a basic consumer protection principle at work here, and it’s the concept of “unfair and deceptive” trade practices. Basically, a company shouldn’t be able to say one thing and do another: sell used goods as new, lie on ingredients lists, advertise prices that aren’t generally available, claim features that don’t exist, and so on.

Buried in RealAge’s 2,400-word privacy policy is this disclosure: “If you elect to say yes to becoming a free RealAge Member, we will periodically send you free newsletters and e-mails that directly promote the use of our site(s) or the purchase of our products or services and may contain, in whole or in part, advertisements for third parties which relate to marketed products of selected RealAge partners.”

They maintain that when you join the website, you consent to receiving pharmaceutical company spam. But since that isn’t spelled out, it’s not really informed consent. That’s deceptive.

Cloud computing is another technology where users entrust their data to service providers. Salesforce.com, Gmail, and Google Docs are examples; your data isn’t on your computer—it’s out in the “cloud” somewhere—and you access it from your web browser. Cloud computing has significant benefits for customers and huge profit potential for providers. It’s one of the fastest growing IT market segments—69% of Americans now use some sort of cloud computing services—but the business is rife with shady, if not outright deceptive, advertising.

Take Google, for example. Last month, the Electronic Privacy Information Center (I’m on its board of directors) filed a complaint with the Federal Trade Commission concerning Google’s cloud computing services. On its website, Google repeatedly assures customers that their data is secure and private, while published vulnerabilities demonstrate that it is not. Google’s not foolish, though; its Terms of Service explicitly disavow any warranty or any liability for harm that might result from Google’s negligence, recklessness, malevolent intent, or even purposeful disregard of existing legal obligations to protect the privacy and security of user data. EPIC claims that’s deceptive.

Facebook isn’t much better. Its plainly written (and not legally binding) Statement of Principles contains an admirable set of goals, but its denser and more legalistic Statement of Rights and Responsibilities undermines a lot of it. One research group that studies these documents called it “democracy theater”: Facebook wants the appearance of involving users in governance, without the messiness of actually having to do so. Deceptive.

These issues are not identical. RealAge is hiding what it does with your data. Google is trying to both assure you that your data is safe and duck any responsibility when it’s not. Facebook wants to market a democracy but run a dictatorship. But they all involve trying to deceive the customer.

Cloud computing services like Google Docs, and social networking sites like RealAge and Facebook, bring with them significant privacy and security risks over and above traditional computing models. Unlike data on my own computer, which I can protect to whatever level I believe prudent, I have no control over any of these sites, nor any real knowledge of how these companies protect my privacy and security. I have to trust them.

This may be fine—the advantages might very well outweigh the risks—but users often can’t weigh the trade-offs because these companies are going out of their way to hide the risks.

Of course, companies don’t want people to make informed decisions about where to leave their personal data. RealAge wouldn’t get 27 million members if its webpage clearly stated “you are signing up to receive e-mails containing advertising from pharmaceutical companies,” and Google Docs wouldn’t get five million users if its webpage said “We’ll take some steps to protect your privacy, but you can’t blame us if something goes wrong.”

And of course, trust isn’t black and white. If, for example, Amazon tried to use customer credit card info to buy itself office supplies, we’d all agree that that was wrong. If it used customer names to solicit new business from their friends, most of us would consider this wrong. When it uses buying history to try to sell customers new books, many of us appreciate the targeted marketing. Similarly, no one expects Google’s security to be perfect. But if it didn’t fix known vulnerabilities, most of us would consider that a problem.

This is why understanding is so important. For markets to work, consumers need to be able to make informed buying decisions. They need to understand both the costs and benefits of the products and services they buy. Allowing sellers to manipulate the market by outright lying, or even by hiding vital information, about their products breaks capitalism—and that’s why the government has to step in to ensure markets work smoothly.

Last month, Mary K. Engle, Acting Deputy Director of the FTC’s Bureau of Consumer Protection, said: “a company’s marketing materials must be consistent with the nature of the product being offered. It’s not enough to disclose the information only in the fine print of a lengthy online user agreement.” She was speaking about Digital Rights Management and, specifically, an incident where Sony used a music copy-protection scheme without disclosing that it secretly installed software on customers’ computers. DRM is different from cloud computing or even online surveys and quizzes, but the principle is the same.

Engle again: “if your advertising giveth and your EULA [license agreement] taketh away, don’t be surprised if the FTC comes calling.” That’s the right response from government.

A version of this article originally appeared on The Wall Street Journal’s website.

EDITED TO ADD (4/29): Two rebuttals.

Posted on April 27, 2009 at 6:16 AM

What to Fear

Nice rundown of the statistics.

The single greatest killer of Americans is the so-called “lifestyle disease.” Somewhere between half a million and a million of us get a short ride in a long hearse every year because of smoking, lousy diets, parking our bodies in front of the TV instead of operating them, and downing yet another six-pack and/or tequila popper.

According to the US Department of Health and Human Services, between 310,000 and 580,000 of us will commit suicide by cigarette this year. Another 260,000 to 470,000 will go in the ground due to poor diet and sedentary lifestyle. And some 85,000 of us will drink to our own departure.

After the person in the mirror, the next most dangerous individual we’re ever likely to encounter is one in a white coat. Something like 200,000 of us will experience “cessation of life” due to medical errors—botched procedures, mis-prescribed drugs and “nosocomial infections.” (The really nasty ones you get from treatment in a hospital or healthcare service unit.)

The next most dangerous encounter the average American is likely to have is with a co-worker with an infection. Or a doorknob, stair railing or restaurant utensil touched by someone with the crud. “Microbial Agents” (read bugs like flu and pneumonia) will send 75,000 of us to meet the Reaper this year.

If we live through those social encounters, the next greatest danger is “Toxic Agents”—asbestos in our ceiling, lead in our pipes, the stuff we spray on our lawns or pour down our clogged drains. Annual body count from these handy consumer products is around 55,000.

After that, the most dangerous person in our lives is the one behind the wheel. About 42,000 of us will cash our chips in our rides this year. More than half will do so because we didn’t wear a seat belt. (Lest it wrinkle our suit.)

Some 31,000 of us will commit suicide by intention this year. (As opposed to not fastening our seat belts or smoking, by which we didn’t really mean to kill ourselves.)

About 30,000 of us will die due to our sexual behaviors, through which we’ll contract AIDS or Hepatitis C. Another 20,000 of us will pop off due to illicit drug use.

The next scariest person in our lives is someone we know who’s having a really bad day. Over 16,000 Americans will be murdered this year, most often by a relative or friend.
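
Put side by side, the article’s own numbers make the ordering plain. A quick tally (ranges collapsed to rough midpoints):

    # Annual U.S. deaths by cause, using the figures quoted above;
    # ranges are collapsed to rough midpoints.
    deaths_per_year = {
        "smoking":                445_000,  # 310,000-580,000
        "poor diet / inactivity": 365_000,  # 260,000-470,000
        "medical errors":         200_000,
        "alcohol":                 85_000,
        "microbial agents":        75_000,
        "toxic agents":            55_000,
        "motor vehicles":          42_000,
        "suicide":                 31_000,
        "sexual behavior":         30_000,
        "illicit drugs":           20_000,
        "homicide":                16_000,
    }
    for cause, count in sorted(deaths_per_year.items(), key=lambda kv: -kv[1]):
        print(f"{cause:24} {count:>8,}")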

Posted on April 7, 2009 at 6:14 AM

HIPAA Accountability in Stimulus Bill

On page 379 of the current stimulus bill, there’s a bit about establishing a website of companies that lost patient information:

(4) POSTING ON HHS PUBLIC WEBSITE—The Secretary shall make available to the public on the Internet website of the Department of Health and Human Services a list that identifies each covered entity involved in a breach described in subsection (a) in which the unsecured protected health information of more than 500 individuals is acquired or disclosed.

I’m not sure if this passage survived the final bill, but it will be interesting if it is now law.

EDITED TO ADD (3/13): It’s law.

Posted on February 18, 2009 at 12:28 PM

A Rational Response to Peanut Allergies and Children

Some parents of children with peanut allergies are not asking their school to ban peanuts. They consider it more important that teachers know which children are likely to have a reaction, and how to deal with it when it happens; i.e., how to use an EpiPen.

This is a much more resilient response to the threat. It works even when the peanut ban fails. It works whether the child has an anaphylactic reaction to nuts, fruit, dairy, gluten, or whatever.

It’s so rare to see rational risk management when it comes to children and safety; I just had to blog it.

Related blog post, including a very lively comments section.

Posted on January 27, 2009 at 2:10 PM

"Nut Allergy" Fear and Overreaction

Good article:

Professor Nicolas Christakis, a professor of medical sociology at Harvard Medical School, told the BMJ there was “a gross over-reaction to the magnitude of the threat” posed by food allergies, and particularly nut allergies.

In the US, serious allergic reactions to foods cause just 2,000 of more than 30 million hospitalisations a year and comparatively few deaths—150 a year from all food allergies combined.

In the UK there are around 10 deaths each year from food allergies.

Professor Christakis said the issue was not whether nut allergies existed or whether they could occasionally be serious. Nor was the issue whether reasonable preventative steps should be made for the few children who had documented serious allergies, he argued.

“The issue is what accounts for the extreme responses to nut allergies.”

He said the number of US schools declaring themselves to be entirely “nut free”—banning staples like peanut butter, homemade baked goods and any foods without detailed ingredient labels—was rising, despite clear evidence that such restrictions were unnecessary.

“School entrances have signs admonishing visitors to wash their hands before entry to avoid [nut] contamination.”

He said these responses were extreme and had many of the hallmarks of mass psychogenic illness (MPI), previously known as epidemic hysteria.

Sound familiar?

Posted on December 19, 2008 at 6:56 AM
