Freakonomics Q&A
I just did a Q&A on the Freakonomics blog. Nothing regular readers of this blog haven’t heard before, but it was fun all the same. There’s also a Slashdot thread on the Q&A.
This is a conversation between Marcus Ranum and me. It will appear in Information Security Magazine this month.
Bruce Schneier: Predictions are easy and difficult. Roy Amara of the Institute for the Future once said: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Moore’s Law is easy: In 10 years, computers will be 100 times more powerful. My desktop will fit into my cell phone, we’ll have gigabit wireless connectivity everywhere, and personal networks will connect our computing devices and the remote services we subscribe to. Other aspects of the future are much more difficult to predict. I don’t think anyone can predict what the emergent properties of 100x computing power will bring: new uses for computing, new paradigms of communication. A 100x world will be different, in ways that will be surprising.
But throughout history and into the future, the one constant is human nature. There hasn’t been a new crime invented in millennia. Fraud, theft, impersonation and counterfeiting are perennial problems that have been around since the beginning of society. During the last 10 years, these crimes have migrated into cyberspace, and over the next 10, they will migrate into whatever computing, communications and commerce platforms we’re using.
The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.
I don’t see anything by 2017 that will fundamentally alter this. Do you?
Marcus Ranum: I think you’re right; at a meta-level, the problems are going to stay the same. What’s shocking and disappointing to me is that our responses to those problems also remain the same, in spite of the obvious fact that they aren’t effective. It’s 2007 and we haven’t seemed to accept that:
The list could go on for several pages, but it would be too depressing. It would be “Marcus’ list of obvious stuff that everybody knows but nobody accepts.”
You missed one important aspect of the problem: By 2017, computers will be even more important to our lives, economies and infrastructure.
If you’re right that crime remains a constant, and I’m right that our responses to computer security remain ineffective, 2017 is going to be a lot less fun than 2007 was.
I’ve been pretty dismissive of the concepts of cyberwar and cyberterror. That dismissal was mostly motivated by my observation that the patchworked and kludgy nature of most computer systems acts as a form of defense in its own right, and that real-world attacks remain more cost-effective and practical for terror purposes.
I’d like to officially modify my position somewhat: I believe it’s increasingly likely that we’ll suffer catastrophic failures in critical infrastructure systems by 2017. It probably won’t be terrorists that do it, though. More likely, we’ll suffer some kind of horrible outage because a critical system was connected to a non-critical system that was connected to the Internet so someone could get to MySpace—and that ancillary system gets a piece of malware. Or it’ll be some incomprehensibly complex software, layered with Band-Aids and patches, that topples over when some “merely curious” hacker pushes the wrong e-button. We’ve got some bad-looking trend lines; all the indicators point toward a system that is more complex, less well-understood and more interdependent. With infrastructure like that, who needs enemies?
You’re worried criminals will continue to penetrate into cyberspace, and I’m worried complexity, poor design and mismanagement will be there to meet them.
Bruce Schneier: I think we’ve already suffered that kind of critical systems failure. The August 2003 blackout that covered much of the northeastern United States and Canada—50 million people—was caused by a software bug.
I don’t disagree that things will continue to get worse. Complexity is the worst enemy of security, and the Internet—and the computers and processes connected to it—is getting more complex all the time. So things are getting worse, even though security technology is improving. One could say those critical insecurities are another emergent property of the 100x world of 2017.
Yes, IT systems will continue to become more critical to our infrastructure—banking, communications, utilities, defense, everything.
By 2017, the interconnections will be so critical that it will probably be cost-effective—and low-risk—for a terrorist organization to attack over the Internet. I also deride talk of cyberterror today, but I don’t think I will in another 10 years.
While the trends of increased complexity and poor management don’t look good, there is another trend that points to more security—but neither you nor I is going to like it. That trend is IT as a service.
By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.
Marcus Ranum: You’re right about the shift toward services—it’s the ultimate way to lock in customers.
If you can make it difficult for the customer to get his data back after you’ve held it for a while, you can effectively prevent the customer from ever leaving. And of course, customers will be told “trust us, your data is secure,” and they’ll take that for an answer. The back-end systems that will power the future of utility computing are going to be just as full of flaws as our current systems. Utility computing will also completely fail to address the problem of transitive trust unless people start shifting to a more reliable endpoint computing platform.
That’s the problem with where we’re heading: the endpoints are not going to get any better. People are attracted to appliances because they get around the headache of system administration (which, in today’s security environment, equates to “endless patching hell”), but underneath the slick surface of the appliance we’ll have the same insecure nonsense we’ve got with general-purpose desktops. In fact, the development of appliances running general-purpose operating systems really does raise the possibility of a software monoculture. By 2017, do you think system engineering will progress to the point where we won’t see a vendor release a new product and instantly create an installed base of 1 million-plus users with root privileges? I don’t, and that scares me.
So if you’re saying the trend is to continue putting all our eggs in one basket and blithely trusting that basket, I agree.
Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and Web surf. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.
Bruce Schneier: You’re right about the endpoints not getting any better. I’ve written again and again how measures like two-factor authentication aren’t going to make electronic banking any more secure. The problem is if someone has stuck a Trojan on your computer, it doesn’t matter how many ways you authenticate to the banking server; the Trojan is going to perform illicit transactions after you authenticate.
It’s the same with a lot of our secure protocols. SSL, SSH, PGP and so on all assume the endpoints are secure, and the threat is in the communications system. But we know the real risks are the endpoints.
And a misguided attempt to solve this is going to dominate computing by 2017. I mentioned software-as-a-service, which you point out is really a trick that allows businesses to lock up their customers for the long haul. I pointed to the iPhone, whose draconian rules about who can write software for that platform accomplishes much the same thing. We could also point to Microsoft’s Trusted Computing, which is being sold as a security measure but is really another lock-in mechanism designed to keep users from switching to “unauthorized” software or OSes.
I’m reminded of the post-9/11 anti-terrorist hysteria—we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.
Computing is heading in the same direction, although this time it is industry that wants control over its users. They’re going to sell it to us as a security system—they may even have convinced themselves it will improve security—but it’s fundamentally a control system. And in the long run, it’s going to hurt security.
Imagine we’re living in a world of Trustworthy Computing, where no software can run on your Windows box unless Microsoft approves it. That brain drain you talk about won’t be a problem, because security won’t be in the hands of the user. Microsoft will tout this as the end of malware, until some hacker figures out how to get his software approved. That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.
By then, though, we’ll be ready to start building real security. As you pointed out, networks will be so embedded into our critical infrastructure—and there’ll probably have been at least one real disaster by then—that we’ll have no choice. The question is how much we’ll have to dismantle and build over to get it right.
Marcus Ranum: I agree with your gloomy view of the future. It’s ironic that the counterculture “hackers” have enabled (by providing an excuse) today’s run-patch-run-patch-reboot software environment and tomorrow’s software Stalinism.
I don’t think we’re going to start building real security. Because real security is not something you build—it’s something you get when you leave out all the other garbage as part of your design process. Purpose-designed and purpose-built software is more expensive to build, but cheaper to maintain. The prevailing wisdom about software return on investment doesn’t factor in patching and patch-related downtime, because if it did, the numbers would stink. Meanwhile, I’ve seen purpose-built Internet systems run for years without patching because they didn’t rely on bloated components. I doubt industry will catch on.
The future will be captive data running on purpose-built back-end systems—and it won’t be a secure future, because turning your data over always decreases your security. Few possess the understanding of complexity and good design principles necessary to build reliable or secure systems. So, effectively, outsourcing—or other forms of making security someone else’s problem—will continue to seem attractive.
That doesn’t look like a very rosy future to me. It’s a shame, too, because getting this stuff correct is important. You’re right that there are going to be disasters in our future.
I think they’re more likely to be accidents where the system crumbles under the weight of its own complexity, rather than hostile action. Will we even be able to figure out what happened, when it happens?
Folks, the captains have illuminated the “Fasten your seat belts” sign. We predict bumpy conditions ahead.
EDITED TO ADD (12/4): Commentary on the point/counterpoint.
I’ve been saying this for a while now:
Since the outbreak of a cybercrime epidemic that has cost the American economy billions of dollars, the federal government has failed to respond with enough resources, attention and determination to combat the cyberthreat, a Mercury News investigation reveals.
“The U.S. government has not devoted the leadership and energy that this issue needs,” said Paul Kurtz, a former administration homeland and cybersecurity adviser. “It’s been neglected.”
Even as the White House asked last week for $154 million toward a new cybersecurity initiative expected to reach billions of dollars over the next several years, security experts complain the administration remains too focused on the risks of online espionage and information warfare, overlooking the international criminals who are stealing a fortune through the Internet.
This is Part III of a good series on cybercrime. Here are Parts I and II.
Interesting study: “Identity Fraud Trends and Patterns: Building a Data-Based Foundation for Proactive Enforcement,” October 2007. It’s long, but at least read the executive summary. Or, even shorter, this Associated Press story:
Researchers reviewed 517 cases closed by the Secret Service between 2000 and 2006. Two-thirds of the cases were concentrated in the Northeast and South, and there were 933 defendants. The Federal Trade Commission has said about 3 million Americans have their identities stolen annually.
The study found that 42.5 percent of offenders were between the ages of 25 and 34. Another 18 percent were between the ages of 18 and 24. Two-thirds of the identity thieves were male.
Nearly a quarter of the offenders were born outside the United States.
Eighty percent of the cases involved an offender working solo or with a single partner, the report found.
While identity thieves used a wide combination of methods, fewer than 20 percent of the crimes involved the Internet. The most frequently used non-technological method was the rerouting of mail through change of address cards. Other prevalent non-technological methods were mail theft and dumpster diving.
Of the 933 offenders, 609 said they initiated their crime by stealing fragments of personal identifying information, as opposed to stealing entire documents, such as bank cards or driver’s licenses.
Most of the offenses were committed by non-employees who victimized strangers. Employee insiders were the offenders in just one-third of the 517 cases. When an employee did commit identity theft, the offenders were employed in a retail business in two out of every five instances, the report said. Stores, gas stations, car dealerships, casinos, restaurants, hotels, doctors and hospitals were all considered retail operations in the study.
In about a fifth of the cases, the employee worked in the financial services industry.
Here’s an interesting paper from Carnegie Mellon University: “An Inquiry into the Nature and Causes of the Wealth of Internet Miscreants.”
The paper focuses on the large illicit market that specializes in the commoditization of activities in support of Internet-based crime. The main goal of the paper was to understand and measure how these markets function, and discuss the incentives of the various market entities. Using a dataset collected over seven months and comprising over 13 million messages, they were able to categorize the market’s participants, the goods and services advertised, and the asking prices for selected interesting goods.
Really cool stuff.
Unfortunately, the data is extremely noisy and so far the authors have no way to cross-validate it, so it is difficult to make any strong conclusions.
The press focused on just one thing: a discussion of general ways to disrupt the market. Contrary to the claims of the article, the authors have not built any tools to disrupt the markets.
It’s not true that no one worries about terrorists attacking chemical plants, it’s just that our politics seem to leave us unable to deal with the threat.
Toxins such as ammonia, chlorine, propane and flammable mixtures are constantly being produced or stored in the United States as a result of legitimate industrial processes. Chlorine gas is particularly toxic; in addition to bombing a plant, someone could hijack a chlorine truck or blow up a railcar. Phosgene is even more dangerous. According to the Environmental Protection Agency, there are 7,728 chemical plants in the United States where an act of sabotage—or an accident—could threaten more than 1,000 people. Of those, 106 facilities could threaten more than a million people.
The problem of securing chemical plants against terrorism—or even accidents—is actually simple once you understand the underlying economics. Normally, we leave the security of something up to its owner. The basic idea is that the owner of each chemical plant 1) best understands the risks, and 2) is the one who loses out if security fails. Any outsider—i.e., regulatory agency—is just going to get it wrong. It’s the basic free-market argument, and in most instances it makes a lot of sense.
And chemical plants do have security. They have fences and guards (which might or might not be effective). They have fail-safe mechanisms built into their operations. For example, many large chemical companies use hazardous substances like phosgene, methyl isocyanate and ethylene oxide in their plants, but don’t ship them between locations. They minimize the amounts that are stored as process intermediates. In rare cases of extremely hazardous materials, no significant amounts are stored; instead they are only present in pipes connecting the reactors that make them with the reactors that consume them.
This is all good and right, and what free-market capitalism dictates. The problem is, that isn’t enough.
Any rational chemical plant owner will only secure the plant up to its value to him. That is, if the plant is worth $100 million, then it makes no sense to spend $200 million on securing it. If the odds of it being attacked are less than 1 percent, it doesn’t even make sense to spend $1 million on securing it. The math is more complicated than this, because you have to factor in such things as the reputational cost of having your name splashed all over the media after an incident, but that’s the basic idea.
But to society, the cost of an actual attack can be much, much greater. If a terrorist blows up a particularly toxic plant in the middle of a densely populated area, deaths could be in the tens of thousands and damage could be in the hundreds of millions. Indirect economic damage could be in the billions. The owner of the chlorine plant would pay none of these potential costs.
Sure, the owner could be sued. But he’s not at risk for more than the value of his company, and—in any case—he’d probably be smarter to take the chance. Expensive lawyers can work wonders, courts can be fickle, and the government could step in and bail him out (as it did with airlines after Sept. 11). And a smart company can often protect itself by spinning off the risky asset in a subsidiary company, or selling it off completely. The overall result is that our nation’s chemical plants are secured to a much smaller degree than the risk warrants.
In economics, this is called an externality: an effect of a decision not borne by the decision maker. The decision maker in this case, the chemical plant owner, makes a rational economic decision based on the risks and costs to him.
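The arithmetic behind that argument can be made concrete. Here is a minimal sketch in Python; all of the dollar figures and the attack probability are hypothetical, chosen only to mirror the essay’s $100 million example, not drawn from any real plant.

```python
# Illustrative sketch of the externality argument above.
# All figures are hypothetical, chosen only to mirror the essay's example.

plant_value = 100_000_000        # what the plant is worth to its owner
societal_damage = 1_000_000_000  # deaths plus indirect economic damage to society
attack_probability = 0.01        # owner's estimate of an attack in a given period

# A rational owner caps security spending at his own expected loss...
owner_expected_loss = attack_probability * plant_value
# ...but society's expected loss from the same event is far larger.
societal_expected_loss = attack_probability * societal_damage

# The gap is the externality: expected cost the owner's decision
# imposes on everyone else, and which he has no incentive to price in.
externality = societal_expected_loss - owner_expected_loss
print(f"Owner will rationally spend up to   ${owner_expected_loss:,.0f}")
print(f"Society would want spending up to   ${societal_expected_loss:,.0f}")
print(f"Unpriced externality:               ${externality:,.0f}")
```

With these invented numbers, the owner rationally spends no more than $1 million while society would justify $10 million; the $9 million gap is exactly what regulation, subsidy, or direct provision has to cover.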
If we—whether we’re the community living near the chemical plant or the nation as a whole—expect the owner of that plant to spend money for increased security to account for those externalities, we’re going to have to pay for it. And we have three basic ways of doing that. One, we can do it ourselves, stationing government police or military or contractors around the chemical plants. Two, we can pay the owners to do it, subsidizing some sort of security standard.
Or three, we could regulate security and force the companies to pay for it themselves. There’s no free lunch, of course. “We,” as in society, still pay for it in increased prices for whatever the chemical plants are producing, but the cost is paid for by the product’s consumers rather than by taxpayers in general.
Personally, I don’t care very much which method is chosen: that’s politics, not security. But I do know we’ll have to pick one, or some combination of the three. Asking nicely just isn’t going to work. It can’t; not in a free-market economy.
We taxpayers pay for airport security, and not the airlines, because the overall effects of a terrorist attack against an airline are far greater than their effects to the particular airline targeted. We pay for port security because the effects of bringing a large weapon into the country are far greater than the concerns of the port’s owners. And we should pay for chemical plant, train and truck security for exactly the same reasons.
Thankfully, after years of hoping the chemical industry would do it on its own, this April the Department of Homeland Security started regulating chemical plant security. Some complain that the regulations don’t go far enough, but at least it’s a start.
This essay previously appeared on Wired.com.
Excellent three-part series on trends in criminal malware:
When Jackson logged in, the genius of 76service became immediately clear. 76service customers weren’t paying for already-stolen credentials. Instead, 76service sold subscriptions or “projects” to Gozi-infected machines. Usually, projects were sold in 30-day increments because that’s a billing cycle, enough time to guarantee that the person who owns the machine with Gozi on it will have logged in to manage their finances, entering data into forms that could be grabbed.
Subscribers could log in with their assigned user name and password any time during the 30-day project. They’d be met with a screen that told them which of their bots was currently active, and a side bar of management options. For example, they could pull down the latest drops—data deposits that the Gozi-infected machines they subscribed to sent to the servers, like the 3.3 GB one Jackson had found.
A project was like an investment portfolio. Individual Gozi-infected machines were like stocks and subscribers bought a group of them, betting they could gain enough personal information from their portfolio of infected machines to make a profit, mostly by turning around and selling credentials on the black market. (In some cases, subscribers would use a few of the credentials themselves).
Some machines, like some stocks, would underperform and provide little private information. But others would land the subscriber a windfall of private data. The point was to subscribe to several infected machines to balance that risk, the way Wall Street fund managers invest in many stocks to offset losses in one company with gains in another.
[…]
That’s why the subscription prices were steep. “Prices started at $1,000 per machine per project,” says Jackson. With some tinkering and thanks to some loose database configuration, Jackson gained a view into other people’s accounts. He mostly saw subscriptions that bought access to only a handful of machines, rarely more than a dozen.
The $1K figure was for “fresh bots”—new infections that hadn’t been part of a project yet. Used bots that were coming off an expired project were available, but worth less (and thus, cost less) because of the increased likelihood that personal information gained from that machine had already been sold. Customers were urged to act quickly to get the freshest bots available.
This was another advantage for the seller. Providing the self-service interface freed up the sellers to create ancillary services. 76service was extremely customer-focused. “They were there to give you services that made it a good experience,” Jackson says. You want us to clean up the reports for you? Sure, for a small fee. You want a report on all the credentials from one bank in your drop? Hundred bucks, please. For another $150 a month, we’ll create secure remote drops for you. Alternative packaging and delivery options? We can do that. Nickel and dime. Nickel and dime.
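The “portfolio” framing in the excerpt above is just diversification, and it can be sketched numerically. The payoff distribution below is entirely invented (most bots yield little; a few yield a windfall); the point is only that spreading a subscription across a dozen machines keeps the average payoff the same while shrinking the variance.

```python
# A rough sketch of the "portfolio" effect described above: subscribing to
# many infected machines smooths out the per-machine variance in payoff.
# The payoff numbers are invented purely for illustration.
import random
import statistics

random.seed(42)

def bot_payoff():
    """Payoff from one infected machine: usually small, occasionally a windfall."""
    return 5000 if random.random() < 0.1 else 100

def project_payoff(num_bots):
    """Total payoff from a project subscribing to num_bots machines."""
    return sum(bot_payoff() for _ in range(num_bots))

# Simulate many projects of each size, normalized to payoff per bot.
small = [project_payoff(1) for _ in range(10_000)]
large = [project_payoff(12) / 12 for _ in range(10_000)]

# Means are similar, but the spread shrinks as the subscriber diversifies.
print("1 bot:   mean", round(statistics.mean(small)),
      "stdev", round(statistics.stdev(small)))
print("12 bots: mean", round(statistics.mean(large)),
      "stdev", round(statistics.stdev(large)))
```

This is the same logic as an index fund: identical expected return per unit, much lower chance of a total bust, which is why the sellers could charge steep per-machine prices.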
And about banks not caring:
As much as the HangUp Team has relied on distributed pain for its success, financial institutions have relied on transferred risk to keep the Internet crime problem from becoming a consumer cause and damaging their businesses. So far, it has been cheaper to follow regulations enough to pass audits and then pay for the fraud rather than implement more serious security. “If you look at the volume of loss versus revenue, it’s not horribly bad yet,” says Chris Hoff, with a nod to the criminal hacker’s strategy of distributed pain. “The banks say, ‘Regulations say I need to do these seven things, so I do them and let’s hope the technology to defend against this catches up.'”
“John,” the security executive at the bank and one of the only security professionals from financial services who agreed to speak for this story, says: “If you audited a financial institution, you wouldn’t find many out of compliance. From a legal perspective, banks can spin that around and say there’s nothing else we could do.”
The banks know how much data Lance James at Secure Science is monitoring; some of them are his clients. The researcher with expertise on the HangUp Team calls consumers’ ability to transfer funds online “the dumbest thing I’ve ever seen. You can’t walk into the branch of a bank with a mask on and no ID and make a transfer. So why is it okay online?”
And yet banks push online banking to customers with one hand while the other hand pushes problems like Gozi away, into acceptable loss budgets and insurance—transferred risk.
As long as consumers don’t raise a fuss, and thus far they haven’t in any meaningful way, the banks have little to fear from their strategies.
But perhaps the only reason consumers don’t raise a fuss is because the banks have both overstated the safety and security of online banking and downplayed negative events around it, like the existence of Gozi and 76service.
The whole thing is worth reading.
Magic fingers and an unerring eye gave “Hologram Tam,” one of the best forgers in Europe, the skills to produce counterfeit banknotes so authentic that when he was arrested nearly £700,000 worth were in circulation.
Thomas McAnea, 58, who was jailed for six years and four months yesterday, was the kingpin of a professional operation based in Glasgow that, according to police, had the capacity to produce £2 million worth of fake notes a day, enough potentially to destabilise the British economy. More may remain out there undetected.
[…]
“Some of Hologram Tam’s money is still out there. It’s that good that if I gave you one of his notes, you wouldn’t know it,” a police source said.
The detectives also found templates for other forgeries including passports, driving licences, ID cards, bank statements, utility bills, MoT certificates, postage and saving stamps and TV licences.
The Storm worm first appeared at the beginning of the year, hiding in e-mail attachments with the subject line: “230 dead as storm batters Europe.” Those who opened the attachment became infected, their computers joining an ever-growing botnet.
Although it’s most commonly called a worm, Storm is really more: a worm, a Trojan horse and a bot all rolled into one. It’s also the most successful example we have of a new breed of worm, and I’ve seen estimates that between 1 million and 50 million computers have been infected worldwide.
Old style worms—Sasser, Slammer, Nimda—were written by hackers looking for fame. They spread as quickly as possible (Slammer infected 75,000 computers in 10 minutes) and garnered a lot of notice in the process. The onslaught made it easier for security experts to detect the attack, but required a quick response by antivirus companies, sysadmins and users hoping to contain it. Think of this type of worm as an infectious disease that shows immediate symptoms.
Worms like Storm are written by hackers looking for profit, and they’re different. These worms spread more subtly, without making noise. Symptoms don’t appear immediately, and an infected computer can sit dormant for a long time. If it were a disease, it would be more like syphilis, whose symptoms may be mild or disappear altogether, but which will eventually come back years later and eat your brain.
Storm represents the future of malware. Let’s look at its behavior:
This technique has other advantages, too. Companies that monitor net activity can detect traffic anomalies with a centralized C2 point, but distributed C2 doesn’t show up as a spike. Communications are much harder to detect.
One standard method of tracking root C2 servers is to put an infected host through a memory debugger and figure out where its orders are coming from. This won’t work with Storm: An infected host may only know about a small fraction of infected hosts—25-30 at a time—and those hosts are an unknown number of hops away from the primary C2 servers.
And even if a C2 node is taken down, the system doesn’t suffer. Like a hydra with many heads, Storm’s C2 structure is distributed.
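The resilience claimed in the last few paragraphs is easy to demonstrate with a toy model: give every infected host a small random set of peers, then knock out a batch of nodes and check how much of the network is still reachable. Everything here (host count, peer-list size, takedown size) is invented for illustration and does not reflect Storm’s actual protocol.

```python
# Toy model of the distributed C2 structure described above: each infected
# host knows only ~25 random peers, so removing any batch of nodes barely
# dents overall connectivity. Parameters are invented for illustration.
import random

random.seed(1)
NUM_HOSTS = 1000
PEERS_PER_HOST = 25  # each host knows only a small fraction of the botnet

# Build the peer graph: every host gets a random peer list.
peers = {
    h: random.sample([x for x in range(NUM_HOSTS) if x != h], PEERS_PER_HOST)
    for h in range(NUM_HOSTS)
}

def reachable(start, removed):
    """Hosts reachable from `start` when the hosts in `removed` are taken down."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for p in peers[node]:
            if p not in removed and p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

full = len(reachable(0, set()))
# Take down 50 random nodes; the rest of the botnet stays connected.
removed = set(random.sample(range(1, NUM_HOSTS), 50))
after = len(reachable(0, removed))
print(f"Reachable before takedown: {full}; after removing 50 nodes: {after}")
```

The hydra metaphor falls out of the numbers: with 25 random links per host, there is no single node whose removal partitions the network, which is why seizing individual C2 servers accomplishes so little.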
Not that we really have any idea how to mess with Storm. Storm has been around for almost a year, and the antivirus companies are pretty much powerless to do anything about it. Inoculating infected machines individually is simply not going to work, and I can’t imagine forcing ISPs to quarantine infected hosts. A quarantine wouldn’t work in any case: Storm’s creators could easily design another worm—and we know that users can’t keep themselves from clicking on enticing attachments and links.
Redesigning the Microsoft Windows operating system would work, but that’s ridiculous to even suggest. Creating a counterworm would make a great piece of fiction, but it’s a really bad idea in real life. We simply don’t know how to stop Storm, except to find the people controlling it and arrest them.
Unfortunately we have no idea who controls Storm, although there’s some speculation that they’re Russian. The programmers are obviously very skilled, and they’re continuing to work on their creation.
Oddly enough, Storm isn’t doing much, so far, except gathering strength. Aside from continuing to infect other Windows machines and attacking particular sites that are attacking it, Storm has only been implicated in some pump-and-dump stock scams. There are rumors that Storm is leased out to other criminal groups. Other than that, nothing.
Personally, I’m worried about what Storm’s creators are planning for Phase II.
This essay originally appeared on Wired.com.
EDITED TO ADD (10/17): Storm is being partitioned, presumably so parts can be sold off. If that’s true, we should expect more malicious activity out of Storm in the future; anyone buying a botnet will want to use it.
Slashdot thread on Storm.
EDITED TO ADD (10/22): Here’s research that suggests Storm is shrinking.
EDITED TO ADD (10/24): Another article about Storm striking back at security researchers.
The U.S. has a patchwork of deposit laws on soft drink bottles and cans. Most states have no deposit, but some states—Michigan, for example—have deposits. The cans are the same, so you can make ten cents by buying a can in one state and then returning it for the deposit in Michigan.
Ten people have been arrested for making more than $500,000 doing this:
They ran grocery stores such as Save Plus Superstore in Pontiac, The Larosa Market in Sylvan Lake and Value Foods in Ypsilanti; police also raided The Farmer John, Savemart Food Center and Americana Foods, all three in Detroit.
Investigators alleged that millions of non-redeemable out-of-state cans were collected, crushed, packaged in plastic bags and sold at a discount to merchants who then redeemed them.
Bulk redemption payments from the state are based on weight.
Nice arbitrage scam.
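The scale of the arbitrage is worth a back-of-the-envelope check. The deposit and proceeds figures come from the story above; the per-can weight is my own rough assumption, not a figure from the article.

```python
# Back-of-the-envelope arithmetic for the deposit scam described above.
DEPOSIT_CENTS = 10           # Michigan deposit per can (from the story)
TOTAL_TAKE_DOLLARS = 500_000 # reported proceeds (from the story)

# Integer cents avoid floating-point rounding in the division.
cans_redeemed = TOTAL_TAKE_DOLLARS * 100 // DEPOSIT_CENTS
print(f"About {cans_redeemed:,} out-of-state cans redeemed")  # 5,000,000

# Since bulk payments are by weight, the volume is easier to grasp in tonnes.
CAN_WEIGHT_KG = 0.015  # ~15 g per empty aluminum can (my assumption)
tonnes = cans_redeemed * CAN_WEIGHT_KG / 1000
print(f"Roughly {tonnes:,.0f} metric tons of crushed aluminum")  # 75
```

Five million cans, tens of tons of crushed aluminum: the weight-based bulk redemption is what made laundering that volume practical at all.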