Entries Tagged "control"

Lock-In

Buying an iPhone isn’t the same as buying a car or a toaster. Your iPhone comes with a complicated list of rules about what you can and can’t do with it. You can’t install unapproved third-party applications on it. You can’t unlock it and use it with the cellphone carrier of your choice. And Apple is serious about these rules: A software update released in September 2007 erased unauthorized software and — in some cases — rendered unlocked phones unusable.

“Bricked” is the term, and Apple isn’t the least bit apologetic about it.

Computer companies want more control over the products they sell you, and they’re resorting to increasingly draconian security measures to get that control. The reasons are economic.

Control allows a company to limit competition for ancillary products. With Mac computers, anyone can sell software that does anything. But Apple gets to decide who can sell what on the iPhone. It can foster competition when it wants, and reserve itself a monopoly position when it wants. And it can dictate terms to any company that wants to sell iPhone software and accessories.

This increases Apple’s bottom line. But the primary benefit of all this control for Apple is that it increases lock-in. “Lock-in” is an economic term for the difficulty of switching to a competing product. For some products — cola, for example — there’s no lock-in. I can drink a Coke today and a Pepsi tomorrow: no big deal. But for other products, it’s harder.

Switching word processors, for example, requires installing a new application, learning a new interface and a new set of commands, converting all the files (which may not convert cleanly) and custom software (which will certainly require rewriting), and possibly even buying new hardware. If Coke stops satisfying me for even a moment, I’ll switch: something Coke learned the hard way in 1985 when it changed the formula and started marketing New Coke. But my word processor has to really piss me off for a good long time before I’ll even consider going through all that work and expense.

Lock-in isn’t new. It’s why all gaming-console manufacturers make sure that their game cartridges don’t work on any other console, and how they can price the consoles at a loss and make the profit up by selling games. It’s why Microsoft never wants to open up its file formats so other applications can read them. It’s why music purchased from Apple for your iPod won’t work on other brands of music players. It’s why every U.S. cellphone company fought against phone number portability. It’s why Facebook sues any company that tries to scrape its data and put it on a competing website. It explains airline frequent flyer programs, supermarket affinity cards and the new My Coke Rewards program.

With enough lock-in, a company can protect its market share even as it reduces customer service, raises prices, refuses to innovate and otherwise abuses its customer base. It should be no surprise that this sounds like pretty much every experience you’ve had with IT companies: Once the industry discovered lock-in, everyone started figuring out how to get as much of it as they can.

Economists Carl Shapiro and Hal Varian even proved that the value of a software company is the total lock-in. Here’s the logic: Assume, for example, that you have 100 people in a company using MS Office at a cost of $500 each. If it cost the company less than $50,000 to switch to Open Office, they would. If it cost the company more than $50,000, Microsoft would increase its prices.
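In symbols (my restatement of the example above, not Shapiro and Varian’s formal model): with n seats at a per-seat price p and a total switching cost C, customers stay put as long as n·p ≤ C, so a rational vendor prices right up to that bound, and the company’s value tracks the total lock-in.

    \[
      n \, p \;\le\; C_{\text{switch}},
      \qquad\text{here}\quad
      100 \times \$500 \;=\; \$50{,}000 \;=\; C_{\text{switch}} .
    \]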

Mostly, companies increase their lock-in through security mechanisms. Sometimes patents preserve lock-in, but more often it’s copy protection, digital rights management (DRM), code signing or other security mechanisms. These security features aren’t what we normally think of as security: They don’t protect us from some outside threat, they protect the companies from us.

Microsoft has been planning this sort of control-based security mechanism for years. First called Palladium and now NGSCB (Next-Generation Secure Computing Base), the idea is to build a control-based security system into the computing hardware. The details are complicated, but the results range from only allowing a computer to boot from an authorized copy of the OS to prohibiting the user from accessing “unauthorized” files or running unauthorized software. The competitive benefits to Microsoft are enormous.
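To make that concrete, here is a minimal sketch (Python, with an entirely hypothetical allowlist) of the kind of gate such control-based security puts in front of software: refuse to run anything the vendor hasn’t blessed. NGSCB’s real design involved hardware attestation and signed code, which this toy deliberately doesn’t model.

    import hashlib
    import sys

    # Hypothetical vendor-published allowlist: SHA-256 digests of the only
    # binaries this platform agrees to run. In an NGSCB-style design this
    # check would live in hardware-backed firmware, not in user space.
    AUTHORIZED_DIGESTS = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def is_authorized(path):
        """Return True only if the binary's SHA-256 digest is vendor-approved."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest() in AUTHORIZED_DIGESTS

    if __name__ == "__main__":
        binary = sys.argv[1]
        if not is_authorized(binary):
            # Note who this protects: the decision belongs to whoever
            # controls the allowlist, not to the machine's owner.
            sys.exit("refusing to run unauthorized software: " + binary)
        print(binary + " is vendor-approved; launching...")

The point of the sketch is the control relationship: whoever edits the allowlist decides what the machine will do, and that party need not be its owner.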

Of course, that’s not how Microsoft advertises NGSCB. The company has positioned it as a security measure, protecting users from worms, Trojans and other malware. But control does not equal security; and this sort of control-based security is very difficult to get right, and sometimes makes us more vulnerable to other threats. Perhaps this is why Microsoft is quietly killing NGSCB — we’ve gotten BitLocker, and we might get some other security features down the line — despite the huge investment hardware manufacturers made when incorporating special security hardware into their motherboards.

In my last column, I talked about the security-versus-privacy debate, and how it’s actually a debate about liberty versus control. Here we see the same dynamic, but in a commercial setting. By confusing control and security, companies are able to force control measures that work against our interests by convincing us they are doing it for our own safety.

As for Apple and the iPhone, I don’t know what they’re going to do. On the one hand, there’s this analyst report that claims there are over a million unlocked iPhones, costing Apple between $300 million and $400 million in revenue. On the other hand, Apple is planning to release a software development kit this month, reversing its earlier restriction and allowing third-party vendors to write iPhone applications. Apple will attempt to keep control through a secret application key that will be required by all “official” third-party applications, but of course it’s already been leaked.

And the security arms race goes on …

This essay previously appeared on Wired.com.

EDITED TO ADD (2/12): Slashdot thread.

And critical commentary, which is oddly political:

This isn’t lock-in, it’s called choosing a product that meets your needs. If you don’t want to be tied to a particular phone network, don’t buy an iPhone. If installing third-party applications (between now and the end of February, when officially-sanctioned ones will start to appear) is critically important to you, don’t buy an iPhone.

It’s one thing to grumble about an otherwise tempting device not supporting some feature you would find useful; it’s another entirely to imply that this represents anti-libertarian lock-in. The fact remains, you are free to buy one of the many other devices on the market that existed before there ever was an iPhone.

Actually, lock-in is one of the factors you have to consider when choosing a product to meet your needs. It’s not one thing or the other. And lock-in is certainly not “anti-libertarian.” Lock-in is what you get when you have an unfettered free market competing for customers; it’s a libertarian utopia. Government regulations that limit lock-in tactics — something I think would be very good for society — are what’s anti-libertarian.

Here’s a commentary on that previous commentary. This is some good commentary, too.

Posted on February 12, 2008 at 6:08 AM

How the MPAA Might Enforce Copyright on the Internet

Interesting speculation from Nicholas Weaver:

All that is necessary is that the MPAA or their contractor automatically spiders for torrents. When it finds torrents, it connects to each torrent with manipulated clients. The client would first transfer enough content to verify copyright, and then attempt to map the participants in the Torrent.

Now the MPAA has a “map” of the participants, a graph of all clients of a particular stream. Simply send this as an automated message to the ISP saying “This current graph is bad, block it”. All the ISP has to do is put in a set of short lived (10 minute) router ACLs which block all pairs that cross its network, killing all traffic for that torrent on the ISP’s network. By continuing to spider the Torrent, the MPAA can find new users as they are added and dropped, updating the map to the ISP in near-real-time.

Note that this requires no wiretapping, and nicely minimizes false positives.
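Here is a minimal sketch of the blocking half of that scheme (Python; the ISP address ranges and the swarm snapshot are invented, and the rule format is only illustrative, not real router syntax): given the participant map the spider assembled, emit short-lived deny rules for every peer pair that crosses the ISP’s network.

    from ipaddress import ip_address, ip_network
    from itertools import combinations

    # Hypothetical address space of one cooperating ISP.
    ISP_NETS = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]

    def in_isp(addr):
        return any(ip_address(addr) in net for net in ISP_NETS)

    def acl_rules(swarm_peers, ttl_minutes=10):
        """Emit short-lived deny rules for every peer pair touching the ISP.

        swarm_peers is the participant map assembled by spidering one torrent.
        """
        rules = []
        for a, b in combinations(sorted(swarm_peers), 2):
            if in_isp(a) or in_isp(b):
                rules.append("deny %s <-> %s (expire %dm)" % (a, b, ttl_minutes))
        return rules

    # An invented swarm snapshot; in Weaver's scheme this comes from a
    # manipulated BitTorrent client that joined the swarm and verified content.
    swarm = {"203.0.113.7", "198.51.100.42", "192.0.2.99"}
    for rule in acl_rules(swarm):
        print(rule)

Re-running this on each updated map as the spider re-crawls the swarm gives the near-real-time updates described above; the short TTL keeps stale pairs from staying blocked forever.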

Debate on the idea here.

Posted on February 11, 2008 at 1:24 PM

Security vs. Privacy

If there’s a debate that sums up post-9/11 politics, it’s security versus privacy. Which is more important? How much privacy are you willing to give up for security? Can we even afford privacy in this age of insecurity? Security versus privacy: It’s the battle of the century, or at least its first decade.

In a Jan. 21 New Yorker article, Director of National Intelligence Michael McConnell discusses a proposed plan to monitor all — that’s right, all — internet communications for security purposes, an idea so extreme that the word “Orwellian” feels too mild.

The article (now online here) contains this passage:

In order for cyberspace to be policed, internet activity will have to be closely monitored. Ed Giorgio, who is working with McConnell on the plan, said that would mean giving the government the authority to examine the content of any e-mail, file transfer or Web search. “Google has records that could help in a cyber-investigation,” he said. Giorgio warned me, “We have a saying in this business: ‘Privacy and security are a zero-sum game.'”

I’m sure they have that saying in their business. And it’s precisely why, when people in their business are in charge of government, it becomes a police state. If privacy and security really were a zero-sum game, we would have seen mass immigration into the former East Germany and modern-day China. While it’s true that police states like those have less street crime, no one argues that their citizens are fundamentally more secure.

We’ve been told we have to trade off security and privacy so often — in debates on security versus privacy, writing contests, polls, reasoned essays and political rhetoric — that most of us don’t even question the fundamental dichotomy.

But it’s a false one.

Security and privacy are not opposite ends of a seesaw; you don’t have to accept less of one to get more of the other. Think of a door lock, a burglar alarm and a tall fence. Think of guns, anti-counterfeiting measures on currency and that dumb liquid ban at airports. Security affects privacy only when it’s based on identity, and there are limitations to that sort of approach.

Since 9/11, approximately three things have potentially improved airline security: reinforcing the cockpit doors, passengers realizing they have to fight back and — possibly — sky marshals. Everything else — all the security measures that affect privacy — is just security theater and a waste of effort.

By the same token, many of the anti-privacy “security” measures we’re seeing — national ID cards, warrantless eavesdropping, massive data mining and so on — do little to improve, and in some cases harm, security. And government claims of their success are either wrong or describe successes against fake threats.

The debate isn’t security versus privacy. It’s liberty versus control.

You can see it in comments by government officials: “Privacy no longer can mean anonymity,” says Donald Kerr, principal deputy director of national intelligence. “Instead, it should mean that government and businesses properly safeguard people’s private communications and financial information.” Did you catch that? You’re expected to give up control of your privacy to others, who — presumably — get to decide how much of it you deserve. That’s what loss of liberty looks like.

It should be no surprise that people choose security over privacy: 51 to 29 percent in a recent poll. Even if you don’t subscribe to Maslow’s hierarchy of needs, it’s obvious that security is more important. Security is vital to survival, not just of people but of every living thing. Privacy is unique to humans, but it’s a social need. It’s vital to personal dignity, to family life, to society — to what makes us uniquely human — but not to survival.

If you set up the false dichotomy, of course people will choose security over privacy — especially if you scare them first. But it’s still a false dichotomy. There is no security without privacy. And liberty requires both security and privacy. The famous quote attributed to Benjamin Franklin reads: “Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety.” It’s also true that those who would give up privacy for security are likely to end up with neither.

This essay originally appeared on Wired.com.

Posted on January 29, 2008 at 5:21 AM

Security in Ten Years

This is a conversation between Marcus Ranum and me. It will appear in Information Security Magazine this month.


Bruce Schneier: Predictions are easy and difficult. Roy Amara of the Institute for the Future once said: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Moore’s Law is easy: In 10 years, computers will be 100 times more powerful. My desktop will fit into my cell phone, we’ll have gigabit wireless connectivity everywhere, and personal networks will connect our computing devices and the remote services we subscribe to. Other aspects of the future are much more difficult to predict. I don’t think anyone can predict what the emergent properties of 100x computing power will bring: new uses for computing, new paradigms of communication. A 100x world will be different, in ways that will be surprising.
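For what it’s worth, the 100x figure is just Moore’s Law compounded (my arithmetic, assuming the classic doubling every 18 months):

    \[
      2^{\,120 \text{ months} \,/\, 18 \text{ months per doubling}}
      \;=\; 2^{6.7} \;\approx\; 100 .
    \]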

But throughout history and into the future, the one constant is human nature. There hasn’t been a new crime invented in millennia. Fraud, theft, impersonation and counterfeiting are perennial problems that have been around since the beginning of society. During the last 10 years, these crimes have migrated into cyberspace, and over the next 10, they will migrate into whatever computing, communications and commerce platforms we’re using.

The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.

I don’t see anything by 2017 that will fundamentally alter this. Do you?


Marcus Ranum: I think you’re right; at a meta-level, the problems are going to stay the same. What’s shocking and disappointing to me is that our responses to those problems also remain the same, in spite of the obvious fact that they aren’t effective. It’s 2007 and we haven’t seemed to accept that:

  • You can’t turn shovelware into reliable software by patching it a whole lot.
  • You shouldn’t mix production systems with non-production systems.
  • You actually have to know what’s going on in your networks.
  • If you run your computers with an open execution runtime model you’ll always get viruses, spyware and Trojan horses.
  • You can pass laws about locking barn doors after horses have left, but it won’t put the horses back in the barn.
  • Security has to be designed in, as part of a system plan for reliability, rather than bolted on afterward.

The list could go on for several pages, but it would be too depressing. It would be “Marcus’ list of obvious stuff that everybody knows but nobody accepts.”

You missed one important aspect of the problem: By 2017, computers will be even more important to our lives, economies and infrastructure.

If you’re right that crime remains a constant, and I’m right that our responses to computer security remain ineffective, 2017 is going to be a lot less fun than 2007 was.

I’ve been pretty dismissive of the concepts of cyberwar and cyberterror. That dismissal was mostly motivated by my observation that the patchworked and kludgy nature of most computer systems acts as a form of defense in its own right, and that real-world attacks remain more cost-effective and practical for terror purposes.

I’d like to officially modify my position somewhat: I believe it’s increasingly likely that we’ll suffer catastrophic failures in critical infrastructure systems by 2017. It probably won’t be terrorists that do it, though. More likely, we’ll suffer some kind of horrible outage because a critical system was connected to a non-critical system that was connected to the Internet so someone could get to MySpace — and that ancillary system gets a piece of malware. Or it’ll be some incomprehensibly complex software, layered with Band-Aids and patches, that topples over when some “merely curious” hacker pushes the wrong e-button. We’ve got some bad-looking trend lines; all the indicators point toward a system that is more complex, less well-understood and more interdependent. With infrastructure like that, who needs enemies?

You’re worried criminals will continue to penetrate into cyberspace, and I’m worried complexity, poor design and mismanagement will be there to meet them.


Bruce Schneier: I think we’ve already suffered that kind of critical systems failure. The August 2003 blackout that covered much of the northeastern United States and Canada — 50 million people — was caused by a software bug.

I don’t disagree that things will continue to get worse. Complexity is the worst enemy of security, and the Internet — and the computers and processes connected to it — is getting more complex all the time. So things are getting worse, even though security technology is improving. One could say those critical insecurities are another emergent property of the 100x world of 2017.

Yes, IT systems will continue to become more critical to our infrastructure — banking, communications, utilities, defense, everything.

By 2017, the interconnections will be so critical that it will probably be cost-effective — and low-risk — for a terrorist organization to attack over the Internet. I also deride talk of cyberterror today, but I don’t think I will in another 10 years.

While the trends of increased complexity and poor management don’t look good, there is another trend that points to more security — but neither you nor I is going to like it. That trend is IT as a service.

By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.


Marcus Ranum: You’re right about the shift toward services — it’s the ultimate way to lock in customers.

If you can make it difficult for the customer to get his data back after you’ve held it for a while, you can effectively prevent the customer from ever leaving. And of course, customers will be told “trust us, your data is secure,” and they’ll take that for an answer. The back-end systems that will power the future of utility computing are going to be just as full of flaws as our current systems. Utility computing will also completely fail to address the problem of transitive trust unless people start shifting to a more reliable endpoint computing platform.

That’s the problem with where we’re heading: the endpoints are not going to get any better. People are attracted to appliances because they get around the headache of system administration (which, in today’s security environment, equates to “endless patching hell”), but underneath the slick surface of the appliance we’ll have the same insecure nonsense we’ve got with general-purpose desktops. In fact, the development of appliances running general-purpose operating systems really does raise the possibility of a software monoculture. By 2017, do you think system engineering will progress to the point where we won’t see a vendor release a new product and instantly create an installed base of 1 million-plus users with root privileges? I don’t, and that scares me.

So if you’re saying the trend is to continue putting all our eggs in one basket and blithely trusting that basket, I agree.

Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and Web surf. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.


Bruce Schneier: You’re right about the endpoints not getting any better. I’ve written again and again how measures like two-factor authentication aren’t going to make electronic banking any more secure. The problem is that if someone has stuck a Trojan on your computer, it doesn’t matter how many ways you authenticate to the banking server; the Trojan is going to perform illicit transactions after you authenticate.
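A toy model of the problem (Python; the bank, the credentials and the API are all made up): authentication, however strong, only gates the start of the session, and anything running on the endpoint inherits the authenticated session.

    class BankSession:
        """Toy online-banking session: strong auth up front, then a bearer session."""

        def __init__(self, user, password, otp):
            # Imagine arbitrarily strong multi-factor checks here.
            if not (password == "correct horse" and otp == "123456"):
                raise PermissionError("authentication failed")
            self.user = user

        def transfer(self, to_account, amount):
            # The server cannot tell whether the user or a Trojan issued this:
            # both calls arrive over the same authenticated session.
            return "sent $%d to %s on behalf of %s" % (amount, to_account, self.user)

    # The legitimate user authenticates with two factors...
    session = BankSession("alice", "correct horse", "123456")
    print(session.transfer("savings-001", 50))      # the user's transfer

    # ...and a Trojan in the same process reuses the very same session.
    print(session.transfer("attacker-999", 5000))   # the Trojan's transfer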

It’s the same with a lot of our secure protocols. SSL, SSH, PGP and so on all assume the endpoints are secure, and the threat is in the communications system. But we know the real risks are the endpoints.

And a misguided attempt to solve this is going to dominate computing by 2017. I mentioned software-as-a-service, which you point out is really a trick that allows businesses to lock up their customers for the long haul. I pointed to the iPhone, whose draconian rules about who can write software for that platform accomplish much the same thing. We could also point to Microsoft’s Trusted Computing, which is being sold as a security measure but is really another lock-in mechanism designed to keep users from switching to “unauthorized” software or OSes.

I’m reminded of the post-9/11 anti-terrorist hysteria — we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.

Computing is heading in the same direction, although this time it is industry that wants control over its users. They’re going to sell it to us as a security system — they may even have convinced themselves it will improve security — but it’s fundamentally a control system. And in the long run, it’s going to hurt security.

Imagine we’re living in a world of Trustworthy Computing, where no software can run on your Windows box unless Microsoft approves it. That brain drain you talk about won’t be a problem, because security won’t be in the hands of the user. Microsoft will tout this as the end of malware, until some hacker figures out how to get his software approved. That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.

By then, though, we’ll be ready to start building real security. As you pointed out, networks will be so embedded into our critical infrastructure — and there’ll probably have been at least one real disaster by then — that we’ll have no choice. The question is how much we’ll have to dismantle and build over to get it right.


Marcus Ranum: I agree regarding your gloomy view of the future. It’s ironic that the counterculture “hackers” have enabled (by providing an excuse) today’s run-patch-run-patch-reboot software environment and tomorrow’s software Stalinism.

I don’t think we’re going to start building real security. Because real security is not something you build — it’s something you get when you leave out all the other garbage as part of your design process. Purpose-designed and purpose-built software is more expensive to build, but cheaper to maintain. The prevailing wisdom about software return on investment doesn’t factor in patching and patch-related downtime, because if it did, the numbers would stink. Meanwhile, I’ve seen purpose-built Internet systems run for years without patching because they didn’t rely on bloated components. I doubt industry will catch on.

The future will be captive data running on purpose-built back-end systems — and it won’t be a secure future, because turning your data over always decreases your security. Few possess the understanding of complexity and good design principles necessary to build reliable or secure systems. So, effectively, outsourcing — or other forms of making security someone else’s problem — will continue to seem attractive.

That doesn’t look like a very rosy future to me. It’s a shame, too, because getting this stuff correct is important. You’re right that there are going to be disasters in our future.

I think they’re more likely to be accidents where the system crumbles under the weight of its own complexity, rather than hostile action. Will we even be able to figure out what happened, when it happens?

Folks, the captains have illuminated the “Fasten your seat belts” sign. We predict bumpy conditions ahead.

EDITED TO ADD (12/4): Commentary on the point/counterpoint.

Posted on December 3, 2007 at 12:14 PM

Burmese Government Seizing UN Hard Drives

Wow:

Burma’s ruling junta is attempting to seize United Nations computers containing information on opposition activists in the latest stage of its brutal crackdown on pro-democracy demonstrations, The Times has learnt.

[…]

The discs contain information that could help the dictatorship to identify key members of the opposition movement, many of whom have gone underground. UN staff spent much of the weekend deleting information.

Another reason why law enforcement’s demand that e-mails be traceable is a bad idea.

Posted on October 9, 2007 at 1:14 PM

The Economist on Privacy and Surveillance

Great article from The Economist on data collection, privacy, surveillance, and the future.

Here’s the conclusion:

If the erosion of individual privacy began long before 2001, it has accelerated enormously since. And by no means always to bad effect: suicide-bombers, by their very nature, may not be deterred by a CCTV camera (even a talking one), but security wonks say many terrorist plots have been foiled, and lives saved, through increased eavesdropping, computer profiling and “sneak and peek” searches. But at what cost to civil liberties?

Privacy is a modern “right.” It is not even mentioned in the 18th-century revolutionaries’ list of demands. Indeed, it was not explicitly enshrined in international human-rights laws and treaties until after the second world war. Few people outside the civil-liberties community seem to be really worried about its loss now.

That may be because electronic surveillance has not yet had a big impact on most people’s lives, other than (usually) making it easier to deal with officialdom. But with the collection and centralisation of such vast amounts of data, the potential for abuse is huge and the safeguards paltry.

Ross Anderson, a professor at Cambridge University in Britain, has compared the present situation to a “boiled frog” — which fails to jump out of the saucepan as the water gradually heats. If liberty is eroded slowly, people will get used to it. He added a caveat: it was possible the invasion of privacy would reach a critical mass and prompt a revolt.

If there is not much sign of that in Western democracies, this may be because most people rightly or wrongly trust their own authorities to fight the good fight against terrorism, and avoid abusing the data they possess. The prospect is much scarier in countries like Russia and China, which have embraced capitalist technology and the information revolution without entirely exorcising the ethos of an authoritarian state where dissent, however peaceful, is closely monitored.

On the face of things, the information age renders impossible an old-fashioned, file-collecting dictatorship, based on a state monopoly of communications. But imagine what sort of state may emerge as the best brains of a secret police force — a force whose house culture treats all dissent as dangerous — perfect the art of gathering and using information on massive computer banks, not yellowing paper.

Posted on October 2, 2007 at 11:14 AM

Chinese National Firewall Isn't All that Effective

Interesting research:

The study, carried out by graduate student Earl Barr and colleagues in the computer science department of UC Davis and the University of New Mexico, exploited the workings of the Chinese firewall to investigate its effectiveness.

Unlike the authorities in many other nations, the Chinese authorities do not simply block webpages that discuss banned subjects such as the Tiananmen Square massacre.

Instead, the technology deployed by the Chinese government scans data flowing across its section of the net for banned words or web addresses.

When the filtering system spots a banned term, it sends instructions to the source server and destination PC to stop the flow of data.

Mr Barr and colleagues manipulated this to see how far inside China’s net messages containing banned terms could reach before the shutdown instructions were sent.

The team used words taken from the Chinese version of Wikipedia to load the data streams that were then despatched into China’s network. If a data stream was stopped, a technique known as “latent semantic analysis” was used to find related words to see if they too were blocked.

The researchers found that the blocking did not happen at the edge of China’s network but often was done when the packets of loaded data had penetrated deep inside.

Blocked were terms related to the Falun Gong movement, Tiananmen Square protest groups, Nazi Germany and democracy.

On about 28% of the paths into China’s net tested by the researchers, blocking failed altogether, suggesting that web users could browse unencumbered at least some of the time.

Filtering and blocking was “particularly erratic” when lots of China’s web users were online, said the researchers.
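The probing technique itself is simple to sketch (Python; the host and keyword are placeholders, and the real study also measured where along the path the resets originated): send a request carrying a banned term and classify whether the connection is answered or reset.

    import socket

    def probe(host, keyword, port=80, timeout=5.0):
        """Send an HTTP request containing `keyword` and classify the outcome."""
        request = ("GET /?q=%s HTTP/1.1\r\n"
                   "Host: %s\r\nConnection: close\r\n\r\n" % (keyword, host))
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(request.encode())
                data = s.recv(4096)
                return "answered" if data else "closed without data"
        except ConnectionResetError:
            # Keyword filters are reported to inject TCP resets at both ends.
            return "reset (likely filtered)"
        except socket.timeout:
            return "timed out"

    # Placeholder probe: a server inside the filtered network, a banned term.
    print(probe("example.cn", "falun"))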

Another article.

Posted on September 14, 2007 at 7:52 AM

Australian Porn Filter Cracked

The headline is all you need to know:

Teen cracks AU$84 million porn filter in 30 minutes

(AU$84 million is $69.5 million U.S.; that’s real money.)

Remember that the issue isn’t that one smart kid can circumvent the censorship software, it’s that one smart kid — maybe this one, maybe another one — can write a piece of shareware that allows everyone to circumvent the censorship software.

It’s the same with DRM; technical measures just aren’t going to work.

Posted on August 30, 2007 at 12:50 PM

Ubiquity of Communication

Read this essay by Randy Farmer, a pioneer of virtual online worlds, explaining Disney’s ToonTown, an online world for children.

Designers of online worlds for children wanted to severely restrict the communication that users could have with each other, lest somebody say something that’s inappropriate for children to hear.

Randy discusses various approaches to this problem that were tried over the years. The ToonTown solution was to restrict users to something called “Speedchat,” a menu of pre-constructed sentences, all innocuous. They also gave users the ability to conduct unrestricted conversations with each other, provided they both knew a secret code string. The designers presumed the code strings would be passed only to people a user knew in real life, perhaps on a school playground or among neighbors.

Users found ways to pass code strings to strangers anyway. This page describes several protocols, using gestures, canned sentences, or movement of objects in the game.

After you read the ways above to make secret friends, look here. Another way to make secret friends with toons you don’t know is to form letters and numbers with the picture frames in your house. Around the game, you may see toons who have a lot of picture frames at their toon estates; they are usually looking for secret friends. This is how to do it! So, let’s say you wanted to make secret friends with a toon named Lily. Your “pretend” secret friend code is 4yt 56s.

  • You: *Move frames around in your house to form a 4.* “Okay.”
  • Her: “Okay.” She has now written the first character down on a piece of paper.
  • You: *Move frames around to form a y.* “Okay.”
  • Her: “Okay.” She has now written the second character down on paper.
  • You: *Move frames around in your house to form a t.* “Okay.”
  • Her: “Okay.” She has now written the third character down on paper.
  • You: *Do nothing.* “Okay.” This shows that you have made a space.
  • Repeat the process.
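What the quoted ritual implements is a classic covert channel, and it is easy to simulate (Python; the frame displays and the canned “Okay” acknowledgments stand in for the in-game moves): any channel that can show one symbol and signal “next” can move an arbitrary code string.

    def send_code(code):
        """Sender: display one character at a time, with an 'Okay' to advance."""
        for ch in code:
            if ch == " ":
                yield ("do nothing", "Okay")             # a deliberate pause = space
            else:
                yield ("frames form '%s'" % ch, "Okay")  # arrange picture frames

    def receive_code(channel):
        """Receiver: write down each displayed character, acking with 'Okay'."""
        received = []
        for display, ack in channel:
            assert ack == "Okay"                         # the innocuous canned ack
            received.append(" " if display == "do nothing" else display[-2])
        return "".join(received)

    secret = "4yt 56s"   # the 'pretend' code from the quoted walkthrough
    assert receive_code(send_code(secret)) == secret
    print("secret friend code delivered:", secret)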

Randy writes: “By hook, or by crook, customers will always find a way to connect with each other.”

Posted on June 20, 2007 at 12:48 PM
