Schneier on Security
A blog covering security and security technology.
April 23, 2008
Reverse-Engineering Exploits from Patches
This is interesting research: given a security patch, can you automatically reverse-engineer the security vulnerability that is being patched and create exploit code to exploit it?
Turns out you can.
What does this mean?
Attackers can simply wait for a patch to be released, use these techniques, and with reasonable chance, produce a working exploit within seconds. Coupled with a worm, all vulnerable hosts could be compromised before most are even aware a patch is available, let alone download it. Thus, Microsoft should redesign Windows Update. We propose solutions which prevent several possible schemes, some of which could be done with existing technology.
Full paper here.
Posted on April 23, 2008 at 1:35 PM
This is one of those 'duh' moments, actually. Makes perfect sense. I'm not sure why it hasn't been noticed before. But, of course, it obviously hasn't worked that way thus far, otherwise we'd have been severely crippled long before now. Again a situation of releasing too much information to the enemy, making their job easier - the hardest part of the security battle is trying to think one step ahead of them.
I'm not sure whether I understand Ewan correctly, but if the premise is right, that applies not only to Windows but especially to free software of any kind, simply because the patches and everything else are open.
Not related to the post, but I thought you might find it interesting to know: we were tested on your blog posts in class the other day.
Still waiting to see how I did :)
That's true, but usually, due to the open nature of the software, the biggest vulnerabilities are detected fairly swiftly. The main problem with Windows is like having a bucket without being able to see what shape the hole is in order to make a bung for it - all the guys are doing is looking at the shape of the bung to determine the shape of the hole, and then using the shape of that hole across the entire line of buckets to catch water coming out of them. With open software you can look into the bucket and see the hole, and bung it yourself if necessary, or at least make the decision not to use the bucket until it is repaired.
Sorry for the incredibly poor spelling in my last post - I've had a hard day trying to use Windows to track a security breach. :(
On the same topic as Daniel...
A couple years back a professor gave us single substitution and multiple substitution encrypted Crypto-Gram entries to try and decrypt. Was a rather fun exercise (and interesting to see who went the whole 9 yards writing a program to do different analyses of the text, and who made reasonable guesses as to some small subset of the text and googled to see if they could recover the plaintext), and I ended up becoming a regular reader here.
"but if the premise is right, that does not only apply for Windows but especially to free software of any kind,"
But there is a difference between having the code and getting an exploit to run on my machine.
My Ubuntu workstation (which I am typing from right now) can be directly connected to the Internet and STILL BE SAFE.
That's because a default installation of Ubuntu does not have any open ports.
Which pretty much leaves you with attacking the TCP/IP stack ... or getting me to download and run some app from you.
The problem with Windows is that, by default, LOTS of ports are open. Microsoft just threw a software firewall on top of them.
The OS vendors CANNOT depend upon the customers keeping their systems up-to-date all the time.
i hope they come out with a patch real soon to fix this vulnerability...
What part of "diff" do you not understand?
Unless you start encrypting the libraries, you are not going to be able to prevent this. (And there are ways around that as well.)
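The "diff" point can be made concrete with a toy sketch: locating the bytes a patch changed takes one pass over the two binaries. The byte strings below are fabricated stand-ins for a real pre- and post-patch executable.

```python
# Toy demonstration that finding what a patch changed is trivial:
# compare the old and patched binaries byte by byte. The byte
# strings here are made up; they stand in for real executables.

old_binary     = b"\x55\x89\xe5\x83\xf8\x40\x77\x10"  # fabricated code bytes
patched_binary = b"\x55\x89\xe5\x83\xf8\x3f\x77\x10"  # one bound tightened

# Each entry is (offset, old byte value, new byte value).
diffs = [(i, o, p) for i, (o, p) in
         enumerate(zip(old_binary, patched_binary)) if o != p]
print(diffs)  # [(5, 64, 63)] -- offset 5: 0x40 became 0x3f
```

Real tools diff the disassembly rather than raw bytes, but the principle is the same: the patch itself points straight at the fixed code.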
This has been known for at least 5 years, if not longer. In my opinion, it is just an excuse to not release patches. It makes the stupid assumption that the Black Hats do not already know about the vulnerability.
If you make the assumption that you are smarter than everyone else in the field, you will find out the hard way that you are not.
For Microsoft and other vendors, it's not a matter of assuming they're smarter than everyone else, it's knowing that they have the advantage of the source code and developers looking at it as opposed to just having to attack the program without the help of the source code. Most worms and viruses spread through already patched holes, so releasing a patch for an unexploited hole guarantees that the hole will be exploited in the future.
However, I believe it's a mistake to not put out patches because some people won't install the patch you send out.
Similar research was presented at BlackHat this year, and information about disclosure requirements et al. was discussed at DEFCON (I admit it might be the SAME research, in fact; I'm bad with names)
The main issue seems to be third-parties that get a hold of these exploits and use them in their security products. The "0day" can be extracted from one vendor's product and reused against sites using a different vendor which would not have the protection in place (but which would have the bug).
/salutes Captain Obvious
I read the first part of the paper. One issue is that they declare success too easily. If a patch adds an input validation check, they declare success if they produce an input that triggers the validation check. However, that alone does not suffice for an exploit. If it's a buffer overflow, they have to somehow produce an overflow that makes the program take some desired action. While that's often possible for a human cracker, they haven't demonstrated the automatic production of that kind of exploit.
Consider this example: their tool detects that a patch adds a new check to a program, that an input must be 3 or less, and the tool happily prints out the input "4" and calls it an exploit. But what the tool didn't understand is that this is DRM code: previously the user was allowed to ask for four copies of a song, and now she is limited to 3. Asking the old version for 4 copies is not an exploit.
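That failure mode can be sketched in a few lines (the function names and the copy-limit scenario are invented for the illustration): the naive success criterion flags any input the patched check newly rejects, whether or not it is actually exploitable.

```python
# Sketch of the false positive described above: a tool that declares
# success as soon as it finds an input the patched check newly
# rejects would flag "4" here -- but exceeding a DRM copy limit is
# not a security exploit. All names are invented for this toy.

def copies_allowed_old(n: int) -> bool:
    return n <= 4        # pre-patch: four copies of a song allowed

def copies_allowed_new(n: int) -> bool:
    return n <= 3        # patched: limit tightened to three

# The naive "exploit generator": any input accepted before the patch
# but rejected after it is declared an exploit trigger.
candidates = [n for n in range(1, 10)
              if copies_allowed_old(n) and not copies_allowed_new(n)]
print(candidates)   # [4] -- triggers the new check, but is harmless
```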
"What if a vulnerability is found in Netfilter?"
No ports open means NO ports open.
Therefore, netfilter would not be needed.
I am not running netfilter. Therefore, a flaw in netfilter would NOT affect me.
You only need a firewall (which netfilter is) when you have an open port.
Ubuntu got it right.
Microsoft continues to get it wrong.
This seems to be an argument for closing the window represented by the average amount of time it takes to distribute and install a patch. Perhaps this could be addressed to some extent by releasing and distributing all patches encrypted, waiting until you've reached some threshold of distribution, then releasing the keys for immediate installation.
Not completely thought out; obviously there are other issues, like IT departments wanting to test patches for some period before releasing them...
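The two-phase scheme sketched above could look roughly like this. It's a stdlib-only toy illustrating the flow, not a recommended cipher: a real deployment would use a vetted authenticated cipher such as AES-GCM, and all names here are invented.

```python
# Toy sketch of "distribute encrypted, release the key later".
# The SHA-256 counter-mode keystream below is only to keep the
# example dependency-free; do NOT use it as a real cipher.

import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256-derived keystream (toy cipher)."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

# Phase 1: vendor ships the encrypted patch to everyone.
key = secrets.token_bytes(32)
patch = b"binary patch contents"
shipped = keystream_xor(key, patch)

# Phase 2: once distribution crosses the threshold, publish the key;
# every host decrypts and installs at (roughly) the same moment.
assert keystream_xor(key, shipped) == patch
print("patch recovered after key release")
```

As a later comment points out, this only narrows the window; an attacker with the pre-patch and post-patch binaries needs no access to the patch file itself.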
Yes, yes. Of course they could but full disclosure seems to work better than security through obscurity. In theory they could do this but in practice it doesn't seem to work that way.
Was it the Code Red worm that took advantage of an exploit MS had patched a year (or more?) before, and still ran wild? Because guess what... the problem isn't 0-day attacks... it's that incompetent admins don't patch their servers, and that happens because incompetent developers sometimes introduce new bugs when fixing security issues.
Anyway, full disclosure obviously works better than security through obfuscation. The complete lack of released security patches hasn't prevented people from writing attacks against Windows. But the frequent patches and source code of open source seem to have done a fair job keeping it safe.
@Brandioch Linux's IP stack may have vulnerabilities
@Bryan he was talking about the IP stack, I don't believe ubuntu has iptables enabled by default.
The reason for open disclosure was as a big stick to get companies to patch quickly or at all.
Some companies (not just Microsoft, but some others as well) handled vulnerabilities, not as a security issue, but as a PR problem. You would report a vulnerability and they would claim that there was no problem or "it only affects a small number of users". You had to threaten open disclosure (with exploit) just to get them to fix the problem.
Now that we have the DHS involved with security, I am afraid we are back to the "Security as PR" problem. (If we do not know about the vulnerability, then it does not exist.)
It seems some of the posters above are a bit confused -- the "patches" the researchers are referring to are clearly binary diffs of executables -- very different from a source code patch to a piece of free or open source software. Obviously, the source code patch ALREADY lets you know what the exploit is that they are trying to fix. Whereas a patched executable is trying to HIDE it. The point of the research is that now we know that it doesn't work.
The operating system you use may not have any open ports, but that simply means that you are not vulnerable to attacks that connect to a service listening on your system. There is no shortage of attacks that target client software that is used to connect to the internet.
If you are using the internet with any common tools to do tasks such as browsing the web, sending or receiving emails, uploading and downloading files (p2p or otherwise), or in any way sharing data with other computers, there is a good chance that there are vulnerabilities in the applications you are using.
You are not safe just because you refuse to allow attacks to initiate connections, and an attacker does not need to "trick" you into downloading and installing a program, they merely need to get you to access content they control via a client with a vulnerability they are aware of.
The other big problem is that illegal installations of Windows will not get that hole plugged no matter what. That in turn becomes a problem for everyone else. With open source, the updates are free, so the holes should (and do) get plugged everywhere faster.
"The operating system you use may not have any open ports, but that simply means that you are not vulnerable to attacks connect to a service listening on your system."
And what had I said on that subject?
"Which pretty much leaves you with attacking the TCP/IP stack ... or getting me to download and run some app from you."
So you comment:
"If you are using the internet with any common tools to do tasks such as browsing the web, sending or recieving emails, uploading and downloading files (p2p or otherwise), or in any way sharing data with other computers, there is a good chance that there are vulnerabilities in the applications you are using."
So, my challenge to YOU is to crack my machine. It's running the current beta of Hardy Heron. Go on. Crack it. I'll wait.
Oh, is there a flaw in your argument? Does your entire argument fail because YOU cannot convince ME to run YOUR app on MY machine?
"You are not safe just because you refuse to allow attacks to initiate connections, and an attacker does not need to "trick" you into downloading and installing a program, they merely need to get you to access content they control via a client with a vulnerability they are aware of."
Yeah, I can see why you choose a 'nym such as "havvok". And that is not a compliment.
The Internet is all about downloading. That's what your computer just DID to allow you to read this post.
Go ahead. Show that you know what you're talking about. Crack my machine. I will say right here and now that you cannot. You will be unable to do what you claim is so easy.
Anyway, on to the subject of patches and patching.
If you're running Windows, do some digging and find a sub-system / app that Microsoft has released a patch for. Make sure it replaces one of the binaries on your system.
So, install said item ... then go to Microsoft's update site and have Windows patch itself.
So now you've patched your system, right? Check the binary now.
Now uninstall that item. And re-install it. Go back to Microsoft's site. Did it re-patch your system?
Check the binary. Is it the patched version?
If not, why not?
On Ubuntu, as long as you're installing / un-installing through their package management system, you'll have the latest version that your machine has seen.
that ubuntu system sounds stupid if the latest version is faulty. i want control over the version of the installed software, if as you imply this is impossible on that system i don't want it.
getting someone to browse a website loaded to exploit a vulnerability in their system may not be easy if all you have is a semi-anonymous posting on a forum you do not have complete control over.
it's a totally different matter if you do control it.
to state the obvious, an attacker gets to know some data which may at least hint what kind of setup is used. which in turn can help infect a system.
anyway, there is too much code around to safely assume none of it can be exploited.
which in the end means, a false sense of security is worse than the realization of this fact.
back to topic:
in regard to encrypted patches, all an attacker needs are the pre-patch binary and the post-patch binary; any encryption of the patch itself is superfluous.
code obfuscation may sound spiffy, but in itself can lead to more security holes.
the best way to secure code is to make it as clean as possible.
the paper mentioned peer-to-peer patch distribution, an absurd concept from a security standpoint.
it may save the bigshots some pocket change, but nobody in his right mind would want to download patches from ever-changing ips and distributors.
there are mechanisms to try to ensure patch integrity, but i reckon that system would itself be a huge nightmarish security hole.
if ever implemented by ms, i'd nickname it the largest botnet on earth.
so whoever favors such viral patch distribution, please, stop this folly and entertain yourself with other less dangerous sandcastles. this notion needs to die fast.
aimed at whoever fathered the idea of p2p patch distribution.
@Fred F: "The other big problem is that illegal installations of windows will not get that hole plugged no matter what."
Now you and I both know that's not true. It's also been discussed here by Bruce a number of times.
Non-English Windows versions without valid licenses are not so easily patched. Windizupdate, for example, does not update them. For non-technical users that means they will not update their systems.
Right, this is one of those genius-in-simplicity moments. I'm just not so sure this wasn't noticed before. Probably it was. Probably it was exploited too :)
"For non-technical users that means they will not update their systems."
non-technical users mostly do not update their systems anyway. On the other hand, at least some of them have "technical" friends who may advise where and what to download so that they can update. And even validate against MS's Windows legality validator (whatever it is properly called) so that they can download MS antispyware, et cetera.
But that raises another issue: piracy is not safe per se. Downloading an illegal copy raises the risk of downloading additional "-wares" along with it.
@Brandioch: "That's because a default installation of Ubuntu does not have any open ports."
Which is wrong, btw., as Ubuntu opens ports for dhclient and avahi in a default install.
FWIW, I think it's a good idea to have these exceptions to the "no open ports" rule (btw., another temporarily open port is required to receive DNS responses).
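For what it's worth, the kind of check behind a "no open ports" claim can be sketched as a simple TCP connect scan. This is a toy (a real audit would use netstat or nmap), and note that a connect scan only sees TCP listeners, so the UDP services mentioned above, like dhclient and avahi, won't show up in it.

```python
# Toy TCP connect scan of localhost: report ports with a listener.
# Only sees TCP services; UDP listeners (dhclient, avahi-daemon)
# are invisible to this kind of probe.

import socket

def tcp_listeners(host="127.0.0.1", ports=range(1, 1025)):
    """Return the subset of `ports` accepting TCP connections on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.05)
            if s.connect_ex((host, port)) == 0:   # 0 means it connected
                open_ports.append(port)
    return open_ports

print(tcp_listeners(ports=range(1, 100)))  # likely [] on a locked-down box
```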
@Sebastian: "Non-English windows versions without valid licenes are not so easily patched."
I was always under the impression that downloading security patches would go without any license checking.
For example, when reinstalling, I always do the first patch sequence without activation.
"so whoever favors such viral patch distribution, please, stop this folly and entertain yourself with other less dangerous sandcastles. this notion needs to die fast..."
Dude, you should get this comment to Ubuntu quickly; they are actually distributing the OS CD using torrents. Imagine that!!
Oh wait! Possibly it is because the .torrent file contains SHA1 hash values of each segment, which the downloading client verifies against what is received.
So if you have got the .torrent file from the 'source', e.g. ubuntu.com, you needn't worry about someone putting out hacked versions on p2p...
Unless your ignorant hallucinations extend to hacking the .torrent file hosting site / SHA1 collision attacks...
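That verification step can be sketched as follows. It's a toy illustration of per-piece hash checking, not BitTorrent's actual wire format; the piece size and data are made up.

```python
# Toy sketch of BitTorrent-style piece verification: the trusted
# .torrent file carries a SHA-1 digest per piece, and the client
# rejects any piece from a peer that doesn't match. PIECE_LEN and
# the "image" bytes are invented for the example.

import hashlib

PIECE_LEN = 16

def make_manifest(data: bytes):
    """What the trusted .torrent would carry: per-piece SHA-1 digests."""
    pieces = [data[i:i + PIECE_LEN] for i in range(0, len(data), PIECE_LEN)]
    return [hashlib.sha1(p).digest() for p in pieces]

def verify_piece(index: int, piece: bytes, manifest) -> bool:
    """Accept a downloaded piece only if its digest matches the manifest."""
    return hashlib.sha1(piece).digest() == manifest[index]

iso = b"pretend this is an Ubuntu install image!"
manifest = make_manifest(iso)

good = iso[:PIECE_LEN]
evil = b"trojaned bytes!!"                  # same length, wrong content
print(verify_piece(0, good, manifest))      # True
print(verify_piece(0, evil, manifest))      # False
```

So the security reduces to getting the manifest from a trusted source over a trusted channel; after that, the swarm of ever-changing peers cannot substitute hacked content without a hash collision.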
how about not relying on one layer of security?
if you rely only on patching from your vendor, you will have problems no matter how quickly they release patches, and no matter how quickly exploits for the vuln appear. Good security is multi-layered, simply because no one solution or method is foolproof.
Security has to be designed in from the beginning. This research is another demonstration that security can not be implemented in patches.
using patches to find the bugs is old. I know this has been used in the wild for a while.
If Hardy Heron is invulnerable to attack, what do you think the hardy-security repos is for?
I'll tell you: it's for distributing fixes for vulnerabilities in Hardy and its applications, as they are discovered.
Security researchers and/or developers and/or hackers will discover these flaws. They may already have done so, although perhaps you're currently in a sweet window where Hardy is so recent that nobody has yet produced a working exploit. If so, that won't last long.
The probability of someone figuring out how to hack a default-config Hardy Heron box approaches 1. It will happen. Even so, the probability of your box being hacked is pretty low. It will be even lower if you accept that it is not invulnerable, and take critical security patches as and when they're offered.
The article is interesting, but the suggested solutions are ineffective.
Once again, the only viable way is the "infrastructural" one.
Hopefully, in the near future, more and more software will become declarative. In that way, the vendor develops a minimal "framework" that factorizes or generalizes the execution logic of most applications, and lets the programmer create their own "solutions" in a declarative way, as a sort of "specialized instance" of the framework. I'm speaking about real things, like .Net 3.0 (WPF, WCF, WF), Glade, and the like.
Now, we can still expect patches, but the "convergence time" for effective security will be considerably lower, because the patches will apply only to the framework (the infrastructure) and less frequently to the real applications.
I couldn't agree more.
The fact that something like 80% of the bugs are buffer overflows from C's wonderful standard library and memory management model should be lesson enough that perhaps better programming tools are needed.
Very interesting finding - showing how an answer to a problem might sometimes be THE problem.
There is a scenario in which the problem is obviously completely avoided: centralized, web-based applications running in a browser.
Google Docs & Spreadsheets, for example, could be patched in a millisecond by Google and does not suffer from this problem, as the patch doesn't need to be published or sent to the users.
Unfortunately, this will solve only 1% of the problem: the other 99% (Internet Explorer) still needs to be addressed. But for that my personal suggestion is to use Firefox... ;-)
Time will tell if the new Microsoft programming paradigms, like Occasionally Connected Applications, Smart Clients, or other client-based models, will prevail over Google's strategy.
Badly designed/implemented software has bad holes to plug .. and whether you can see the hole after the plug depends upon the skill of the plugger and not on the tools used to plug the hole.
"I can't hack your Hardy box myself, because I don't know what vulnerabilities there are."
The code is 100% available.
Maybe your problem is that you do not understand what "security" is.
"But I guarantee ... blah blah blah ... you visit."
I'm seeing a lot of claims. But nothing backing up those claims. You really don't understand this "security" thing, do you?
"Security researchers and/or developers and/or hackers will discover these flaws."
There are patches for Hardy almost every single day. Yet my system is still un-cracked. Someone should explain this "security" thing to you.
"Even so, the probability of your box being hacked is pretty low."
Why is it "pretty low"? What makes it "pretty low"? Explain how it could be "pretty low" when I can put an unpatched WinXP machine on the Internet and have it cracked within 5 minutes. Automatically.
People on this forum tend to be security professionals. Your posts in this discussion are ... less than professional.
Yes, it's great that the latest Ubuntu doesn't listen on any ports by default. That does close one avenue of attack, but it's hardly the only one, especially for a desktop machine.
Running an unpopular desktop OS is also a wonderful defense, as no one bothers to serve malware for it. My home Windows box has been malware free, despite several years of my being quite sloppy in applying patches, thanks simply to running a 64-bit version.
But to claim that you're "safe" simply because you have no open ports is to ignore many popular malware vectors. "Safer than Windows XP" is a credible claim, but not a very interesting one.
"Yes, it's great that the latest Ubuntu doesn't listen on any ports by default. That does close one avenue of attack, but it's hardly the only one, especially for a desktop machine."
I guess your definition of "professional" does not include "competent".
My definition of "security" is "the process of identifying threats and reducing their effectiveness".
So, closing off an entire avenue of attacks WOULD meet that definition.
"Running an unpopular desktop OS is also a wonderful defense, ..."
No. It is not. The Internet-based attacks are automated and randomly scan blocks of addresses. If you are vulnerable, you WILL be cracked. In time.
"My home Windows box has been malware free, despite several years of my being quite sloppy in applying patches, thanks simply to running a 64-bit version."
So you are running a version of Windows ... without regular patching ... without a firewall ... directly connected to the Internet. That is what you are implying.
What was that you said about "professional"? Like I said, I don't see "competent" in your usage of "professional".
"But to claim that you're "safe" simply because you have no open ports is to ignore many popular malware vectors."
That's nice. Uninformed on your part, but nice. Maybe you aren't aware of Ubuntu's user limitations (by default).
Again, prove that what you say is correct. Crack my machine. Otherwise all I'm seeing is another uninformed Internet troll trying to claim that "security" does not exist.
The code is 100% available to you and you STILL cannot crack my machine.
That is because SECURITY is about REDUCING the effectiveness of the potential threats.
And Ubuntu, by default, is secured such that the threats YOU present are ineffective.
@Joe Buck: "However, that alone does not suffice for an exploit. If it's a buffer overflow, they have to somehow produce an overflow that makes the program take some desired action. While that's often possible for a human cracker, they haven't demonstrated the automatic production of that kind of exploit."
That's why the very popular Metasploit has a framework where you can mix and match - once you have an exploit for a flaw (such as your buffer overflow), you add it to the framework, and it's "Chinese Menu" time - pick an exploit from Column A, and a payload from Column B.
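The "Column A / Column B" idea can be sketched as a toy registry. To be clear, this is NOT Metasploit's API; every name and byte string below is an invented stub, shown only to illustrate why decoupling exploits from payloads multiplies an attacker's options.

```python
# Toy mix-and-match framework: exploits (how to trigger a flaw) and
# payloads (what to run afterwards) are registered independently and
# combined at launch time. All entries are harmless invented stubs.

EXPLOITS = {
    "toy_overflow":  lambda payload: b"A" * 64 + payload,  # stub trigger
    "toy_fmtstring": lambda payload: b"%n%n" + payload,
}
PAYLOADS = {
    "toy_bindshell": b"\x90\x90<bind shell stub>",
    "toy_calc":      b"\x90\x90<calc stub>",
}

def build(exploit_name: str, payload_name: str) -> bytes:
    """Column A + Column B: any exploit can carry any payload."""
    return EXPLOITS[exploit_name](PAYLOADS[payload_name])

blob = build("toy_overflow", "toy_calc")
print(len(blob))   # 64 filler bytes plus the payload stub
```

With N exploits and M payloads, an attacker gets N x M attack combinations from N + M pieces of work, which is exactly why a single newly derived trigger is so valuable.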
@Brandioch:"The code is 100% available. Maybe your problem is that you do not understand what "security" is."
No, my 'problem' (if it really is a problem) is that I'm not inclined to spend my time looking for vulnerabilities in Ubuntu. I have other code to be bug-checking.
You seem to believe that the reason I have not cracked your personal machine has something to do with the security of the OS you run.
I assure you that it does not. You could be running unpatched WinXP with all services turned on and an animated dancing bear on your website singing a "please crack me" song, and I still wouldn't crack you.
So your continued state of uncracked-by-me, proves nothing. It is a red herring. And you know it.
However, there was a guy on an earlier post on this blog, saying that given sufficient resources he could crack anyone. For reasons which weren't entirely clear to me, he felt that for it to be a fair test, the victim had to fund the attack, although I don't believe he was a professional pen tester. Still, maybe you and he should link up, because you can't both be right.
"No, my 'problem' (if it really is a problem) is that I'm not inclined to spend my time looking for vulnerabilities in Ubuntu."
Ah, the time honored "I'd do it but I'm too busy right now" claim.
"You seem to believe that the reason I have not cracked your personal machine has something to do with the security of the OS you run."
There is no "seem to believe". That is exactly what I have stated.
"I assure you that it does not."
Wow. The assurance of some anonymous posting on an Internet forum. From someone who does not even understand what "security" is. I'll take that to the bank.
"So your continued state of uncracked-by-me, proves nothing."
If it takes a million years to crack a message encrypted with blowfish (of a sufficient key length), then that message is "secure" from that attack.
So your claim about "not having time" only supports my position. But that is because you do not understand "security".
Time + Effort == Security
Automated attacks will crack an unpatched WinXP box on the Internet in 5 minutes. That is NOT "secure".
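For what it's worth, a back-of-the-envelope calculation supports the "million years" comparison above, assuming a 128-bit Blowfish key and a very generous 10^12 guesses per second (both figures are assumptions for the sketch):

```python
# Rough brute-force estimate for a 128-bit key at 1e12 guesses/sec.
# On average an attacker searches half the keyspace before hitting
# the right key.

KEY_BITS = 128
GUESSES_PER_SEC = 10**12
SECONDS_PER_YEAR = 365 * 24 * 3600

keyspace = 2**KEY_BITS                       # ~3.4e38 keys
avg_years = keyspace / 2 / GUESSES_PER_SEC / SECONDS_PER_YEAR
print(f"{avg_years:.2e} years on average")   # on the order of 10^18 years
```

Compare that with the five minutes quoted for an unpatched WinXP box: the two situations differ by more than twenty orders of magnitude.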
I run CP/M here and I challenge you to hack it. Catch my drift?
I've been following Bruce's blog for quite a while and it's been very nice so far. You really want to ruin it? please post such inflammatory on usenet, some forum or anywhere else you like. Thank you.
"I run CP/M here and I challenge you to hack it. Catch my drift?"
Post the IP address. Catch MY drift?
Oh, did you not understand the conversation? How ... typical. Maybe when you learn a little bit about "security" you'll be able to contribute something productive.
Or did you skip reading the article that Bruce so kindly linked to?
Did you manage to miss the part about how it discussed exploits derived from patches?
@Brandioch: "Post the IP address"
Heh. I see that the rules change if anyone shows even rhetorical interest in your game. Your machine is proven to be "secure" by the indisputable fact I haven't cracked it, remember? I don't remember you posting your IP address to make your "proof" work. Ever heard of Calvin-ball?
"If it takes a million years to crack a message encrypted with blowfish (of a sufficient key length), then that message is "secure" from that attack. So your claim about "not having time" only supports my position"
If you actually think that my claim about me 'not having time' to hack your box is comparable to me 'not having time' to brute-force blowfish, then it's starting to become clear how it is that in your private universe, you're the only person who understands security.
To recap: You're probably right that I don't have time to crack your box. I claim that in any case I do not in fact have the inclination to crack your box. You think I'm maybe lying about that, because I'm anonymous, which makes a huge pile of no sense: if we actually knew who each other were, then I might conceivably think there's something on your box worth having. But we don't, so I don't.
Anyway, let's just agree that for some reason, I don't spend my time cracking either Ubuntu systems in general, or yours in particular.
I also don't spend my time brute-forcing blowfish. However, unlike brute-forcing blowfish, other people do spend their time successfully cracking Ubuntu. Principally Ubuntu developers and security researchers. Spotted the difference from blowfish yet?
The reason your machine isn't cracked (although I note that the only evidence I have of this is the assurance of some anonymous posting on an Internet forum, so for all I can "bank", you were rooted long ago) is that it has nothing on it of sufficient value to cause any of the numerous people capable of cracking it, to choose to do so. Time and effort have very little to do with it, since the Ubuntu release team could crack your box with minuscule expenditure of either. What it would cost them is the risk of being caught. What they'd gain is nothing of any value to them. That's "understanding security".
Sure, your machine is secure for its purpose - I never said otherwise. What I said, is that your security has little or nothing to do with whether you have open ports or not. You could run dozens of services on Ubuntu and it still be as secure as you need to be. You could somehow become the target of an unscrupulous Ubuntu expert or insider, in which case your machine would no longer be secure even with no ports open.
Anyhow, your box has a rather different state of security, from the state of blowfish that (as far as we know or suspect, at the moment and for the foreseeable future with known computing methods) there does not exist anybody in the world who can brute-force it.
Spotted the difference from blowfish yet? It's that difference which is why we don't liken crypto algorithms to operating systems. And why over here in the real world, we don't assume that, just because some anonymous person who doesn't know or care who you are, does not deign to hack your box, that your box must therefore be "secure" (whatever you think that means).
"Time + Effort == Security"
Ah, that's what you think it means. Since the SI units of time are seconds, and since I assume you wouldn't quote an equation that wasn't physically consistent, I guess the dimensions of security are seconds too. You're right, I don't understand security - I thought the SI unit was the schneier (symbol Sn). "Block port 80 inbound and we'll have ourselves a 15 kSn box here", we say around the office.
"Automated attacks will crack an unpatched WinXP box on the Internet in 5 minutes. That is NOT "secure"."
Agreed. That's, like, a few mSn at best, or equivalently 157s of time plus half a "round tuit" of effort.
So if unpatched WinXP is "not secure", and your machine avoids one of its most egregious problems, does this again "prove" the uncrackability of your machine? Why do you need two proofs - you already have that I haven't hacked it?
In any case, the fact that open ports expose exploitable flaws in unpatched WinXP says nothing about whether ports need to be closed in Ubuntu. It's good practice to keep the attack surface to a minimum, and hence not to run unnecessary services, and Windows violates that (or arguably it has a funny idea of "necessary"). Why does this make you think that not running services is somehow a critical part of Ubuntu's security? Do you think that if Ubuntu by default ran an echo server on port 7, that it would no longer be "secure", because it has an open inbound port?
@Kilgore: "You really want to ruin it?"
He's not ruining it, is he? There's discussion above of Bruce's post, plus a couple of us are bear-baiting. If anyone else doesn't like it, I'll happily stop: if we don't actively disagree, Brandioch will just assume that means everyone thinks he's right.
Funny thing - brings me back to my systems programmer days. Back then we sometimes had to reverse-engineer patches just to know what they would do in our 7x24 system. For "security" reasons the (blue) vendor didn't give enough information. Later on, when some vendors started putting "kill switches" into products we sold, it was a must. Fun, but sometimes painful; in a word-based, octal machine, I'd take blue any day.
So, it is possible and sometimes even needed. And of course you learn a lot what patches are trying to do. And if it gets too painful, you write tools for it.
"Heh. I see that the rules change if anyone shows even rhetorical interest in your game."
Claim that they've changed. They have not.
"Your machine is proven to be "secure" by the indisputable fact I haven't cracked it, remember?"
Nope. It is "secure" because of the design of the system. PART of that design is reducing the avenues of attack on default installations.
You claimed that it wasn't.
I told you to prove your claim by cracking my machine.
You failed to do so. Instead you keep finding excuses for not being able to do so.
Anyone can claim anything on an Internet forum.
But I can demonstrate an uncracked Ubuntu machine.
And Kilgore can demonstrate an uncracked CP/M machine. Your analysis is weak.
Three-post maximum for each article for every IP address please Bruce?
(Don't feed the trolls, people...)
"And Kilgore can demonstrate an uncracked CP/M machine. Your analysis is weak."
If you understood security, you would know the term "air gap". All Kilgore did was accidentally stumble across the concept while he was trying to be amusing.
And if you understood security you would have understood my reply to him about the error he made. But since you do not, I'll explain it.
He was comparing a system with no TCP/IP stack and no network connectivity (the complete system, NOT the operating system) to an operating system with full Internet connectivity.
And you thought that his statement was profound and insightful.
"Air gap" is a step below "firewall" which is two steps below a default Ubuntu machine's ability to post here securely.
This issue is not new; one year ago I was dealing with an exploit that got into System Restore, and when I tried to restore, the whole thing was messed up. Not exactly the same thing, but somewhat related.
Actually we should all stop using computers. They are too unsafe...
Check this out - Oracle critical patch updates can easily be used to explore vulnerabilities and also find new ones.
"Air gap" is a step below "firewall" which is two steps below a default Ubuntu machine's ability to post here securely.