Schneier on Security
A blog covering security and security technology.
January 31, 2013
The Eavesdropping System in Your Computer
Dan Farmer has an interesting paper (long version here; short version here) discussing the Baseboard Management Controller on your computer's motherboard:
The BMC is an embedded computer found on most server motherboards made in the last 10 or 15 years. Often running Linux, the BMC's CPU, memory, storage, and network run independently. It runs Intel's IPMI out-of-band systems management protocol alongside network services (web, telnet, VNC, SMTP, etc.) to help manage, debug, monitor, reboot, and roll out servers, virtual systems, and supercomputers. Vendors frequently add features and rebrand OEM'd BMCs: Dell has iDRAC, Hewlett Packard iLO, IBM calls theirs IMM2, etc. It is popular because it helps raise efficiency and lower costs associated with availability, personnel, scaling, power, cooling, and more.
To do its magic, the BMC has near complete control over the server's hardware: the IPMI specification says that it can have "full access to system memory and I/O space." Designed to operate when the bits hit the fan, it continues to run even if the server is powered down. Activity on the BMC is essentially invisible unless you have a good hardware hacker on your side or have cracked root on the embedded operating system.
What's the problem?
Servers are usually managed in large groups, which may have thousands or even hundreds of thousands of computers. Each group typically has one or two reusable and closely guarded passwords; if you know the password, you control all the servers in the group. Passwords can remain unchanged for a long time -- often years -- not only because they are very difficult to manage or modify, but also because changes are nearly impossible to audit or verify. And due to the spec, the password is stored in clear text on the BMC.
IPMI network traffic is usually restricted to a VLAN or management network, but if an attacker has management access to a server she'll be able to communicate to its BMC and possibly unprotected private networks. If the BMC itself is compromised, it is possible to recover the IPMI password as well. In that bleak event all bets and gloves are off.
BMC vulnerabilities are difficult to manage since they are so low-level and vendor-pervasive. At times, problems originate in the OEM firmware, not with the server vendor, adding uncertainty as to what is actually at risk. You can't apply fixes yourself, since BMCs will only run signed and proprietary flash images. I found an undocumented way of gaining root shell access on a major vendor's BMC, and another giving out-of-the-box root shell via SSH. Who knows what's on other BMCs, and who is putting what where? I'll note that most BMCs are designed or manufactured in China.
Basically, it's a perfect spying platform. You can't control it. You can't patch it. It can completely control your computer's hardware and software. And its purpose is remote monitoring.
At the very least, we need to be able to look into these devices and see what's running on them.
I'm amazed we haven't seen any talk about this before now.
EDITED TO ADD (1/31): Correction -- these chips are on server motherboards, not on PCs or other consumer devices.
Posted on January 31, 2013 at 1:28 PM
Hmm. How are those BMCs accessed? Do they have their own IP? Take over some port from the OS? Or wait for some secret protocol? Have their own port and connection?
If someone 20 or 30 years ago had written a science fiction story about something like this happening in the future, they would have been laughed at. Nobody could ever be so stupid as to allow a situation like this to happen! Yet here we are.
The conclusion that I've reached is that humans, for all their self-advertised intelligence, are basically stupid beasts who do whatever seems convenient and easy at the moment. They have no ability to envision future possibilities in any real sense, and will not change their behavior to avoid disaster. Instead, they adapt to the consequences when the time comes.
@EPh - BMC has its own IP address. Ethernet port can be dedicated or shared with OS.
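Whether dedicated or shared, a listening BMC usually answers an RMCP/ASF "presence ping" on UDP port 623 -- the same probe tools like ipmiping and network scanners use to discover them. A minimal sketch (the function name `probe_bmc` is mine; the packet layout follows the ASF 2.0 spec):

```python
import socket

# RMCP/ASF "Presence Ping", per the ASF 2.0 spec:
# RMCP header (version 0x06, reserved, sequence 0xFF = no ack, class 0x06 = ASF),
# then the ASF body: IANA enterprise number 4542 (0x000011BE),
# message type 0x80 (Presence Ping), message tag, reserved, data length 0.
PING = bytes([
    0x06, 0x00, 0xFF, 0x06,   # RMCP: version, reserved, sequence, class=ASF
    0x00, 0x00, 0x11, 0xBE,   # ASF IANA enterprise number (4542)
    0x80, 0x00, 0x00, 0x00,   # type=Presence Ping, tag, reserved, data length
])

def probe_bmc(host, timeout=2.0):
    """Return True if something answers an RMCP presence ping on UDP 623."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(PING, (host, 623))
        try:
            s.recvfrom(1024)   # a "Presence Pong" if a BMC is listening
            return True
        except socket.timeout:
            return False
```

Usage would be something like `probe_bmc("192.168.10.5")` against an address you suspect hosts a BMC; no answer doesn't prove absence, since some BMCs only respond on their dedicated port.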
On my MacBook:
bash-3.2# networksetup -showBMCSettings
Unable to determine if BMC is supported - error 0xFFFEF92D.
** Error: BMC is not supported on this device.
I work with these facilities quite a bit as a server administrator. It's not entirely true that these can't be patched.
Working with Sun (now Oracle), HP, Dell, and IBM server hardware, we've consistently had to patch these controllers -- most often for bugs, and secondarily to pick up new features, whether for performance (patches usually also contain BIOS firmware upgrades) or to keep the machines managed efficiently.
For the big guys at least, they can be patched, though admittedly it can be a pain in the butt, and I don't know how many admins actually do it outside of my own practice; but it is certainly possible to patch any bugs they might be found to contain. I'm sure this varies by hardware vendor, and perhaps by the age of the systems, so there is some risk there too.
Anton Yuzhaninov has it right. And there are a few things you can do to mitigate this risk.
a. don't plug in the management port if you don't want to use it.
b. restrict access to the VLAN (or physical switch) that you connect it to if you do connect it.
c. restrict access to the IP addresses you assign to them if they are connected.
After that, if someone gets access to them you'll have to fall back on strong passwords and staying current with the patches. Fortunately, the patches are usually easy to apply and do not require rebooting the server, since they are on a subsystem of that server.
EPh: They have their own ethernet port on the box. Smart money is to put them on their own (unrouted) VLAN and behind a firewall.
@Kryai - as with other closed-source products, you depend on the vendor. If the vendor doesn't want to ship a new BMC firmware version after a bug in the code is reported, your servers will remain vulnerable.
@EPh: BMCs may have a dedicated ethernet port or share a main system port, depending on a particular system's configuration. They're assigned an IP address, usually in private / non-routable space (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16), that's accessible only within the entity's datacenter LAN. However, access may be forwarded outside (say, to the office LAN where admins are located), or made available through a bastion host or port-forwarding to specific network ranges or IPs.
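Those ranges are the RFC 1918 private blocks, and Python's standard `ipaddress` module makes it easy to sanity-check that your BMC addresses really land inside them. A small audit sketch (the function name and sample addresses are mine, purely illustrative):

```python
import ipaddress

# The three RFC 1918 private address blocks mentioned above.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private_bmc_addr(addr):
    """True if addr falls inside one of the RFC 1918 blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

print(is_private_bmc_addr("192.168.10.5"))   # True
print(is_private_bmc_addr("8.8.8.8"))        # False -- a BMC here is a red flag
```

A private address is of course no defense by itself -- it only limits who can reach the thing directly, which is exactly the point Dan Farmer addresses below.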
You're going to find BMCs of some sort on virtually all server-grade hardware (and few admins would accept hardware without such functionality, given the utility of the remote management it provides). Typical uses are to provide "failsafe" serial-console access to a system and to allow hard/soft reboots. Vendor extensions to IPMI (iDRAC, iLOM, etc.) often rely on additional technologies such as Java (what could possibly go wrong), and there is often a Web interface to at least some functionality. IPMI will generally offer SNMP traps for monitoring -- a protocol with its own host of security concerns.
In addition to BMCs there are other "under the OS" systems with similar concerns: virtualization systems such as Xen (used by Amazon for its AWS / EC2 services), VMWare (used by many organizations to virtualize systems and services for management), and others (VirtualBox, Parallels, KVM, qemu, ...); and boot systems, particularly UEFI, but also bootloaders, which are increasingly small operating systems in their own right. The guys who sweat security and trust really sweat this stuff.
In addition to BMC on server boards, many laptops now have Intel's AMT which serves a similar purpose. In my experience it's always disabled by default, but it seems like it would be subject to the same security concerns.
It's nothing new really. We've deployed many servers with these functions, and best practice is to have them in their own private & secure VLAN without the ability to connect outwards to anything.
Basic measures, just like protecting ssh and root access to the server OS.
This isn't a new concern. On an individual scale it's akin to having the TPM (Trusted Platform Module) or NIC firmware compromised.
In 2010 whitepapers/security presentations were done on all three threat vectors.
What's odd is not much has been said since.
Of note is that I once found an attempt to infect a Broadcom chipset via its ASF -- in the wild.
That short version really is short...
In addition to the points that Brandioch Conner noted, checking your log files regularly would help to identify a pending or possible compromise (e.g., HP servers have a logger that tracks who logged, or attempted to log, into what they call iLO (Integrated Lights-Out), along with the IP, username, etc.).
I'm less familiar with Dell but have used their equivalent iDRAC (no idea offhand what that stands for) on the handful of Dells we run but it's similar too.
For the truly paranoid, you could perform a packet capture for traffic destined to the mgmt ip earlier in the stream and log it elsewhere to protect against missing an attacker erasing their tracks (or to correlate logs for integrity, etc. :)
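Such a capture is usually just tcpdump watching a mirror/span port for the management address. As a sketch, here is how one might assemble that command; the function name, interface, and IP are all placeholders of mine:

```python
def bmc_capture_cmd(mgmt_ip, outfile="bmc-traffic.pcap", iface="eth1"):
    """Build a tcpdump argv capturing all packets to/from a BMC address.

    mgmt_ip and iface are placeholders -- substitute your own management
    IP and the interface that sees the mirrored traffic. Write the pcap
    to a separate log host so an attacker on the server can't erase it.
    """
    return ["tcpdump", "-i", iface, "-n", "-w", outfile, "host", mgmt_ip]

cmd = bmc_capture_cmd("192.168.10.5")
print(" ".join(cmd))   # tcpdump -i eth1 -n -w bmc-traffic.pcap host 192.168.10.5
```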
I was gonna say - I *wish* my desktop had IPMI! I would have uses for that.
But no. While it CAN share a NIC with the OS (and Supermicro boards do by default, which is very naughty of them, especially since the default password for the just-grabbed-a-DHCP-address interface is ADMIN/ADMIN), that's configurable, and most of the time accessing IPMI means plugging a new wire into the management port. If you run that to your shared LAN switch, then sure, you've opened the IPMI interface to everyone on the LAN -- but the solution is *don't do that*. If it's physically connected only to a physically secure network, then you're fine unless the attacker physically has your server. And if the attacker physically has your server, you're already screwed.
Dell DRAC ~ Dell Remote Access Controller. The DRAC comes in several levels; the systems I maintain have the lowest level version and have their network support disabled, which is typically over a NIC shared with the OS. We run FreeBSD which, by default, doesn't support IPMI, and run the servers in a mode that prevents kernel modules from being loaded once the system is in multi-user. In theory this would prevent the IPMI support modules from being loaded. Also, in theory, communism works.
I've recently concluded that IPMI gives us some additional monitoring capabilities that are useful (e.g., determining system temperature, fan RPMs, etc.), but do not trust the network stack on these devices to be fully hardened and am allowing IPMI access only by root locally on the system. This seems to be a reasonable tradeoff between usability (being able to monitor system health) and security (exposing IPMI capabilities to root).
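Local-only monitoring like that typically means running `ipmitool sdr` over the system interface as root and parsing its pipe-delimited output. A sketch of the parsing side (the sample lines are illustrative; real sensor names and formats vary by vendor):

```python
def parse_sdr(output):
    """Parse `ipmitool sdr` output lines of the form 'name | value | status'."""
    readings = {}
    for line in output.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            name, value, status = parts
            readings[name] = (value, status)
    return readings

# Illustrative output, not from any particular vendor.
sample = """CPU Temp        | 42 degrees C      | ok
FAN1            | 5400 RPM          | ok
12V             | 12.06 Volts       | ok"""

readings = parse_sdr(sample)
print(readings["FAN1"])   # ('5400 RPM', 'ok')
```

Feeding this from `ipmitool -I open sdr` keeps all the traffic on the local system interface, never touching the BMC's network stack -- the tradeoff the comment describes.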
Biggest problem: some BMCs seem to default to RFC1918 addresses and ARP out periodically, even when their network connectivity is supposedly disabled. Not routing this traffic internally helps mitigate this brain-damaged behavior.
This is the same kind of exposure that network equipment management interfaces have and is mitigated through much the same techniques by segmenting the management traffic so it's not accessible by users. I think this is a threat model that is pretty well understood intuitively and would require some kind of willful misconfiguration to be a problem in most cases.
Actually, some "desktop" systems do have BMC chips in them; it all depends on whether they are enterprise systems or not. [I have seen laptops respond to BMC scans; I think it was a class of Dell, but it might have been a whitebox laptop.] Because the BMC sits before the OS on the ethernet bus, you can send it all kinds of things that you think your local firewall blocks but doesn't -- this allows script kiddies to have fun at various times [they do it for a short time and then go away]. There was a "bug" in one set of BMCs where, even if you had set its IP address to something unreachable, sending the proper Xmas-tree packet to the main system would cause the BMC to reset itself and ask for a DHCP address, which you could then probe for and get to.
Correction -- these chips are on server motherboards, not on PCs or other consumer devices.
... so far as we know ...
It is all a little late to the party... we have been talking about this quite a lot, there have been talks about AMT (Intel's IPMI-equivalent) and simply nobody cared.
It is far more fashionable to talk about browser exploits than it is to look at hardware.
How about Charlie Miller's Mac battery firmware hack? Or the PCI hacks (amongst others) by John Heasman? eEye's work on Tigon2 backdoors, which came at pretty much the same time as my own on Broadcom NICs? Andrea Barisani and Daniele Bianco on AMT?
That was all 2007 vintage, and I don't think we're done yet (though I can't be sure).
It is my opinion that this functionality has economic origins, stemming from 24x7 systems that are not necessarily staffed appropriately.
That is to say, an on-call admin using this function can remotely access an otherwise unresponsive system and potentially resolve the issue, negating the requirement for on-site staff during "off-hours".
calvin? hobbes' computer isn't working any more...
>these chips are on server motherboards, not on PCs or other consumer devices.
Yes they are: they're built into Intel's vPro chipsets, intended for business users. Note that if you want to detect them, you can't do so from the same machine the device is in, because it performs (primitive) access-control filtering to prevent that (supposedly done for "security" reasons, to prevent a local priv-esc by the end user, but it also means you can't easily check whether your own machine has this enabled or not).
People who say "just put the IPMI on its own private network" didn't read the paper, where I explicitly address that. If you compromise the server, you can reconfigure and talk to its network interface to compromise the BMC. Once that's done you can also hop networks, all while having complete control of the server. Plus you can pull the IPMI password from the BMC's memory or files. Any server that's compromised has to be viewed with extreme suspicion as to the integrity of its BMC. This is true for physical access and other methods as well -- do you de-provision your server by shredding the BMC and wherever it stores the passwords?
RE: patching -- again, read the paper. It's not that you can't patch them; it's that you can't use *your* patches -- a vulnerability found cannot be addressed until the vendor gets around to getting you a fix (if one ever arrives). In addition, the heavy use of OEMs (also noted in the paper) -- a handful of vendors supply the major servers -- means that a problem in the firmware is likely to be a cross-*server-vendor* problem that transcends your typical bug. (I'm sorry if conclusions were drawn without context; I put caveats and such on my page that try to provide it.)
This isn't theoretical: I've gotten root on BMCs and recovered passwords, and I try to provide further details in the paper.
I welcome responses (I'll try to check here, flying cross country in the AM.)
I think every point I see in the comments was explicitly addressed in the paper (if not, let me know!) I try to illustrate that this is categorically not the same set of problems it was always thought to be. If I fail, feel free to let me know, but I'd ask folks to read my perhaps laborious but hopefully telling arguments -- which I try to supply with ample support and detail -- before dismissing my claims.
The one-pager is meant as a teaser for the larger one. One of the primary points I try to drive home is that this is a confluence of a *lot* of different points that individually don't matter, but taken as a whole are (I claim) a much larger toxic mess. I simply couldn't crush all the details into a page, but wanted something one (if convinced, or even worried) could hand to a perhaps less technical but business-savvy person for a different response.
It also doesn't matter if some of your servers in an IPMI group are "safe" (whatever that means) -- if *ANY* are compromised, your entire group can be compromised. And while the BMC has complete and utter control of the server, you simply cannot view any activity on the BMC at all.
When I find a backdoor that allows shell access to the BMC from a major vendor (details TBD; I'm trying to let them patch it before releasing details), I find it more than troubling. These things are designed for remote control and management of servers at a very low level; I think it's utter folly to allow this to continue.
Many thanks to Bruce for putting the pointers here, and for the post-post discussion.
@ dan farmer
"I think every point I see in the comments was explicitly addressed in the paper (if not, let me know!)"
Mostly they seem to be, after a quick read of the HTML paper. Great work on this. I've always suspected the management computers were a security risk. When asked about them for high-security networks, I gave people three options:
1. Disable the port physically & use trusted software on the machine to gather statistics.
2. Put a little gateway between the management computer and the network that only allows authenticated traffic.
3. If you trust the networking gear, use it to restrict and monitor access to them.
I don't trust Cisco et al, hence option 2 existing. However, options 1 and 2 may be impractical for groups with very large numbers of servers. Their default will be option 3. Always tradeoffs...
Re "Correction -- these chips are on server motherboards"
That's actually not correct. This functionality is present in modern "business class" notebook chipsets from Intel; for example, QM67 which can be found in e.g. Lenovo T420.
The technology is called AMT, or Active Management Technology. (Also, Wikipedia has a nice summary which is better than Intel's own).
There have been a number of BMC vulnerabilities in the past -- for instance, for HP iLO3/iLO4.
It's a fairly well-understood risk in the datacentre world, and is why many larger companies are segmenting and controlling the networks these console ports are connected to.
Useful things to know:
* The vendors are fairly responsive to critical vulnerabilities.
* They almost always have a default password (either a vendor default, or the serial number of the machine).
* You can always disable the functionality in the BIOS (though I'm not sure whether this disables power to the BMC chip itself).
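The default-password point is worth automating: the thread already mentions Supermicro's shipped ADMIN/ADMIN, and Dell iDRACs famously ship as root/calvin. A sketch that flags inventory entries still using factory credentials -- the function name is mine and the list is illustrative, not exhaustive:

```python
# A few widely known factory defaults (illustrative, not exhaustive).
KNOWN_DEFAULTS = {
    ("ADMIN", "ADMIN"),   # Supermicro
    ("root", "calvin"),   # Dell iDRAC
}

def audit_credentials(creds):
    """Return the hosts whose (user, password) pair is a known factory default."""
    return [host for host, user, pw in creds if (user, pw) in KNOWN_DEFAULTS]

inventory = [
    ("bmc-01", "ADMIN", "ADMIN"),
    ("bmc-02", "admin", "s0me-l0ng-unique-pw"),
]
print(audit_credentials(inventory))   # ['bmc-01']
```

Remember the article's caveat, though: per the spec the password sits in clear text on the BMC, so even a strong one should be treated as recoverable once a BMC is compromised.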
@q: Mr Farmer's paper does mention AMT, but it seems it is distinct from IPMI:
"Intel launched a similar effort for personal computers called Active Management Technology (AMT) that shares many features with IPMI, but while hazardous I don't personally view it to be as threatening as IPMI."
One of the 'additional reading' entries has an analysis on AMT specifically, which on reading doesn't exactly fill me with confidence.
I'm keen to know more about AMT's vulnerabilities, for several reasons:
* There are many more consumer devices.
* Consumer devices definitely don't have the dedicated management port & infrastructure that has been discussed here so far.
* Consumers aren't going to know how to turn it off, even if it can be turned off.
* Consumers definitely aren't going to manage it, which means default configuration every time.
* They can even be accessed via wireless.
Now if you'll excuse me, I'm going to see if my most recent PC, which I know has AMT, can have it turned off...
@Jeff H: I did a talk at Breakpoint about AMT/ME stuff that might be interesting to you. The short story is that while it does have its share of issues it's much more mature than IPMI and Intel took its security very seriously, especially in later versions. For example, it supports SSL and certificate-based/Kerberos authentication, remote boot/KVM requires user consent and so on. Here's a nice post outlining some of the details on how it's configured: http://www.symantec.com/connect/articles/...
Every time I update the firmware on a server -- whether it is IBM, Dell, or HP -- there is new firmware for which said vendor has provided fixes. I think we are a bit addicted to the conspiracy theories, aren't we? News flash: all this crap is built in China. In fact, I don't know of a single vendor that doesn't have a major portion of their production coming right from our friendly communists across the Pacific. This is another false alarm from a conspiracy theorist.
"If you compromise the server you can change and talk to the network interface to compromise the BMC."
But in order to get there you have to:
a. get onto the server network
b. exploit a vulnerability in a service that gives you root/admin access that is running on a server that is on the server network.
c. then you can get access to the BMC.
But once you have "b" you have lots of options for compromising other systems.
"Often running Linux, the BMC's CPU, memory, storage, and network run independently."
If it's running Linux, and you can't get access to the Linux source code as specifically modified for the BMC on your server's motherboard in order to check for vulnerabilities, your server manufacturer is committing software piracy. Read the GPL and know your rights.
Hmm I'm a little late to this party...
And firstly, to those who are saying this is old news etc.: yes, that is true, but it takes time to compile the information, test it, and so on, so please don't shoot the messenger. Firstly it's not polite; secondly it's a significant disincentive for others to come forward with similar information.
Now as to this "computer within a computer": in one way or another all PCs have had another computer in them since day one (in the keyboard). Some systems have had limited state machines set up as bits of hardware. Almost all standalone modems had a 6502 or equivalent 8-bit CPU built in, and many hard drive controllers had microcontrollers or bit-slice processors built in, as have all hard drives in recent times. Likewise most peripherals, including every real USB device and, as others have noted, batteries and other unexpected but perhaps unsurprising places. And they will continue to appear in more and more places as time goes on and their price drops to cents or less per chip (many real-time clock chips are now actually baby microcontrollers).
Nick P, RobertT, myself, and others have repeatedly said over many years that, from the security perspective, peripherals are as much of a danger to system security as malware, if not more. Primarily because not only do they run "beneath the OS" -- in most cases they run "beneath the CPU" -- and thus control what the main CPU does or does not see.
As a simple rule of thumb, anything that runs beneath the main CPU cannot be audited by the main CPU; thus any kind of nasty can be on there, and more importantly it can access anything the main CPU can, either directly or by getting the main CPU to do it. Thus it will always be able to work around "don't plug in the maintenance port," as will any command-and-control messages sent to it from any other connected system...
Whilst we might not like this, there is little we can actually do about it, as the big companies wish to have fewer people do more work. That is, their stated aim is to reduce costs by (supposedly) making those lucky few who still have jobs "more efficient."
Now, as I'm in the habit of repeatedly saying, you have the general case of "Efficiency -v- Security." That is, unless you really, really know what you are doing at all levels, making a system more efficient makes it less secure. The problem is that nobody these days knows enough at all levels to know what they are doing, so our systems become more insecure with time.
Now as I also say with regular monotony, "technology is agnostic to its use": like a knife, it cares not whether it cuts your food or your throat -- you, however, do. In the same way, in the days before the safety razor you trusted your barber not to "Sweeney Todd" you. One way you built up such trust was to go to the barber with a friend until you both got to know him; then, on going on your own, you would by way of conversation let the barber know that immediately after being shaved you were meeting a friend who knew you were there. Likewise you almost always let your friends and family know you were going to the barber's, and they knew from experience exactly which one it was. Thus the barber, if he had any sense, would know you had marked him, so if you disappeared your friends and family would know where to point the finger.
We call this process "building up trust." Unfortunately it does not always work; that is, past behaviour is no indicator of future behaviour, and con artists rely on abusing trust because they have planned their exit strategy, so they are not there to have fingers pointed at them.
And that's the point: when it comes to all types of cyber-crime, the perpetrator generally arranges not to be there to have the finger pointed at them, or to be safely in another jurisdiction when the authorities come knocking.
For instance, Stuxnet and the code-signing keys: how did the malware writers get hold of them, and are the people who stole them still around to have fingers pointed and their collars felt? Unlikely (unless they were blackmailed etc.).
I've always disliked code signing because it is not a security mechanism, just an at-best attestation method. All it says is that on such-and-such a date a body of code was hashed and digitally signed. Nothing else. Thus steal the key, factor it, add malware upstream of signing, or find a hash collision, and the result will be the same: properly signed code... not audited code, not tested code, not bug-free code or even secure code -- just signed code...
Thus there is a myth, proved wrong repeatedly, that "signed code is good code." It's not.
The advantage of code signing is that finding hash collisions, factoring or stealing signing keys, and getting code into the developers' database should all be very difficult problems.
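The point that a signature attests only to bytes, not to quality, is easy to demonstrate. A sketch using stdlib HMAC as a stand-in for a real signature scheme (the key and payloads are invented): once an attacker holds the key, their payload verifies exactly as cleanly as the legitimate one.

```python
import hmac, hashlib

signing_key = b"stolen-from-the-build-server"   # hypothetical stolen key

def sign(payload):
    """Produce a MAC over the payload -- standing in for a code signature."""
    return hmac.new(signing_key, payload, hashlib.sha256).digest()

def verify(payload, sig):
    """Check that sig matches the payload under the key. Says nothing else."""
    return hmac.compare_digest(sign(payload), sig)

legit = b"useful_update_v2.bin contents"
evil  = b"same update, plus a backdoor"

# Both verify: the signature says only "these bytes were signed with this key"
# -- not audited, not tested, not bug-free.
print(verify(legit, sign(legit)))   # True
print(verify(evil,  sign(evil)))    # True
```

Real code signing uses asymmetric keys rather than a shared secret, but the attestation it provides is the same: possession of the key at signing time, full stop.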
But are they? Unfortunately, in many cases they are not. I've been on a walkthrough of an organisation's development process where, in many respects, they had gone to a lot of trouble to be secure in how they did their code signing via automated processes. Unfortunately, on a little analysis, it all rested on knowing the name of a senior member of the developer or test teams who had admin rights over the code repository, and their remote login password...
Publicly available information gave the names, and thus their predictable usernames, and the organisation's remote access server was vulnerable to having malware put on it, so finding out the passwords via a bit of malware was not going to be difficult... And in one case the password was fairly easily crackable, even though it followed the usual rules for password security.
Security involving humans is hard because by and large we trust other people and we can be persuaded to turn by various human failings.
Whilst eliminating many humans from the loop by technical measures is possible, you can't make the systems "perfectly secure," and thus ultimately there is at least one human (the admin) kept in the loop, making it less secure, etc.
The other solution is of course isolation, or air-gapping the systems entirely, but again, as I worked out and Stuxnet later showed, humans breach the air gap as a simple matter of getting their job done. But as others have noted over and over again, computers in general are only of as much use as the other computers they are connected to. Thus getting not just some but most jobs done requires connectivity.
Is there a solution? Well, no, because perfect security does not exist, but there are mitigations. The first is "eternal vigilance": you really have to not only monitor but also store what comes in and goes out your door. You have to know exactly what is on your systems and where, and have easy but effective ways to verify what's there. You further have to profile the behaviour of the systems and the humans that use them. And these systems are not efficient or easy to use...
At the end of the day "you pay your money and make your choice," and currently those in walnut corridor are choosing short-term small gain over long-term stability and more secure income...
I think most people who run lots of servers are very familiar with this. Administration of many machines involves many trade-offs. These things exist to make machines much easier to manage remotely; they are the equivalent of the serial consoles unix machines might have had attached to a modem pool in the old days.
I think the thing that surprises security people here is that most large computing rooms have a gooey center behind the hard shell. I think this is the real problem. Once an attacker is inside, there is little defense: an attacker who understands the 'enterprise crap' can hop to any other machine.
The problem is not limited to DRAC/lights-out systems; it's essentially the same for backup agents, KVMs, SANs (iSCSI, FC, etc.), and "cluster stuff," which are often designed with the idea that traffic can only come from a special network on which each node is trusted or forgery is impossible.
The problems here are:
1) Security is too fixated on stopping threats from breaching the outer layer, with not enough focus on people hopping between enclaves -- or even acknowledgment of the size of the crust's perimeter versus the volume of the gooey center.
2) Often service architects forget these things are there, because they often have no control over them. The "systems folks" just put them there. It's not a choice whether one is used, just which bad product is used.
It sounds like technology making a full circle again. In the "old days" of mainframes and large minicomputers, they typically had a "front-end" or "console" processor. It would be responsible for loading the initial bootstrap code into memory, debugging, and stuff like that, by being able to access the main computer's memory and having its (own) disk drive and tty.
You can easily tell whether your system has one of these things by watching the BIOS messages on the (console) display when you power the system up. If it's there, it will make itself obvious. We've got a couple of PowerEdge servers at work that have these features. They're not enabled, but they still cause 30+ seconds of delays at firmware boot time. (The RAID controller causes an *additional* 30+ seconds of delays, and there's other stuff too. All of this is before the software boot loader starts. Server hardware takes a lot longer to boot up than consumer hardware.)
@Jonadab: "If it's there, it will make itself obvious."
Nope. Before reading this thread, I would have interpreted "Network IPMI enabled" as an obscure feature, possibly related to bios trying to boot from a variant of ISCSI, thus under my radar.
This is quite an eye-opener. Somewhat reminds me of purpose-built "back-doors" in Chinese grey kit!
The desktop version of this is Intel's AMT. This is sold to medium and large enterprises as a desktop management solution.
The BMC on the systems I use has all sorts of things other than IPMI 2.0 included. It has an embedded web server, along with a proprietary interface running on another port. There is no documentation on this OS (although it can be seen to be Linux-based). The vendor seems to fix bugs in the BMC software every couple of months. More recent systems do seem to let you turn off the network interfaces for this other junk.
I have asked the vendor for their security evaluation of the BMC and they are unable to provide it. This vendor has at least one major design problem with the BMC software.
My lab has a couple of older HP/Compaq 1U rack-mounted servers with iLO. They've got separate Ethernet ports, get their own IP addresses, and we've done very little with them, but they look like they'd be useful if we were running a server farm. You can at least talk to the machine and decide whether it's up or down.
Back when the VAX 11/780 was still cutting-edge, it had a PDP-11-on-a-chip for booting, and an 8" floppy disk, and there were drivers for 4.1BSD that could access the disk. Somebody sent us some data on 8" DEC floppies, and after some thought we decided it was ok to use the boot floppy drive to read them; worked ok. Traditionally you'd use a DECwriter paper terminal on the PDP console to manage the system, though eventually we shifted over to CRTs. (Hint: Don't play Rogue on the console; the ^P gets interpreted as a request to talk to the microcontroller, which asks if you want to reboot the system. But at least we never put a modem on that port.)
Recent Intel chipsets are full of the Intel Management Engine (which appears to be mandatory on all new Intel chipsets), vPro, AMT, and other Intel technologies, herein referred to as the "intel embedded rootkit."
How can you disable Intel AMT and related technologies on your motherboard? Can you simply de-solder the chip that contains the intel embedded rootkit's ARC processor (and firmware, private memory, etc.)?
I am seriously thinking of desoldering the intel embedded rootkit chip on my new motherboard... Could this introduce any unforeseen problems?
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.