Hacking Fitbit

This is impressive:

"An attacker sends an infected packet to a fitness tracker nearby at bluetooth distance then the rest of the attack occurs by itself, without any special need for the attacker being near," Apvrille says.

"[When] the victim wishes to synchronise his or her fitness data with FitBit servers to update their profile ... the fitness tracker responds to the query, but in addition to the standard message, the response is tainted with the infected code.

"From there, it can deliver a specific malicious payload on the laptop, that is, start a backdoor, or have the machine crash [and] can propagate the infection to other trackers (Fitbits)."

That's attacker to Fitbit to computer.

Posted on October 22, 2015 at 1:20 PM • 21 Comments

Comments

Kollin • October 22, 2015 6:44 PM

What I don’t understand is the company’s lack of response to earlier vulnerability reports from researchers at two different universities in 2013 and early 2014, and/or the company's lack of internal controls to capably discover and mitigate possible breaches:

From 2014: http://courses.csail.mit.edu/6.857/2014/files/17-cyrbritt-webbhorn-specter-dmiao-hacking-fitbit.pdf

“This report describes an analysis of the Fitbit Flex ecosystem. Our objectives are to describe (1) the data Fitbit collects from its users, (2) the data Fitbit provides to its users, and (3) methods of recovering data not made available to device owners.

Our analysis covers four distinct attack vectors. First, we analyze the security and privacy properties of the Fitbit device itself. Next, we observe the Bluetooth traffic sent between the Fitbit device and a smartphone or personal computer during synchronization. Third, we analyze the security of the Fitbit Android app. Finally, we study the security properties of the network traffic between the Fitbit smartphone or computer application and the Fitbit web service.

We provide evidence that Fitbit unnecessarily obtains information about nearby Flex devices under certain circumstances. We further show that Fitbit does not provide device owners with all of the data collected. In fact, we find evidence of per-minute activity data that is sent to the Fitbit web service but not provided to the owner. We also discovered that MAC addresses on Fitbit devices are never changed, enabling user-correlation attacks. BTLE credentials are also exposed on the network during device pairing over TLS, which might be intercepted by MITM attacks. Finally, we demonstrate that actual user activity data is authenticated and not provided in plaintext on an end-to-end basis from the device to the Fitbit web service.”
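The never-changing MAC address in the abstract above is worth dwelling on. A hypothetical sketch (the data and function names are mine, purely for illustration) of why a static broadcast address enables user-correlation attacks: any passive sniffer that logs sightings at multiple locations can link them to one device, and in practice one person.

```python
# Hypothetical sketch: why a never-changing MAC enables user correlation.
# A passive sniffer at several locations logs (location, mac) sightings;
# a static address lets the observer link the same wearer across sites.
from collections import defaultdict

def correlate(sightings):
    """Group sighting locations by broadcast MAC address; return only
    MACs seen at more than one location (i.e. trackable devices)."""
    seen = defaultdict(set)
    for location, mac in sightings:
        seen[mac].add(location)
    return {mac: sorted(locs) for mac, locs in seen.items() if len(locs) > 1}

sightings = [
    ("gym", "C4:1A:00:11:22:33"),
    ("office", "C4:1A:00:11:22:33"),   # same static MAC reappears
    ("gym", "F0:99:AA:BB:CC:DD"),
]
print(correlate(sightings))  # {'C4:1A:00:11:22:33': ['gym', 'office']}
```

The defense, address randomization, is exactly what later wearables and phones adopted.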

From 2013:
https://gigaom.com/2013/04/24/keeping-fitbit-safe-from-hackers-and-cheaters-with-fitlock/

“The fusion of social networks and wearable sensors is becoming increasingly popular, with systems like Fitbit automating the process of reporting and sharing user fitness data. In this paper we show that while compelling, the careless integration of health data into social networks is fraught with privacy and security vulnerabilities. Case in point, by reverse engineering the communication protocol, storage details and operation codes, we identified several vulnerabilities in Fitbit.” (abstract link in attached article)

Spaceman Spiff • October 22, 2015 7:22 PM

Why I don't buy such cruft... No security. No thought of security. You might as well put out a sign that says "Hack Me!"...

JRE • October 23, 2015 2:05 AM

The real problem is that the fitbit puts your personal data on some corporate server in the first place. Why do they do that? Your smartphone has enough flash to store the truly trivial amounts of data locally, and more than enough oompf to process it locally.

FatButt • October 23, 2015 2:12 AM

FitBit... hmm... how about the marketing division tries:

- HackKit
- SubMit
- BullShit
- PerMit
- BushTit
- TransMit
- UnFit (for sale)
- ToiletKit
- LoseIt
- TakeAHit
- ReverseSplit
- UnholyEmit
- EveryBit (and byte)
- WholeFuckingKit
- HomeBaseHit
- StasiBefit
- MessyShit
- AskingForIt
- BitByBit
- SecurityOmit; and
- FuckedOnTheFaceOfIt

Can the muppets producing peripherals and cellphones take security seriously, for one f&*king second, please?

You are sitting on record amounts of corporate cash, so do something useful, instead of rewarding investors with share buybacks/record dividends.

You are driving the 5% who care about privacy/security back to the 1970s, because we simply don't trust, and won't buy, ANY OF YOUR SHIT.

ianf • October 23, 2015 2:47 AM

@ JRE “the fitbit puts your personal data on some corporate server. Why do they do that?”

Ah, well, that's the whole point of there being a FitBit in the first place: it's called leveraging the data to create a parallel revenue stream, by selling amassed, nominally depersonalized (but in all probability "cookie'd") data collections of a given gender/exercise length & frequency profile to the highest bidder, most probably Google and Bing. Not possible without it going through a server to begin with. Then again, those who feel the need for ongoing quantified confirmation of their fitness, come-what-may-privacy-be-damned, deserve being sold as prize fit chattel down the river.

rgaffOctober 23, 2015 3:03 AM

@ FatButt

"Can the muppets producing peripherals and cellphones take security seriously"

NO!!! ABSOLUTELY NOT!!!

Why? The reason is very simple: Adding security costs a lot of money, without a balancing gain in profit. 99 times out of 100 there is nothing more sinister going on than that simple fact at work. As long as this is true, NO company CAN AFFORD TO ADD SECURITY.... doing so means BANKRUPTCY!

How can this be changed? Well, here:

(1) Make it more expensive NOT to add security, for example:
- boycotting the devices (but beware, getting enough people to do it with you to make an impact is hard!)
- giving them a giant PR disaster more often over it so people associate a negative image or feeling with them (it works just like a negative advertisement)
- regulations to force it (this is kind of heavy-handed though, and not likely when our governments love insecurity in everything so they can all wipe out human rights more easily)
- somehow convert a few of the other 95% to care about security... (this is slow going, but possible)

(2) Make adding security cheaper for them, for example:
- somehow security-thinking needs to be made more efficient, so it's easier to build in...
- it can't require one or two of a couple dozen super-pros in the world on your team to get it right, it needs to be more common and accessible.
- can there be new design methodologies invented and promulgated that incorporate the above?
- we really need to stop our governments interfering with security standards and practices, trying to make everything weaker!

Sorry dude, 5% just ain't enough of a market to care about, when you can make so much more catering to the other 95%... Unless it's your specialty to cater to the 5%, but are YOU PERSONALLY willing to pay a few thousand bucks for a little widget that you don't really need that everyone else gets for a few dollars or free? Yeah, that's how big the cost difference is for real security, that's the problem! But I do think there are improvements that can be made, just they're going to be a lot smaller than we wish... at least for quite a while from what I've seen so far. But hey, anything's better than nothing!

ianf • October 23, 2015 3:55 AM

BTW. Couldn't such a primed-for-exploitation FitBit be made to look like an ordinary wristwatch band overprinted with some brand's logo, then "sprinkled out" as trade-fair or conference swag, and subsequently made to respond to any previously infected WiFi router's surreptitious "wake-up call?" If so, that would seem to me an even less detectable infection trigger path than that used by Stuxnet to invade air-gapped Natanz through presumably several compromised USB pen drives somehow subverted and planted[sic!] in that enrichment plant's engineers' pockets.

BTW. The BBC/Open University "Cybercrimes" TV series reported that some unnamed US govt agency conducted an experiment of dropping infected USB thumb drives in governmental parking lots. A full 90% of these were then plugged into computers at work. One would have thought that protecting state computers from that simplistic attack vector would be the easiest of all: only install computers with no, or plugged-up, USB ports. But I suppose that would have been too easy a solution and, most probably, would open the window to complaints of govt corruption for "not buying from the lowest bidder."

[This reminds me: as a junior employee, I was to assist my boss with a presentation of our institution's competences to a client prospect at a multinational industrial concern. We brought a VHS cassette with a ten-minute demo produced at great cost by a professional ad agency. It turned out that, several years earlier, the client had decided that in order to stave off internal "video equipment wastage," they would use only the SONY Betamax system. Consequently there was not a single VHS machine in the entire sprawling complex. The boss, having only practiced his schtick in the context of the just-shown demo video, then still a novelty, was red-faced speechless. I'd have solved it by dispatching "the junior" to a home electronics store in a nearby mall to buy a VHS unit; we could use another machine back at home. But the boss had run out of ideas, could but mumble, and dumped the presentation on me. We didn't get the contract; you know who was to blame.]

blake • October 23, 2015 4:31 AM

@ianf

> some unnamed US govt agency conducted an experiment of dropping infected USB thumb drives in governmental parking lots. A full 90% of these were then plugged in to computers at work

I'd be interested in knowing how many of those 90% knew not to open email attachments from unknown senders. I'm guessing most (depending on when the study was conducted), and they somehow didn't realise that plugging in a random USB was basically the same thing.

ianf • October 23, 2015 6:04 AM

@ blake,

You know as well as I do that the only way to find that out would be to couple the offending USB-response IP addresses with the primary users of those terminals (or all in the room?), and then send them innocuous-looking mails with a control-infected attachment. An impossible task without a sizable "test dickheadedness of govt employees" research grant… NOBODY WOULD WANT TO SPONSOR THAT ;-))

I suspect that that percentage would be much lower now, however, given that people have by now been exposed to the rogue-attachment attack vector. It has even gone overboard… people are scared of clicking on any attachments (which in itself may not be such a bad thing, as it tells the senders that they could say the same thing in plain text), or even of opening mail from unrecognized people (in itself a golden opportunity for socially-engineered fake mails from pre-validated senders).

Then there are the unfortunate side effects that never harmed anybody, but are no longer viable. I used to modify the name field of my email for a running gag or commentary on a case-by-case basis (e.g. From: "You Don't Wanna Read This" <me@here.etc>). It is, and per RFC822 specifications should be, possible, though rather complicated in Gmail; it was much easier in Eudora. But I don't do this any more, because mail forwarders check the entire originator field for SPAM, rather than just the address portion. Thus my mails kept ending up in recipients' spam folders, and I was then unable to persuade them over the telephone that it was safe to move them to the Inbox and read them there (there were no attachments). Some were PARALYZED WITH FEAR, and, anyway, even if they'd done that instinctively every day, would not grok over-the-phone instructions on how to move a message from one folder to another. So there I stand, one more casualty of the Spam Wars left by the side of the Literary Road, while the Mail Caravan moves on STOP THAT METAPHOR DEPT.

Clive Robinson • October 23, 2015 9:33 AM

@ rgaff,

- it can't require one or two of a couple dozen super-pros in the world on your team to get it right, it needs to be more common and accessible.
- can there be new design methodologies invented and promulgated that incorporate the above?

Unfortunately it does need one or two super-pros... But they don't have to be on your team, just on a specialised team.

I've mentioned this before as part of the Castle-v-Prison discussions on this blog.

The first step is to realise that programmers come in all shapes, sizes and importantly abilities. Further that these abilities are usually, of necessity, focused in one very small area within the field of endeavour that programming covers.

The second step is to realise that this problem is far from new, in fact it's been around for so long that there are few alive who can remember back to when it was first identified and solutions were developed.

From a practical perspective the easiest to see this solution historically is not computer security but Operating Systems.

The whole reason for OS existence is to provide non system focused programmers with a stable abstraction view of the computer. The fact that it also abstracts away a whole load of other issues such as rudimentary computer security is just a big pile of cherries on top.

The third step is to realise that most of application development is at best idiotic, due to long-dead historic resource issues. It's been known for years that the higher level a programming language is, the more productive a programmer is, and the fewer mistakes they make, or for that matter can make. Bad as that is, C is still seen as the "Man's way to Program", when Lisp and similar are way, way more productive and less error-prone.

The fact that the "C way" is still seen as the way to go should be ringing alarm bells. Even back in the early days of *nix this was known to be bad, and prototype development was done using shell scripting and small utility programs. ONLY if a shell-script prototype was not fast enough on the then-limited resources would a rewrite in C be countenanced.

When you put these three steps together, it can be seen that a very high level strongly typed scripting language is the way to go not just for productivity but security as well. Application programmers would effectively script utilities into applications. Whilst the few super-pros, write the secure utilities the application programmers script.
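A minimal sketch of the model Clive describes, under my own assumptions about what the "utility" layer looks like: the security-critical primitive (here, the standard library's scrypt password hashing and constant-time comparison, written and vetted by specialists) is merely composed by the application programmer, who never touches the dangerous internals.

```python
# Sketch of "application programmers script utilities the super-pros
# wrote": the app layer only composes vetted primitives (hashlib.scrypt,
# hmac.compare_digest) instead of hand-rolling crypto in C.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using the stdlib's vetted scrypt KDF."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("Tr0ub4dor&3", salt, digest)
```

The application code here has no opportunity to get the memory handling, timing behaviour, or KDF internals wrong; those decisions live one abstraction level down, exactly as with an OS.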

Whilst this will not solve all security issues --nothing can-- it moves the bar up very significantly whilst also having the benefit of making application programmers more efficient.

As with the original OS abstraction, if done properly there are a whole load of other cherries that can go on top of this abstraction as well.

Now I know some people will howl about the loss of control / efficiency / compactness of code / speed of code / etc, but you have to take a step back and say objectively are these arguments really valid most of the time these days? The honest answer is "no". Thus as you can see there are ways the expertise of the very few super-pros can be effectively deployed across the whole application development industry.

Thus, aside from heel-dragging, the real arguments against this are entrenched "business models", which I, like many others, had hoped F/OSS would have resolved.

Some Anon • October 23, 2015 11:20 AM

@Clive Robinson:

Writing secure code is hard in any language. The nature of C makes it even harder, I agree, but using something that's not C is no guarantee.

One, your abstraction layer can be full of holes. Look at Java. Strong typing, array bounds-checking, no mucking around with pointers, garbage-collected so no use-after-free problems, everything's contained within the JVM.... What could go wrong? Quite a lot, as it turns out.

Two, more importantly I think, whatever you write in, it's still up to the programmer to code defensively and think carefully about the logic they're implementing. We've known how to stop SQL injection for many years now and it still happens because coders forget to validate input. They're thinking about how to make their thing work, not how someone else might break it.

And the cynic in me wants to point out that programmer productivity doesn't necessarily promote security. Writing secure code means a lot of stop-and-think time, a lot of testing, and a lot of other stuff that is not implementing new features. Which is, of course, why we keep seeing insecure products being released all over. Like rgaff mentioned, since it lengthens your time-to-market and isn't something that most consumers know or care much about, it's something that dev teams everywhere will skip or gloss over if they possibly can.

This isn't a problem of people using the wrong operating systems or languages or tools. It's an incentives problem. I suspect the thing that's most likely to work is some kind of (heavy) civil liability for security breaches. Probably percentage-of-revenues or something, or else companies will just treat it as a cost of doing business. Knowing government though, it'll just dissolve into a meaningless box-ticking exercise, where the law mandates you do X, Y, and Z, and things stay just as insecure but everyone says "We're not responsible, we did X, Y, and Z!"

rgaff • October 23, 2015 1:24 PM

@Clive Robinson

"I've mentioned this in the past as part of the Castle-v-Prison discussions"...

Not everyone has time to read everything, so thank you for taking the time to re-share some of it :)

So are you really saying that us more lowly security commoners need to wait for better properly securely-designed OSes to be more finished? http://www.genode.org looks promising at least until we get proper more secure hardware designs someday...

Also, if there are only a dozen real super-pro security gurus in the world, we can't really expect them all to devote all their free time to secure FOSS fundamental building blocks for others to use... which is what it would take to see much out of them in that area in the next decade... This has to be made more accessible to a wider audience for us to see something more. Having so much dependent on so few is a disaster.

@Some Anon

That's not the cynic in you, that's the realist in you. Programmers must produce, or the company goes under. There are deadlines that must be met at any cost... and the cost is always dumping things like basic testing and security.

Yes, making companies liable for the crap they produce would be helpful. But the most frustrating thing is when the government IS AGAINST fixing things, because they WANT THINGS TO BE MORE INSECURE so they can spy on us all better and destroy human rights more easily!!! What the heck! It's so backwards! And this is human nature: to oppress others, to promote self, the more power you get the more dictatorial you become. That's why any "just trust us" must always be met with "no, never, not in a million years!!" Be open or go home.

PQ • October 23, 2015 1:55 PM

@Some Anon, @rgaff, @Clive Robinson
Look to our colleges and universities when hunting for the source of the problem. Very few address security in the early programming courses, when habits are established. The teachers had their habits formed 10-30 years ago, and many still believe in the "C way" as a mechanism to teach fundamentals of computing while avoiding assembly language.

As a product manager, I work with a bunch of engineers who regard SELinux as something that gets in the way of developers. They'll quit rather than build a product that includes SELinux. We have drawn-out arguments about minimizing attack surfaces by shutting down unused ports. Audit logging is an afterthought. When I model the system using the Microsoft threat modeling tool, they look at me as if I have two heads. We have managed to incorporate vulnerability scanning into our processes, but penetration testing is still an uphill battle. Yours truly scans the US-CERT bulletins for anything that could impact our products, because it's just not something that the average developer has been conditioned to care about.

Peanuts • October 23, 2015 6:53 PM

Fitbit doesn't have a chief information security officer. They probably don't have a code review process beyond getting the code to submit to their respective app stores. They probably don't pen-test their iOS or Android applications. They probably don't have a web app firewall on their main site or web services. Their cyber threat program probably consists of antivirus and patching, like 99% of all other corporations: just a mark waiting to be hacked or bought.

I think they have a superior device: more precise and accurate than Apple's, with better battery life than their competitors'.

But from an application design perspective, they take the attitude that they own the data; they host it in the cloud to make it available, whatever the technical issues.

The cloud has absolutely no reason to access the users' data. They don't understand the world in which they're deployed; if they did, I'll bet the data would be encrypted, the encryption key would not be shared with the cloud, the wireless fingerprint of every device would be identical, and, like other mobile device providers, they would randomize the Bluetooth address signatures on every sync.
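The address randomization asked for above is not exotic. A sketch, assuming the Bluetooth Core Specification's "static random" address convention (46 random bits with the two most significant bits of the address set to 1); rotating such addresses defeats tracking by a fixed MAC.

```python
# Sketch of per-sync Bluetooth address randomization. BLE "static
# random" addresses are 46 random bits with the two most significant
# bits set to 1 (per the Bluetooth Core Spec addressing convention).
import secrets

def random_static_ble_address() -> str:
    addr = bytearray(secrets.token_bytes(6))
    addr[0] |= 0b11000000  # mark as a static random address
    return ":".join(f"{b:02X}" for b in addr)

print(random_static_ble_address())  # e.g. 'D3:5F:0A:91:2C:77'
```

Real trackers would use the spec's resolvable private addresses (derived from a shared identity key) so that paired devices can still recognize them; this sketch only shows the cheap end of the idea.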

They need to identify their web app and OWASP-type exposures in parallel with hiring the CISO and onboarding at least 4 teams of 3-7 security SMEs: code auditors, pen testers, WAF specialists who can code, embedded-system architects, and business analysts to identify critical security requirements.

They are fresh meat, and their C-level execs need to resign, hire, or prepare for jail time for effectively being too stupid to be free, e.g. criminal negligence and defrauding shareholders by not hiring experts with relevant experience who could produce auditable evidence of reasonable due diligence.

God help them. Look at who they are, http://www.fitbit.com/about : big and clueless enough not to have anyone with a title that includes "security".

Corey Nachreiner • October 23, 2015 8:46 PM

RE: "This is impressive"... "That's attacker to Fitbit to computer."

Except that it is not proven possible AT ALL by this research. This story came out before the actual presentation. Here's what is true...

The researcher did find a way for an attacker to easily inject 17 bytes of controllable data into a fitbit device, AND the fitbit will repeat those 17 bytes (via Bluetooth) to devices (computers or mobiles) it connects to.

HOWEVER, she did not connect the dots and actually exploit this to get malware from the fitbit to a computer. In fact, I think it's highly unlikely that you could with this in the real world.

Let's explore. She controls 17 bytes of data on this victim fitbit. What would she need to turn that into bad stuff happening on a computer (or mobile)?

1) She needs a vulnerability that is specifically triggered by parsing the 17 bytes of data she controls. So I suspect you'd need to find a very specific vulnerability in the Fitbit app on the computer or mobile, or somewhere in the Bluetooth stack. She did not find such a flaw as far as I can tell... so that 17-byte message she has on a fitbit doesn't do anything yet.
2) For the sake of argument, say she finds a vulnerability somewhere that's triggered by the specific 17 bytes in the Bluetooth packet she controls. What needs to happen to turn that into something bad, like malware or a trojan, on the computer? A few things need to happen for a code execution flaw. First, a certain portion of your controllable attack space (17 bytes) needs to trigger the flaw. Second, a certain portion needs to manipulate memory pointers and registers in a way that gets you EIP control. Third, a certain portion needs to load shellcode...


17 bytes is not enough. The research pointed out an old 4-byte crash bug... but a crash bug is far from code execution, or trojan download and installation. The smallest /bin/sh shellcode I've seen is 34 bytes... and this would need more than that to deliver malware.
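The capacity argument above is simple arithmetic, sketched here using the figures as given in this comment (the numbers are illustrative, not measured):

```python
# Back-of-the-envelope check of the exploit-capacity argument:
# 17 attacker-controllable bytes vs. a ~34-byte minimal /bin/sh
# shellcode, the figure cited above.
CONTROLLABLE_BYTES = 17
MIN_SHELLCODE_BYTES = 34   # smallest /bin/sh payload cited in the comment

def payload_fits(controllable: int, shellcode: int) -> bool:
    # Real exploitation also spends bytes triggering the flaw and
    # steering control flow, so shellcode size alone is a lower bound.
    return controllable >= shellcode

print(payload_fits(CONTROLLABLE_BYTES, MIN_SHELLCODE_BYTES))  # False
```

Even granting the attacker a multi-stage loader, the trigger and pointer-control bytes come out of the same 17-byte budget, which is the crux of the objection.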

In short, this is good research... and we need to consider the attack vector where threats jump from one device to another... this is a reality between computers and mobiles... But in this case, the research did NOT actually get malware from a fitbit to a computer, and the PoC shown is nowhere near doing that, IMHO.

rgaff • October 24, 2015 12:41 AM

@ PQ

Sounds like you should LET THEM QUIT and hire new people if you want a remotely secure product...

In my personal experience I haven't seen threats to quit over making something secure, but I've seen lots of people wanting and wishing for more security and better practices; then deadlines and pressures take over... The bottom line always wins, if you don't want to simply go bankrupt and have everyone lose their jobs that way.

Rats Ass • October 26, 2015 2:01 PM

I think it would be great if someone would hack my FitBit and give me 1,000,000,000 steps. I wonder how many Vitality points I would get?

As for delivering malware to a laptop during the sync process, it seems a little far-fetched to me...

Also, why go through all the pain and trouble of this, when it's easier to send a phishing email and social-engineer them?

Clive Robinson • October 26, 2015 4:58 PM

@ Rats Ass,

"Also, why go through all the pain and trouble of this, when its easier to send a phishing email and social engineer them."

The principle of low-hanging fruit has a side effect: as the lowest-hanging fruit gets eaten, what was once not low-hanging fruit becomes the low-hanging fruit...

Phishing email will at some point cease to be as effective as it is currently supposed to be. But even before then, it's possible somebody will work out how to successfully deliver malware with these devices. If and when that becomes known, it may well be easier than other attack vectors, and so become the new low-hanging fruit by default.

As Bruce has noted with fixed objects like algorithms in the past, attacks do not get harder with time...

For attacks to get harder, those being or likely to be attacked, have to actively make changes to block the attacks. To make the right changes you have to become aware of the classes of attack vectors you are going to block.

Silence Dogood • December 31, 2015 10:41 AM

In the '90s, TRW (now Experian) said that if they had your height and weight, you could live under an alias and they could still find you based on your spending patterns. My concern is that this data will next be used by your insurance company to increase your premiums. They already increased them based on your credit score, because they claimed accident rates were tied to credit history. I can only imagine how being out of shape will affect your rates. This data is collected by big companies for their profit, not your benefit. Consumers are simply a resource to be exploited. Data collection is the business these days, and toys like fitbit exist only as an excuse to capture your data. In the best case, you get pretty graphs of how many steps you take each day. In the worst case, all your personal data is used against you. Risk/reward analysis, anyone?

James Bombardier • February 11, 2016 8:56 PM

I would just like to add that I am returning my Fitbit Surge after about a week. It took that amount of time to determine that something coming from Fitbit is corrupting my browser... not my job to figure out what, but I sent them an email with a return request.

So, I would add that another avenue of entry is for the hackers to compromise Fitbit itself; then they can corrupt users at their leisure.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.