Data & Society just published a report entitled “Workplace Monitoring & Surveillance”:
This explainer highlights four broad trends in employee monitoring and surveillance technologies:
- Prediction and flagging tools that aim to predict characteristics or behaviors of employees or that are designed to identify or deter perceived rule-breaking or fraud. Touted as useful management tools, they can augment biased and discriminatory practices in workplace evaluations and segment workforces into risk categories based on patterns of behavior.
- Biometric and health data of workers collected through tools like wearables, fitness tracking apps, and biometric timekeeping systems as part of employer-provided health care programs, workplace wellness initiatives, and digital shift-tracking tools. Tracking non-work-related activities and information, such as health data, may challenge the boundaries of worker privacy, open avenues for discrimination, and raise questions about consent and workers’ ability to opt out of tracking.
- Remote monitoring and time-tracking used to manage workers and measure performance remotely. Companies may use these tools to decentralize and lower costs by hiring independent contractors, while still being able to exert control over them like traditional employees with the aid of remote monitoring tools. More advanced time-tracking can generate itemized records of on-the-job activities, which can be used to facilitate wage theft or allow employers to trim what counts as paid work time.
- Gamification and algorithmic management of work activities through continuous data collection. Technology can take on management functions, such as sending workers automated “nudges” or adjusting performance benchmarks based on a worker’s real-time progress, while gamification renders work activities into competitive, game-like dynamics driven by performance metrics. However, these practices can create punitive work environments that place pressures on workers to meet demanding and shifting efficiency benchmarks.
In a blog post about this report, Cory Doctorow mentioned “the adoption curve for oppressive technology, which goes, ‘refugee, immigrant, prisoner, mental patient, children, welfare recipient, blue collar worker, white collar worker.'” I don’t agree with the ordering, but the sentiment is correct. These technologies are generally used first against people with diminished rights: prisoners, children, the mentally ill, and soldiers.
Posted on March 12, 2019 at 6:38 AM •
One attraction of a vein-based system over, say, a more traditional fingerprint system is that it is typically harder for an attacker to learn how a user’s veins are positioned under their skin than it is to lift a fingerprint from a handled object or a high-quality photograph.
But with that said, Krissler and Albrecht first took photos of their vein patterns. They used a converted SLR camera with the infrared filter removed; this allowed them to see the pattern of the veins under the skin.
“It’s enough to take photos from a distance of five meters, and it might work to go to a press conference and take photos of them,” Krissler explained. In all, the pair took over 2,500 pictures over 30 days to perfect the process and find an image that worked.
They then used that image to make a wax model of their hands that included the vein detail.
Posted on January 11, 2019 at 6:38 AM •
Researchers are able to create fake fingerprints that result in a 20% false-positive rate.
The problem is that these sensors obtain only partial images of users’ fingerprints — at the points where they make contact with the scanner. The paper noted that since partial prints are not as distinctive as complete prints, the chances of one partial print getting matched with another is high.
The artificially generated prints, dubbed DeepMasterPrints by the researchers, exploit this vulnerability to match one in five fingerprints in a database whose nominal false-match rate was supposed to be only one in a thousand.
Another vulnerability exploited by the researchers was the high prevalence of some natural fingerprint features such as loops and whorls, compared to others. With this understanding, the team generated some prints that contain several of these common features. They found that these artificial prints were more likely to match with other prints than would be normally possible.
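One way to see why the nominal one-in-a-thousand rate understates the risk: a phone stores several partial templates per enrolled finger, and an impostor print only has to match any one of them. A minimal sketch of that amplification effect (the per-template rate and template count here are illustrative assumptions, not figures from the paper, and it assumes the comparisons are independent):

```python
def combined_fmr(per_template_fmr: float, n_templates: int) -> float:
    """False-match rate when an impostor print is compared against
    n independently stored partial templates: matching any single
    template is enough to unlock the device."""
    return 1.0 - (1.0 - per_template_fmr) ** n_templates

# Illustrative numbers: a 0.1% per-template rate, 10 stored partials.
# The effective rate grows roughly tenfold before any optimization
# of the attacking print even begins.
print(combined_fmr(0.001, 10))
```

DeepMasterPrints goes further by optimizing a single print against the whole population of common features, which is how the researchers push well past this baseline.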
If this result is robust — and I assume it will be improved upon over the coming years — it will make the current generation of fingerprint readers obsolete as secure biometrics. It also opens a new chapter in the arms race between biometric authentication systems and fake biometrics that can fool them.
More interestingly, I wonder if similar techniques can be brought to bear against other biometrics as well.
Posted on November 23, 2018 at 6:11 AM •
Troy Hunt has a good essay about why passwords are here to stay, despite all their security problems:
This is why passwords aren’t going anywhere in the foreseeable future and why [insert thing here] isn’t going to kill them. No amount of focusing on how bad passwords are or how many accounts have been breached or what it costs when people can’t access their accounts is going to change that. Nor will the technical prowess of [insert thing here] change the discussion because it simply can’t compete with passwords on that one metric organisations are so focused on: usability. Sure, there’ll be edge cases and certainly there remain scenarios where higher friction can be justified due to either the nature of the asset being protected or the demographic of the audience, but you’re not about to see your everyday e-commerce, social media or even banking sites changing en masse.
He rightly points out that biometric authentication systems — like Apple’s Face ID and fingerprint authentication — augment passwords rather than replace them. And I want to add that good two-factor systems, like Duo, also augment passwords rather than replace them.
Hacker News thread.
Posted on November 5, 2018 at 10:24 AM •
At least right now, facial recognition algorithms don’t work with Juggalo makeup.
Posted on July 5, 2018 at 7:14 AM •
This acoustic technology identifies individuals by their ear shapes. No information about either false positives or false negatives.
Posted on April 23, 2018 at 7:48 AM •
It’s routine for US police to unlock iPhones with the fingerprints of dead people. It seems only to work with recently dead people.
Posted on March 30, 2018 at 6:11 AM •
Yet another development in the arms race between facial recognition systems and facial-recognition-system foolers.
Posted on March 27, 2018 at 9:35 AM •
It only took a week:
On Friday, Vietnamese security firm Bkav released a blog post and video showing that — by all appearances — they’d cracked FaceID with a composite mask of 3-D-printed plastic, silicone, makeup, and simple paper cutouts, which in combination tricked an iPhone X into unlocking.
The article points out that the hack hasn’t been independently confirmed, but I have no doubt it’s true.
I don’t think this is cause for alarm, though. Authentication will always be a trade-off between security and convenience. FaceID is another biometric option, and a good one. I wouldn’t be less likely to use it because of this.
FAQ from the researchers.
Posted on November 15, 2017 at 6:54 AM •
Embedded in this story about infidelity and a mid-flight altercation, there’s an interesting security tidbit:
The woman had unlocked her husband’s phone using his thumb impression when he was sleeping…
Posted on November 9, 2017 at 2:45 PM •