In April, Cybersecurity Ventures reported on the extreme cybersecurity job shortage:
Global cybersecurity job vacancies grew by 350 percent, from one million openings in 2013 to 3.5 million in 2021, according to Cybersecurity Ventures. The number of unfilled jobs leveled off in 2022, and remains at 3.5 million in 2023, with more than 750,000 of those positions in the U.S. Industry efforts to source new talent and tackle burnout continue, but we predict that the disparity between demand and supply will remain through at least 2025.
The numbers never made sense to me, and Ben Rothke has dug in and explained the reality:
…there is not a shortage of security generalists, middle managers, and people who claim to be competent CISOs. Nor is there a shortage of thought leaders, advisors, or self-proclaimed cyber subject matter experts. What there is a shortage of are computer scientists, developers, engineers, and information security professionals who can code and understand technical security architecture; product security and application security specialists; and analysts with threat-hunting and incident-response skills. And this is nothing that can be fixed by a newbie taking a six-month information security boot camp.
Most entry-level roles tend to be quite specific, focused on one part of the profession, and are not generalist roles. For example, hiring managers will want a network security engineer with knowledge of networks or an identity management analyst with experience in identity systems. They are not looking for someone interested in security.
In fact, security roles are often not considered entry-level at all. Hiring managers assume you have some other background, usually technical before you are ready for an entry-level security job. Without those specific skills, it is difficult for a candidate to break into the profession. Job seekers learn that entry-level often means at least two to three years of work experience in a related field.
That makes a lot more sense, and matches what I experience.
Posted on September 20, 2023 at 7:06 AM •
There are no reliable ways to distinguish text written by a human from text written by a large language model. OpenAI writes:
Do AI detectors work?
- In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.
- Additionally, ChatGPT has no “knowledge” of what content could be AI-generated. It will sometimes make up responses to questions like “did you write this [essay]?” or “could this have been written by AI?” These responses are random and have no basis in fact.
- To elaborate on our research into the shortcomings of detectors, one of our key findings was that these tools sometimes suggest that human-written content was generated by AI.
- When we at OpenAI tried to train an AI-generated content detector, we found that it labeled human-written text like Shakespeare and the Declaration of Independence as AI-generated.
- There were also indications that it could disproportionately impact students who had learned or were learning English as a second language and students whose writing was particularly formulaic or concise.
- Even if these tools could accurately identify AI-generated content (which they cannot yet), students can make small edits to evade detection.
There is some good research on watermarking LLM-generated text, but the watermarks are generally not robust.
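To make the watermarking idea concrete: one well-known family of schemes biases the model toward a pseudorandom "green" subset of the vocabulary at each step, seeded by the previous token; a detector then counts how many tokens fall in their green lists and computes a z-score against the binomial expectation for unwatermarked text. The sketch below is a toy illustration of that detection statistic only, not any deployed scheme; `green_list` and `watermark_zscore` are my own illustrative names, and the hash-based vocabulary partition stands in for the real pseudorandom function.

```python
import hashlib
import math


def green_list(prev_token, vocab, fraction=0.5):
    # Deterministically select a "green" subset of the vocabulary,
    # seeded by the previous token (a stand-in for the scheme's PRF).
    seed = hashlib.sha256(prev_token.encode()).hexdigest()
    ranked = sorted(
        vocab,
        key=lambda w: hashlib.sha256(f"{seed}:{w}".encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])


def watermark_zscore(tokens, vocab, fraction=0.5):
    # Count how many tokens land in the green list seeded by their
    # predecessor, then compare against the binomial expectation for
    # unwatermarked text: z = (hits - f*n) / sqrt(n*f*(1-f)).
    hits = sum(
        1
        for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab, fraction)
    )
    n = len(tokens) - 1
    expected = fraction * n
    std = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / std
```

A high z-score suggests watermarked text; the fragility the post alludes to is that paraphrasing or light editing shuffles tokens out of their green lists and drags the score back toward zero.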
I don’t think the detectors are going to win this arms race.
Posted on September 19, 2023 at 7:08 AM •
Remember last November, when hackers broke into the network for LastPass—a password manager—and stole password vaults with both encrypted and plaintext data for over 25 million users?
Well, they’re now using that data to break into crypto wallets and drain them: $35 million and counting, all going into a single wallet.
That’s a really profitable hack. (It’s also bad opsec. The hackers need to move and launder all that money quickly.)
Look, I know that online password databases are more convenient. But they’re also risky. This is why my Password Safe is local only. (I know this sounds like a commercial, but Password Safe is not a commercial product.)
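Whether the vault lives locally or in the cloud, its security against a stolen-file attack comes down to how expensively the encryption key is stretched from the master passphrase. The sketch below shows that key-derivation step with PBKDF2; it is illustrative only—`derive_vault_key` and its parameters are my invention, not Password Safe's actual format, which uses its own key stretching and the Twofish cipher.

```python
import hashlib
import os


def derive_vault_key(master_password, salt=None, iterations=600_000):
    # Stretch the master passphrase into a 256-bit key using a salted,
    # deliberately slow KDF, so that offline guessing against a stolen
    # vault file costs the attacker `iterations` hashes per guess.
    if salt is None:
        salt = os.urandom(16)  # fresh random salt for a new vault
    key = hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, iterations
    )
    return key, salt
```

This is also why the stolen LastPass vaults are crackable at all: the attacker can run this derivation offline against each vault, and weak master passwords—or low iteration counts—fall first.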
Posted on September 18, 2023 at 7:02 AM •
Two links on how to properly clean squid.
I learned a few years ago, in Spain, and got pretty good at it.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
Posted on September 15, 2023 at 5:08 PM •
Claude (Anthropic’s LLM) was given this prompt: Please summarize the themes and arguments of Bruce Schneier’s book Beyond Fear. I’m particularly interested in a taxonomy of his ethical arguments—please expand on that. Then lay out the most salient criticisms of the book.
Here’s a brief summary of the key themes and arguments made in security expert Bruce Schneier’s book Beyond Fear: Thinking Sensibly About Security in an Uncertain World:
- Effective security requires realistic risk assessment, not fear-driven responses. We need to rationally determine actual threats versus perceived ones.
- Security trade-offs need to be carefully balanced against costs to other values like privacy, freedom, and social/economic impacts.
- Human psychology and biases lead to exaggerated fears that politicians/marketers can exploit, resulting in wasteful and theatrical “security theater.”
- Security is a process that requires ongoing flexibility, not total prevention. We can’t anticipate every threat but can build resilience.
- Analyze risks empirically using evidence, statistics, and probability rather than intuition. Avoid underestimating everyday risks while overestimating novel or sensational ones.
- Reject the temptation to implement oppressive security restrictions after tragic but unlikely events. Focus only on proportional responses.
- Where possible, favor openness and transparency over aggressive data collection or restrictions which erode civil liberties.
- Seek security policies and technologies that enhance society broadly rather than benefit special interests.
Schneier argues for rational, ethical approaches to security aimed at genuine protection while avoiding violations of human rights in the name of safety.
Here are some of the key ethical arguments Bruce Schneier makes in Beyond Fear regarding security policy and practices:
- Utilitarian Ethics – Security measures should aim to provide the greatest societal benefit and well-being for the most people, not just special interests.
- Individual Autonomy – Restrictions should be minimized to avoid excessive infringements on civil liberties and individual freedoms. Surveillance creep risks violating autonomy.
- Privacy Rights – Pervasive monitoring and data collection erode privacy rights and dignity. Citizens are entitled to some degree of opacity.
- Proportional Responses – Reactive policies should weigh costs versus statistically likely benefits. Draconian rules beyond proportionality are unethical.
- Justice and Fairness – Security should not create racial, religious or other profiling biases that punish groups rather than conduct.
- Accountability and Transparency – Ethical security requires checks against excessive power and oversight for responsibility. Secrecy enables abuses.
- Honest Representation – Officials and pundits have an ethical duty not to stoke disproportionate public fears. Security theater manufactures consent.
- Integrity and Character – Leaders should have the wisdom and courage to make difficult trade-offs rather than exploit fears for authority and control.
Overall, Schneier argues that security policies grounded in ethics rather than fear promote human well-being while recognizing inherent trade-offs. Prudence, transparency, and proportionality are essential.
Here are some of the most salient criticisms made against the arguments put forth in Bruce Schneier’s book Beyond Fear:
- Downplays real threats – Some argue Schneier dangerously downplays serious risks like terrorism and cybercrime by calling fears overblown.
- Too theoretical – His risk analysis approach may rely too much on statistics versus real-world uncertainties that defy probabilistic reasoning.
- No easy answers – While citing misguided policies, critics say Schneier provides few actionable solutions or concrete policy guidance.
- Conflicting values – His balancing of security, privacy and liberty involves inherent trade-offs and tough choices he gives little direction on.
- Politically naive – Suggesting special interests drive policy may under-appreciate legitimate national security concerns of elected officials.
- Hindsight bias – Assessing past overreactions is easier than predicting the future threats that may justify caution today.
- No accounting for psychology – Factoring human irrationality into policy is essential, not something to be dismissed as “security theater.”
- Technical focus – The book focuses narrowly on certain technologies while ignoring broader organizational and sociological dimensions.
In summary, critics argue Schneier’s risk analysis orientation is too clinical, abstract, and devoid of political and human realities to transform concrete policy in the ways he intends. His framework may therefore lack sufficient nuance or guidance.
It’s interesting. The summary is pretty accurate, and so are the criticisms. The book was a product of my thinking twenty years ago, and my own thinking has evolved to address those criticisms.
Of course, this only works with older books that the LLM has ingested, and it probably works better with books that have been widely written about.
Posted on September 15, 2023 at 3:12 PM •
Interesting article on technologies that will automatically identify people:
With technology like that on Mr. Leyvand’s head, Facebook could prevent users from ever forgetting a colleague’s name, give a reminder at a cocktail party that an acquaintance had kids to ask about or help find someone at a crowded conference. However, six years later, the company now known as Meta has not released a version of that product and Mr. Leyvand has departed for Apple to work on its Vision Pro augmented reality glasses.
The technology is here. Maybe the implementation is still dorky, but that will change. The social implications will be enormous.
Posted on September 15, 2023 at 7:15 AM •
This is a current list of where and when I am scheduled to speak:
- I’m speaking at swampUP 2023 in San Jose, California, on September 13, 2023 at 11:35 AM PT.
The list is maintained on this page.
Posted on September 14, 2023 at 12:01 PM •
Google removed fake Signal and Telegram apps from its Play store.
An app with the name Signal Plus Messenger was available on Play for nine months and had been downloaded from Play roughly 100 times before Google took it down last April after being tipped off by security firm ESET. It was also available in the Samsung app store and on signalplus[.]org, a dedicated website mimicking the official Signal.org. An app calling itself FlyGram, meanwhile, was created by the same threat actor and was available through the same three channels. Google removed it from Play in 2021. Both apps remain available in the Samsung store.
Both apps were built on open source code available from Signal and Telegram. Interwoven into that code was an espionage tool tracked as BadBazaar. The Trojan has been linked to a China-aligned hacking group tracked as GREF. BadBazaar has been used previously to target Uyghurs and other Turkic ethnic minorities. The FlyGram malware was also shared in a Uyghur Telegram group, further aligning it to previous targeting by the BadBazaar malware family.
Signal Plus could monitor sent and received messages and contacts if people connected their infected device to their legitimate Signal number, as is normal when someone first installs Signal on their device. Doing so caused the malicious app to send a host of private information to the attacker, including the device IMEI number, phone number, MAC address, operator details, location data, Wi-Fi information, emails for Google accounts, contact list, and a PIN used to transfer texts in the event one was set up by the user.
This kind of thing is really scary.
Posted on September 14, 2023 at 7:05 AM •
Make sure you update your iPhones:
Citizen Lab says two zero-days fixed by Apple today in emergency security updates were actively abused as part of a zero-click exploit chain (dubbed BLASTPASS) to deploy NSO Group’s Pegasus commercial spyware onto fully patched iPhones.
The two bugs, tracked as CVE-2023-41064 and CVE-2023-41061, allowed the attackers to infect a fully-patched iPhone running iOS 16.6 and belonging to a Washington DC-based civil society organization via PassKit attachments containing malicious images.
“We refer to the exploit chain as BLASTPASS. The exploit chain was capable of compromising iPhones running the latest version of iOS (16.6) without any interaction from the victim,” Citizen Lab said.
“The exploit involved PassKit attachments containing malicious images sent from an attacker iMessage account to the victim.”
Posted on September 13, 2023 at 7:13 AM •
A new Mozilla Foundation report concludes that cars, all of them, have terrible data privacy.
All 25 car brands we researched earned our *Privacy Not Included warning label—making cars the official worst category of products for privacy that we have ever reviewed.
There are a lot of details in the report. They’re all bad.
Posted on September 12, 2023 at 7:20 AM •
Sidebar photo of Bruce Schneier by Joe MacInnis.