Videos and Links from the Public-Interest Technology Track at the RSA Conference

Yesterday at the RSA Conference, I gave a keynote talk about the role of public-interest technologists in cybersecurity. (Video here.)

I also hosted a one-day mini-track on the topic. We had six panels, and they were all great. If you missed it live, we have videos:

  • How Public Interest Technologists are Changing the World: Matt Mitchell, Tactical Tech; Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; and J. Bob Alotta, Astraea Foundation (Moderator). (Video here.)

  • Public Interest Tech in Silicon Valley: Mitchell Baker, Chairwoman, Mozilla Corporation; Cindy Cohn, EFF; and Lucy Vasserman, Software Engineer, Google. (Video here.)

  • Working in Civil Society: Sarah Aoun, Digital Security Technologist; Peter Eckersley, Partnership on AI; Harlo Holmes, Director of Newsroom Digital Security, Freedom of the Press Foundation; and John Scott-Railton, Senior Researcher, Citizen Lab. (Video here.)

  • Government Needs You: Travis Moore, TechCongress; Hashim Mteuzi, Senior Manager, Network Talent Initiative, Code for America; Gigi Sohn, Distinguished Fellow, Georgetown Law Institute for Technology, Law and Policy; and Ashkan Soltani, Independent Consultant. (Video here.)

  • Changing Academia: Latanya Sweeney, Harvard; Deirdre Mulligan, UC Berkeley; and Danny Weitzner, MIT CSAIL. (Video here.)

  • The Future of Public Interest Tech: Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; Ben Wizner, ACLU; and Jenny Toomey, Director, Internet Freedom, Ford Foundation (Moderator). (Video here.)

I also conducted eight short video interviews with different people involved in public-interest technology: independent security technologist Sarah Aoun, TechCongress's Travis Moore, Ford Foundation's Jenny Toomey, Citizen Lab's John Scott-Railton, Deirdre Mulligan from UC Berkeley, ACLU's Jon Callas, Matt Mitchell of Tactical Tech, and Kelley Misata from Sightline Security.

Here is my blog post about the event. Here's Ford Foundation's blog post on why they helped me organize the event.

We got some good press coverage about the event. (Hey MeriTalk: you spelled my name wrong.)

Related: Here's my longer essay on the need for public-interest technologists in Internet security, and my public-interest technology resources page.

And just so we have all the URLs in one place, here is a page from the RSA Conference website with links to all of the videos.

If you liked this mini-track, please rate it highly on your RSA Conference evaluation form. I'd like to do it again next year.

Posted on March 8, 2019 at 2:24 PM • 7 Comments

Comments

Mike Gerwitz • March 8, 2019 7:38 PM

Bruce:

At https://public-interest-tech.com, you state:

I think of public-interest technologists as people who combine their technological expertise with a public-interest focus, either by working on tech policy, working on a tech project with a public benefit, or working as a more traditional technologist for an organization with a public-interest focus.

When I think of technology working in the interest of the public, the first issue that comes to mind is that of software freedom---software that works in the interests of the users rather than the authors of the software. Many issues today related to user privacy and security exist precisely because users have been put into a position where they have forfeited control over their computing. And while software freedom isn't a panacea in itself (there can be malicious free/libre software), many people, myself included, consider it a prerequisite.

Given that, was the free software movement---which began in 1983---not one of the earlier examples of a public interest organization as you describe it? How about listing the Free Software Foundation there? The GNU Project also embodies these ideals.

Wisdom • March 8, 2019 9:04 PM

To me a public interest technologist sounds like someone not smart enough to be a hacker and yet smart enough to know a cushy job when they see one. The type of person who has figured out how to sound intelligent without actually being intelligent. Something to soak up all the people with MAs in computer science.

I love @bruce because he is good hearted and smart but i dont know that one can build a field around unicorns.

Bruce Schneier • March 9, 2019 12:14 PM

@wisdom:

"i dont know that one can build a field around unicorns."

We can't. We need to scale this. We need to scale this a lot.

That's what we're all trying to do.

Anil John • March 9, 2019 3:22 PM

If you plan to hold these types of sessions in the future (and I hope you do, as I fully agree with the need for more technologists to work in government for the public good), do make a point of inviting technologists who currently work in government as a conscious career choice, rather than those who did so in the past or as a short-term 'tour of duty', to share their perspectives on both the opportunities and the challenges that exist in their work.

As a current civil servant who was in the audience at RSA for some of these sessions and watched some others online, it sometimes felt as though many (but not all) of the folks on the panels, while well meaning, were talking at government rather than engaging in a dialog with folks with diverse perspectives to move this area forward, i.e., "I am from the (not-government) and I am here to help you!"

1&1~=Umm • March 9, 2019 3:28 PM

@Bruce:

The 'going dark' issue is a bit more than 25 years old. I've done a bit of digging on the subject, and I'm sure there is more yet to find.

For a while I've known that FBI Director Louis Freeh 'went on a world tour' with it, because it was felt that the US public would not accept the wanted changes. So the idea was to persuade other countries to start down the path, and use them to ratchet things up step by step to the desired level, until a tipping point was reached where 'the US is falling behind' arguments could be used to overcome objections. Whilst it's unclear exactly when 'Going Dark' was thought up as an advertising-slogan-style catchphrase, it's becoming clear that the idea was developing under Freeh's predecessor William Sessions, more famous for his 'Winners don't use drugs' slogan, which started under Nancy Reagan. Apparently it was Sessions who thought the idea would not fly unless a tipping point was first reached, and that this could be done within the normal ten-year tenure of an FBI director. However, various strongly Republican supporters in various US departments echoed George Bush Senior's feeling that Sessions was not partisan enough, and various 'fiddling the expenses' accusations were made at various points. When Bill Clinton came into office, fresh accusations were made and Clinton told Sessions to resign. Sessions refused and was thus sacked. The person who had done quite a lot to undermine Sessions was Deputy Director Floyd Clarke, and Sessions said as much about him often enough that any future career Clarke had in public service was killed off.

With regard to the technology world and the policy world: as you observe, neither is the real world. However, it appears the FBI's aim is to build a third world that is most favourable to them. Then, having obtained draconian legislation in their definition of the world, they would spread it out to not just drastically influence the policy world but directly affect the real world as globally as possible.

The UK, Australian and other legislation you mention is just the sort of thing the FBI was looking for under Sessions and Freeh to act as a tipping point into making the US public compliant. Which is another reason why the European GDPR is so important, as it will enable, amongst others, the US state legislatures to put privacy legislation on the books that will act as a counterbalance to what the FBI and other Federal agencies are up to.

You did, however, miss the point that technology is not just agnostic, but also that complexity affects its ability to perform any function. Put simply, it is the 'Directing Mind' that puts technology to use in any function, and it is an individual's point of view that decides if that function is good or bad. Unfortunately the majority POV is not necessarily the correct one, due to what can best be described as propaganda. Which, as you noted, lobbyists are fairly adept at inventing for any given POV they get paid for.

But importantly, for any given function that technology might be put to, the level of the technology's complexity actively limits the functional level, even though the technology may have broad scope. You could call it the 'easy to use factor': non-technologists cannot use a technology for the functions they want to put it to unless the technology is sufficiently complex that its abilities can be made of use to non-technologists. It's one of the reasons that AI, for instance, is particularly worrying: whilst it can carry out functions better than humans can, it has no understanding in human terms, so it lacks morals and ethics. Thus adding an AI front end as well as back end to a technology makes it usable to those with no morals or ethics of their own, whereas before they would have had to find technologists who not only lacked morals or ethics but had no qualms about doing so, nor any care for future times when they are no longer protected by those who gave them direction.

As for setting up a profession of public interest technologists: I've always had more than a few qualms about such 'closed shops', because even with the best will in the world, history shows us they become hierarchical, with those at the top selecting those under them, thus setting up patronage and other significant failings, not least of which is becoming conservative in outlook and thus stifling progress and, importantly, innovation.

George • March 10, 2019 3:00 AM

1&1~=Umm wrote,

"Thus adding an AI front end as well as back end to a technology makes it usable to those with no morals or ethics of their own, whete as before they would have had to find technologists who also had not just a lack of morals or ethics but no qualms about doing so or carring about future times when they are nolonger protected by those who gave them direction."

If I think from the designer's perspective, Technology and Systems are neither mutually inclusive in scope nor mutually exclusive. A successful system extends beyond technology to encompass the people that use the technology. This certainly goes beyond what is known as the human interface.

A common example of this extension is the user manual. Systems with higher complexity often include standardized procedures that must be learned, and perhaps additional professional certificates required of those users who turn the technological product into a profession. This is common in all aspects of "technological systems."

What is commonly overlooked in discussions of morals and ethics is that moral and ethical beliefs are themselves systematic: rules that are designed and trained. Much like a computer system's set of logic, except the system's architect tailored the rules for social behavior.

Morals and ethics are like vast networks of computer systems that constantly evolve due to environmental behaviors, and sometimes more dogmatically through update patches, IMHO.

Carlton Duston • March 13, 2019 11:28 PM

Mr Schneier,

I watched some of your videos from this RSA event and subsequently downloaded a copy of C.P. Snow's Rede lecture.

This lecture is one of the most interesting and intellectually stimulating documents I have had the pleasure to ingest so far this year. It certainly provides some wonderful mental frameworks.

So thank you for that.
