Observing Thinking

Tuesday, May 10, 2016

INDIVIDUAL vs SOCIETAL Security: Deja Vu all over Again

You may have thought that the FBI's lawsuit against Apple was amicably settled out of court on March 21, but you would be wrong. For those of us with less-than-adequate memories, here is a synopsis of the events leading up to the lawsuit.

The FBI wanted to examine the Apple iPhone taken from the San Bernardino terrorist shooter in the hope that it would reveal connections to other terrorists. Unfortunately, the FBI bungled an attempt to hack into the phone, which prevented anyone from logging into it, let alone getting information from it --- except perhaps Apple, who made the phone and could theoretically restore it to its prior state so the FBI could get on with its investigation.
Now comes the tricky technical part. In order to restore the phone, Apple's software engineers would have to create a Trojan Horse: software that appears to be a valid update to the phone's operating system but in fact attacks it, disabling the code that is blocking the FBI and letting them have another crack at getting to its contact list. This process is called "white-hat" hacking (as opposed to "black-hat," which is what the baddies practice).

All the FBI says it wants is the phone back in the same condition it was in before the shooter was killed and his phone captured; then it can get back to work protecting the Homeland. "Not so fast!" replies Apple CEO Tim Cook: if we do that and the code leaks out, then everyone who owns an iPhone (an estimated 64 million people in the US alone as of 2014, not to mention users in other nations) will be at risk from malicious hackers; it will be a HUGE invasion of privacy, and the world will never be the same. So the FBI responds, "If you won't do what we want voluntarily, we will have the courts issue an injunction forcing you to do so." And they did, and the blogosphere exploded with claims and counterclaims about the Right Thing to Do.

Privacy advocates claimed this was like the government getting a search warrant to enter a home, only to encounter a locked safe it had no permission to open. So it asks the safe's manufacturer for a master key, and the manufacturer responds that it has no assurance the master key will be kept safe and not copied, thus compromising the security of its product --- and, consequently, its business.

A CBS poll of the US general public revealed that 50% of the respondents supported the FBI's position, and 45% supported Apple's.


The case never made it to court as the FBI blinked and ostensibly found a white-hat hacker to dig out the information they wanted to examine. (According to the Wall Street Journal, “FBI Paid More Than $1 Million to Hack San Bernardino iPhone”)

So, who’s right? I’ll tell you. I don’t know.

What I do know is that this is not the end but the beginning of the tortuous process of sussing out several thorny issues. One issue is that we have consciously designed a system of jurisprudence that is meant to be slow and deliberate, sacrificing speed for accuracy --- for getting it right. Technology, however, turns that philosophy on its head: we not only want our gadgets to run fast, we want them created fast as well --- if it's not right, we'll make it right in the next version.

Also, in our legal system precedents are important, and you can be sure that law enforcement agencies will continue to press this issue because, for them, security will usually trump privacy.

And finally, beyond smartphones, is there a reasonable expectation of privacy on the Internet or not? When I post to Facebook, certainly not; when I send email, I certainly do expect privacy. This is not yet settled law, and there is an ongoing conversation on the issue (on the Internet, of course --- search on the term "reasonable expectation of privacy").

Pros and Cons of Autonomous Cars: Part 3

We’ve been investigating some of the Pros and Cons of driverless or autonomous cars. So far, we’ve looked at them in terms of Safety/Security, Time, and Money. We have also compared “robot cars” with other autonomous vehicles such as buses and trains. In this column we examine perhaps the thorniest of issues: Ethics.

While you can quantitatively measure Time, Money and Security, most ethical theories can only be evaluated qualitatively. Utilitarianism (simple definition: does the outcome of an action ensure the greatest good for the greatest number of people?) attempts to overcome this problem by weighing costs against benefits; it focuses on the consequences of an action regardless of intention. At the other end of the spectrum is Deontology, or rule-based ethics, where an act is judged right or wrong according to its adherence to a set of rules --- the Ten Commandments, for example. If we use Deontology as our ethical guide, then we focus more on intention than on outcome: if your intention is good, the act is good no matter the outcome.

A nice exercise in applying these two ethical theories is the “Trolley Problem”. (“Trolley” here means what US English would call a streetcar or tram.)
Briefly, here is the scenario: There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice? A 2009 survey shows that 68% of professional philosophers would pull the switch (sacrifice the one individual to save five lives), 8% would not, and the remaining 24% had another view or could not answer. (A surprising result, as I would have expected a much higher percentage of philosophers responding that they could not or would not answer.)

For the Utilitarian, this is a no-brainer. One choice results in five dead, the other in only one. Pull the lever and don’t look back. From the Deontological point of view, it depends on the set of rules you have chosen to follow; if it includes “Thou shalt not kill,” then either action is wrong. The best one can do is modify the rule to “Thou shalt not kill, but if thou must, kill as few as possible,” which puts one on the slippery slope to Utilitarianism. Some problems have no solution. Sigh.
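The contrast between the two theories is stark enough that it can be sketched as toy code. This is purely an illustration of the reasoning above --- not any real autonomous-vehicle software --- and the function names and rule set are my own invention:

```python
# Toy sketch: how a utilitarian rule and a simple deontological rule
# would "decide" the trolley problem. Purely illustrative.

def utilitarian_choice(deaths_if_nothing, deaths_if_pull):
    # Utilitarianism weighs outcomes only: choose the action
    # that results in fewer deaths, regardless of intention.
    return "pull" if deaths_if_pull < deaths_if_nothing else "do nothing"

def deontological_choice(rules):
    # Rule-based ethics: if "do not kill" is among the rules,
    # actively pulling the lever is forbidden no matter the outcome.
    if "do not kill" in rules:
        return "do nothing"
    return "no rule applies"

print(utilitarian_choice(5, 1))               # the utilitarian pulls the lever
print(deontological_choice({"do not kill"}))  # the deontologist does not
```

Note how the utilitarian function never looks at the rules and the deontological one never looks at the body count --- which is exactly why the two theories can disagree on the same scenario.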

For our purposes, imagine a driverless car in this situation: a child darts out in front of the car; it’s too late for the car to stop, but if it swerves it will wipe out five pedestrians. Do we let the autonomous car make the call, knowing full well that the decision is embedded in its software, and that software was ultimately written by a team of programmers who, after all, are only human? Some say these cars should have a human override (like the emergency stop cord on a train); others say to trust the software --- it’s been tested (when was the last time you were tested?) and can react much faster than you.

And what about letting the car break the rules when necessary? I usually give bikers a wide berth, even crossing the double line when it’s safe. Would a robot car be programmed to do the same?

All of this raises the question of responsibility. It’s your car, either owned or leased, and you better have insurance that covers situations like this. But what about the car manufacturer, and what about the programmers who wrote the software --- are they also liable?

If the thought of driverless cars scares you, consider the possibility of autonomous armed drones where their goal is not safety but destruction. For better or for worse, science fiction is rapidly becoming fact.
