Observing Thinking

Thursday, December 15, 2011

December 15, 2011 Social Media: Part One

Last month I attempted to rebut the view that texting is morally objectionable. This time I want to present a more balanced analysis and discuss the issue more broadly. The broader issue subsumes texting and is generally referred to as Social Networks or Social Media. There is much to say about the effects of social media on society and vice versa. In fact, there is so much that it will not fit into one column. So, rather than omit every other word, I’ve decided to deliver this column in two installments.

What exactly is meant by the term “social media”? We might already know that “media” is the plural of “medium” and that we are probably talking about media like print (e.g., books, magazines and newspapers), TV, radio and the Internet. Further, to count as social media, the medium must somehow facilitate social interaction.

I find that classifying social media into two categories, “One-Way” versus “Two-Way” communication, is a useful way to understand them. For example, print, TV and radio are One-Way communication media because their content is broadcast to us from a single source and we usually cannot interact with it (except very slowly, as with Speakout and Letters to the Editor).

On the other hand, Two-Way communication allows us to respond via the medium to its content, which may be represented by text, sound or images. Examples include the telephone and the Internet, and within the medium of the Internet we have email, blogs and chat. Two-Way communication allows many-to-many connections, while One-Way allows only one-to-many connections and so is perceived as less useful in the world of social media.

“What about Facebook, Twitter, LinkedIn and Google+?” you may well ask. All of these are quintessential social networking sites: mashups of email, blogs and chat (not to mention video games and advertisements) that allow many-to-many communication. However, as my granddaughter has pointed out to me, “Twitter is like Facebook except you update your status every few minutes rather than every few days…” She uses Facebook regularly and Twitter not at all. She is also thirteen going on twenty-five, so if she is representative of the upcoming generation it would be wise to see the future through her eyes.

Blogs I would rate as 1.5-Way and Letters to the Editor as 1.25-Way, because the factors that determine a medium’s place between One-Way and Two-Way communication are the quantity and the speed of the information exchanged.

None of the aforementioned media exist to make society flourish, or even to entertain us. Those may be side effects, but the main goal is simply to make money for their owners or stockholders. If they cannot do that one thing, they perish.

To make matters even more complicated, “media” has a different, more specific meaning in the world of computers, where it refers to memory hardware for data storage and retrieval. Thus, computer media examples would be hard drives, memory sticks, CDs and DVDs.

On a much deeper and philosophical level, Marshall McLuhan back in the sixties proclaimed that “The Medium is the Message” and although today we tend to call the “message” the “content”, the gist of what he meant was that the medium has a greater effect on society than the message itself. Again, from Wikipedia:

“McLuhan understood ‘medium’ in a broad sense. He identified the light bulb as a clear demonstration of the concept of ‘the medium is the message’. A light bulb does not have content in the way that a newspaper has articles or a television has programs, yet it is a medium that has a social effect; that is, a light bulb enables people to create spaces during nighttime that would otherwise be enveloped by darkness. He describes the light bulb as a medium without any content. McLuhan states that ‘a light bulb creates an environment by its mere presence.’”

And, from another source: (http://individual.utoronto.ca/markfederman/article_mediumisthemessage.htm)

“McLuhan defines medium for us as well. Right at the beginning of Understanding Media, he tells us that a medium is ‘any extension of ourselves.’ Classically, he suggests that a hammer extends our arm and that the wheel extends our legs and feet. Each enables us to do more than our bodies could do on their own. Similarly, the medium of language extends our thoughts from within our mind out to others.”

In the next column we’ll take a look at the pros and cons of social media.

Sunday, November 13, 2011

November 13, 2011 Is Texting Evil?

In the Parade magazine that came with your Oct. 9 Press-Republican was an interesting article, “Generation Wired” by Emily Listfield, in which she examines how children are being affected by wired (and wireless) technology such as smartphones, music players, video games, the Internet and the various permutations and combinations of all of them.

To quote the cover, “Being connected 24/7 is changing how our kids live. And it may even be altering their brains.” This raises the important question: is it altering our brains in a positive way, a negative way, or both? In his article “The Waking Dream,” Kevin Kelly, Editor-at-Large of Wired magazine, writes, “We already know that our use of technology changes how our brains work. Reading and writing are cognitive tools that change the way in which the brain processes information. When psychologists use neuroimaging technology such as MRI to compare the brains of literates and illiterates working on a task, they find many differences --- and not just when the subjects are reading. … If alphabetic literacy can change how we think, imagine how Internet literacy and ten hours a day in front of one kind of screen or another is changing our brains.” Kelly goes on to make the case that, overall, the Internet is a good thing --- in some cases it may be a terrible waste of time, but, like dreams, perhaps it is a “productive waste of time”!

Regarding texting, Listfield quotes Dr. Sherry Turkle, director of MIT’s Initiative on Technology and Self and author of “Alone Together,” as writing, “Kids have told me that they almost don’t know what they are feeling until they put in a text.” Listfield uses this example as a cost of technology, but I can envision it as a benefit. Let me explain:

When I was working as a Professor of Computer Science, much of my research was on developing teaching strategies to facilitate the development of problem-solving skills in students, particularly in freshmen, as that would have the maximum benefit for the student over his or her college stay. I decided to use a combination of computer programming (in a simplified language called Logo) and writing exercises as my pedagogy. The main idea was to combine the discipline of writing with the discipline of computer programming, which uses both a priori and a posteriori reasoning to solve problems. As explained by Wikipedia (http://en.wikipedia.org/wiki/A_priori_and_a_posteriori), “a priori knowledge” is known independently of experience (conceptual knowledge), while “a posteriori knowledge” is proven through experience.
In a nutshell, a priori reasoning is “armchair reasoning” (you don’t have to get up and go door-to-door to ascertain the number of married bachelors in the greater metropolitan Plattsburgh area), while a posteriori reasoning is experimental reasoning, the basis of all science: you can’t just solve the problem in your head.
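The difference between the two kinds of reasoning can even be demonstrated in a short program. The sketch below is my own illustrative example (in Python rather than Logo, and not an exercise from my actual course): it answers the question “what is the chance that two dice sum to seven?” first a priori, by counting cases from the armchair, and then a posteriori, by rolling simulated dice and observing.

```python
import random

def a_priori_probability():
    # Armchair reasoning: enumerate all 36 equally likely outcomes
    # and count the ones that sum to seven. No experiment required.
    favorable = sum(1 for d1 in range(1, 7)
                      for d2 in range(1, 7) if d1 + d2 == 7)
    return favorable / 36

def a_posteriori_probability(trials=100_000):
    # Experimental reasoning: actually "roll" the dice many times
    # and observe how often a seven comes up.
    sevens = sum(1 for _ in range(trials)
                 if random.randint(1, 6) + random.randint(1, 6) == 7)
    return sevens / trials

print(a_priori_probability())      # exactly 1/6
print(a_posteriori_probability())  # close to 1/6, varying from run to run
```

The first function settles the question by pure logic; the second converges on the same answer only through repeated experience, which is the basis of all experimental science.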

But just as important as logical, scientific thinking for problem solvers is writing. Writing is arguably the best method for clarifying our thoughts. When I have a hard problem that I am struggling with, I find that I can begin to make progress by sitting down and “writing myself a letter,” describing first how I feel (usually frustrated), then what the problem seems to be, and finally offering myself suggestions for its solution. It’s a wonderful pump-priming process. By first slowing down enough to really pay attention and examine the problem, I gain clarity. And as I gain clarity, I also gain confidence, perhaps the single most important trait of good problem solvers. When I look down the hill on my skis and say to myself, “Oh no, I’m going to fall,” I most certainly will. But if I say, “This should be fun,” I have a good chance of making it to the bottom in an upright position.

This is why I think that texting can be a positive outcome of technology. So long as the kids are texting (and not while they’re driving), they are writing, and writing clarifies thinking. And (in addition to “Love, sweet Love”) clear thinking is what the world needs right now.

Sunday, October 9, 2011

October 9, 2011 The Constitution and Technology

In (belated) honor of Constitution Day I would like to examine how advances in technology have affected the interpretation of the Constitution, in terms of court cases that have made it to the Supreme Court. I will be using two examples cited by Laurence Tribe, Tyler Professor of Constitutional Law at Harvard Law School, in his 1991 paper, “The Constitution in Cyberspace: Law and Liberty Beyond the Electronic Frontier” (http://epic.org/free_speech/tribe.html). Although the paper is 20 years old and some of his examples of technology will seem dated, his reasoning and insights are not.
Tribe uses two Supreme Court cases, which center on the Fourth and Sixth Amendments, to make the case that we are inconsistent in our consideration of technological effects on our values (freedom, truth, justice, etc.).
The Fourth Amendment states: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”
The Sixth Amendment contains the  “Confrontation Clause” which states:  “In all criminal prosecutions, the accused shall enjoy the right … to be confronted with the witnesses against him”.

Tribe discusses Maryland v. Craig, where the Supreme Court upheld the power of the state to try an alleged child abuser with the accuser appearing not in the courtroom but by means of one-way closed-circuit TV, to spare the child trauma. The decision to allow a new technology unknown to the framers of the Constitution was based on a cost-benefit analysis taking into account the three stakeholders: the accuser, the defendant and society at large. But what is the intent of the Confrontation Clause? Is it to make the accuser more likely to tell the truth when confronted face-to-face with the accused? Or is it for identification purposes only? Tribe agrees with the dissenting Justices that the accused’s rights were violated and that the introduction of new information technology had the effect of withholding the protections of the Bill of Rights.

The Fourth Amendment case Tribe presents is Olmstead v. United States. Without a warrant, federal agents wiretapped Roy Olmstead’s phone to gather evidence against him for bootlegging during Prohibition. Although wiretapping was illegal under state (Washington) law, the evidence was allowed and Olmstead convicted. The Supreme Court ruled that no “search and seizure” occurred because the Fourth Amendment “itself shows that search is to be of material things – the person, the house, his papers or his effects” and thus “there was no searching” when a suspect’s phone was tapped, because the Constitution’s language “cannot be extended and expanded to include telephone wires reaching to the whole world from the defendant’s house or office.” Justice Brandeis, in a dissenting opinion, argued that the Fourth Amendment should extend to electronic communications in its protection. He pointed out that when a phone is tapped, the control of personal information at both ends of the connection is compromised --- not just the suspect’s privacy. In this fashion, “the tapping of one man’s telephone line involves the tapping of the telephone of every other person whom he may call, or who may call him.”
In this case, Tribe once again agrees with the dissenters; however, Olmstead was overturned in 1967 (Katz v. United States), and it was Tribe who, as a law clerk to Justice Potter Stewart, helped to write the majority opinion, which included the famous phrase, “The Fourth Amendment protects people, not places.” But this privacy issue continues to bubble and befuddle: this session, the Supreme Court will review whether the government, without a court warrant, may track suspects’ movements by hiding a GPS device on their vehicles.
Tribe sums up, “… “Olmstead” mindlessly read a new technology out of the Constitution, while “Craig” absent-mindedly read a new technology into the Constitution. But both decisions had the structural effect of withholding the protections of the Bill of Rights from threats made possible by new information technologies.”

In other words, Technology is a double-edged sword and we must be mindful of its use.
Can’t argue with that.

Sunday, September 11, 2011

Sept 11, 2011 Plus and Minus

On this somber day, let me add my voice to those who have vowed to “never forget” the atrocities that occurred ten years ago and to work toward a better, more peaceful world. I was teaching a computer lab on 9/11/2001 and, as usual, not every student was being studious and following the worksheet instructions. In fact, several were on the Internet and following the news. That’s how I first learned, at 9:15 AM, that an airplane had crashed into one of the World Trade towers, setting it afire. As the drama unfolded, the second tower was hit and it was apparent we were under attack. The students were agitated (especially a young man from Egypt) and I wrestled with the decision to dismiss class or to continue it. I finally decided to continue because there was nothing we could do about the situation just then, and I thought the lab exercises would help take the students’ minds off the tragedy at least for a little while. After class we could all share our feelings with our friends and begin the slow process of grieving, acceptance and resolve to ensure that this would never happen again.

That said, I want to explore the question: Does computer technology bring us together or does it isolate us? In the short story, “The Machine Stops” by E.M. Forster, written about 100 years ago, all humanity lives underground (presumably because of some environmental disaster) in separate hive-like cells. All of their needs are attended to by the “Machine” --- what we’d currently call a Computer. Everyone has a “plate” in their room through which they can communicate with everyone else on the planet --- today we’d call that the Internet or the Web.  Forster’s cautionary tale is a dire warning against becoming so isolated from each other and so dependent on a Machine so complex that no single person fully understands how it works or how to fix it if it breaks. Hello? Does any of this sound familiar? As you might guess, the outcome of this story is not pleasant but it does end on the hopeful note that we might be able to start over and recover our humanity.

On a related note, a recent Speakout contributor has pointed out that while technology has certainly made many of our tasks easier, it is also responsible for displacing jobs formerly held by human beings. This raises an interesting question: does the automation made possible by technology decrease the number of human jobs or does it actually increase it? Does it destroy more jobs than it creates?

Certainly, one could argue that the quality and the pay of the jobs that automation creates are usually better than those of the jobs it replaces. A software engineer who uses computer technology to design automobiles has a much more creative and higher-paying job than an assembly-line worker (who is also being replaced by technology). However, it is not generally possible to retrain a worker on the line to become a software engineer. And the situation is more muddled than that: even the software engineer relies on automated tools to make her or his job easier and more effective, and no humans are displaced in the process. To make matters muddier still, there is no definitive research data indicating whether automation, in the long run, creates more jobs than it destroys. There is, however, some research which seems to confirm the hypothesis that automation is more beneficial to skilled workers with more education and/or experience than to unskilled ones. This means that those least likely to benefit from automation are those who probably have the greatest needs. Not such a bad situation if you believe in Ayn Rand’s economic philosophy, but a terrible state if you tend more toward Karl Marx. In any case, since computers are much more effective than humans at performing repetitive and often boring jobs, the writing seems to be on the wall.

It is also clear that this is a complex problem, so it is well to remember the words of that great journalist/philosopher Henry Louis Mencken: “For every complex problem there is an answer that is clear, simple, and wrong.”

Sunday, August 14, 2011

August 14, 2011 The Purpose of Computing

Early in my career as a programmer, I was working on a project that required using some mathematical techniques from the text “Numerical Methods for Scientists and Engineers” by R. W. Hamming. I have forgotten which methods I actually used, but I remember vividly his inscription: “The purpose of computing is insight, not numbers.” That was a cautionary aphorism, because all of the software being developed at that time produced pounds and pounds of paper filled with columns of numbers, and it was easy to lose sight of the real purpose: to gain insight into the solution of some problem.

Today, it seems that picture has radically changed. Computer systems are no longer behemoths filling whole rooms, with dedicated cooling systems and a staff of operators and programmers. They have evolved into much, much faster, smaller machines that can be used by everyone from pre-teens to seniors. In a sense, computers have been democratized and socialized --- their power now flows directly into the hands of the people. This is a Good Thing, right? Maybe yes, maybe no. First we must ask ourselves what purpose computers fulfill in our society. Is Hamming’s insight still valid? Well, modern computers certainly make oodles of information available to us via the Internet, and one could reasonably argue that we can use this information to gain insights into problems ranging from medical advice to how to get into and out of kayaks. On the other hand, we can also access informational distractions (e.g., gossip, pornography, etc.) whose only purpose is to provide us with a diversion from boredom.

For example, I recently attended a wedding and the group at our table was swapping ideas for summer reading. I had recently heard a radio discussion of a book that I had not yet read but that sounded intriguing --- it was about a future where almost all forms of cancer had been cured, which, on its face, sounds like another Good Thing, right? Wrong! The novel elaborates some of the unintended consequences of this event, which include destructive bands of jobless young men harassing the new cohort of older people who are not retiring to make room for the next generation, because cancer has been cured and most everyone is living longer. Unfortunately, I could not remember the title or the author of the book. However, I did remember that the author was also a movie actor and producer who reminded me of the comedian Al Franken, that he had appeared in a movie with Holly Hunter with the title “Network News” or something like that, and that he usually played the role of a loser. While I was sharing these rambling thoughts my niece was using Google on her iPhone and had already found the author and the title of the book: “2030,” by Albert Brooks.

My question is: what insight had been gained from this process? It did scratch the itch of curiosity and it did provide the practical information needed to access the book in question, but what did we learn and how did we change for the better? Perhaps this is asking too much from a mere machine, but surely we expect more from ourselves. Does our computer technology actually promote a sort of mental laziness? And if it does, should we update Hamming’s aphorism to “The purpose of computing is titillation, not insight”?

I think not. According to Jaron Lanier, author of “You Are Not a Gadget,” the purpose of digital technology is to enrich human interaction. In one sense, the human interaction at our wedding table was enhanced by the digital technology of Google and the iPhone via the Internet --- but only in a shallow way.

If the computer is to become a partner in our evolution, it will have to enhance our existence beyond petty pandering to our desire to alleviate ennui. It will have to provide us with more than arousing diversions. To mash up Hamming and Lanier: “The purpose of any technology is to enhance our humanity, not degrade it.”

Wednesday, July 6, 2011

July 10, 2011 The Google Books Project

In the February 5, 2007 issue of The New Yorker, Jeffrey Toobin wrote an article, “Google’s Moon Shot,” which describes Google’s travails with its Google Books Project. I had run across a description of this project around 2003 and, at the time, I thought it was the coolest thing since painless dentistry and the invention of sunglasses. However, after reading Toobin’s article, I was made to realize that this was a more complicated situation than I had previously thought. Almost all of the issues center around copyright law and the relatively new concept of “intellectual property.” The following is what I wrote about it at that time.

The Google Books Project is an attempt to recreate the Library of Alexandria, but with a major improvement. Instead of millions of books standing alone on the shelves, accessible only through primitive filing systems, Google scans and digitizes every book into computer-accessible format. Then the books in the library can be linked together in the same way Google can search and link web sites. Imagine being able to read a book with clickable links --- you’d run the risk of never finishing it! But it would allow you to create your personal digital library based on themes like heirloom apple pie recipes or pre-Byzantine sex perversions (just joking) instead of by individual volumes as books are currently constructed. Whether Google can pull off this ambitious project remains to be seen.

There are also some interesting copyright problems that arise with the authors and the publishers of books, some of whom are contributing to the project while at the same time, they are pursuing copyright infringement lawsuits.  This strange situation arises from an ancient desire: here’s a hint --- what is the root of all evil?  Right, the greed for money.  It’s all about ownership rights and who gets to profit by how much.  Here’s how it works:
Google is busily scanning tens of thousands of books into its database (i.e., its Library) each week from several major libraries (Stanford, Harvard and Oxford, as well as the New York Public Library). What do these libraries have to gain from this? Google has contracts with its libraries stipulating that each library gets its entire collection digitized at no cost, plus a free electronic copy of each book. Google also has contracts with nearly every major American publisher which stipulate that when one of these publishers’ books is called up in response to a user’s search query, Google displays a portion of text --- snippets of each chapter, usually --- and provides links to the publisher’s website and to sites like Amazon where the user can buy the book.

The formal issue before the court is that the authors and publishers claim that the scanning part of the project violates copyright law, and, since Google will make money from its ads, they want to make sure that they get a good share of this potential revenue pool. In other words, all of the stakeholders in this drama are fighting for their piece of the pie while technology is blithely shredding the established relationships between artists and their distribution systems.

After five years of litigation, in October 2008, Google finally reached a settlement with the authors and the publishers.
“Under the agreement, Google offered to pay $125 million and create the framework for a new system that would channel payments from book sales, advertising revenue and other fees to authors and publishers, with Google collecting a cut.  But in March 2011, a federal judge in New York rejected the company's $125 million class-action settlement with authors and publishers, saying the deal went too far in granting Google rights to exploit books without permission from copyright owners.”
As of  July 3, “A hearing on the Google Books settlement has been postponed to July with no indication that the parties have been able to overcome the thorny issue that led a judge to strike down the original settlement. “
This is indeed a thorny issue and it will be interesting to follow the drama of “Whither Google Books?” as it unfolds.

Sunday, June 12, 2011

June 12, 2011 Computer Models

Computer Models

In this column I want to discuss computer models, especially those that model Climate Change, but first let’s understand just what we mean by a model; then we can define a computer model. By a model we mean any representation of a real thing, not the thing itself. We can have physical models, like model ships or airplanes, or we can have abstract models of phenomena like population growth, how the brain works, and weather systems. This raises a new question: what is an “abstract” model?

If I were to tell you that the next paragraph is going to be very “abstract” --- what is your first gut reaction?  Anxiety? Pain?  Despair?

Well, those are perfectly normal reactions, but only because the term “abstract” has gotten a bum rap. It has the connotation of something difficult and complex, requiring a lot of thought, but in fact it is just the opposite: an abstraction is a simplification. For example, the graphic shows an abstraction of an evergreen tree. Notice that all of its features except for its shape and color have been abstracted (extracted) away, so that we can concentrate on just those two properties of all evergreens. In reality, evergreen trees are much more complicated than our abstraction --- to name just a few properties we have abstracted away, we could list a tree’s smell, height and girth, not to mention details like twigs, bark and needles.

So, of what use are abstractions? Since they simplify an otherwise complex system, they are easier to understand, and it is easier to make predictions about the behavior of the simplified system. The crucial part of building an abstract model is deciding which properties of the system to include and which to ignore. For example, in a weather forecast model we know that the temperature, pressure and wind velocity at every point in the forecast volume are essential --- and the more points we have for this data, the better our forecast will be. But how about the phase of the moon? Should we include that in the model? Probably not, but we would certainly include it in a model of tidal flow. So it is very important to identify which properties to include and which to exclude in an abstract model. Finally, an abstract model usually uses an abstract language like mathematics to represent the system being modeled.
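To make this concrete, here is a sketch in Python (the class and property names are my own invention, purely for illustration) of one point in a weather forecast grid: only the properties we decided are essential are kept, and everything else about the real atmosphere, from twigs and bark to the phase of the moon, is deliberately ignored.

```python
from dataclasses import dataclass

@dataclass
class GridPoint:
    """An abstraction of the atmosphere at one location in the
    forecast volume: only the essential properties are kept."""
    temperature_c: float  # temperature, degrees Celsius
    pressure_hpa: float   # air pressure, hectopascals
    wind_east: float      # wind velocity components, meters/second
    wind_north: float

# One point of the forecast volume; a full model holds millions of these,
# and the more points, the better the forecast.
point = GridPoint(temperature_c=-5.0, pressure_hpa=1013.2,
                  wind_east=3.1, wind_north=-1.4)
print(point)
```

Deciding what goes into this class, and what is left out, is exactly the crucial modeling decision described above.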

Now that we know what an abstract model is (I hope), we may define a computer model as a particular kind of abstract model that uses an algorithm to guide a computer toward the solution of a problem. If the problem we are solving is weather prediction, and the algorithm (a well-defined, step-by-step process --- like a recipe for apple pie) can be translated into a language that the computer can understand and execute, then running it is called simulation. But rather than an apple pie, we get a weather prediction.
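As a toy illustration (my own, and far simpler than any real forecast model): the algorithm below is Newton’s law of cooling written as a step-by-step recipe, and running it on a computer is a simulation that predicts the temperature of a cooling object.

```python
def simulate_cooling(temp_start, temp_ambient, k, dt, steps):
    """Simulate Newton's law of cooling: at each time step the
    temperature moves toward the ambient temperature in proportion
    to the current difference (the step-by-step recipe)."""
    temp = temp_start
    history = [temp]
    for _ in range(steps):
        temp += -k * (temp - temp_ambient) * dt  # one step of the recipe
        history.append(temp)
    return history

# A 90-degree cup of coffee left in a 20-degree room for 30 minutes:
history = simulate_cooling(temp_start=90.0, temp_ambient=20.0,
                           k=0.1, dt=1.0, steps=30)
print(round(history[-1], 1))  # prints 23.0 -- nearly room temperature
```

The same pattern, scaled up to millions of grid points and much more elaborate physics, is what a weather or climate simulation does.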

This is why a computer Climate model is so difficult to construct. First we have to identify the germane variables and leave out the irrelevant ones. Even now, scientists are not completely sure of the effects of cloud cover on the weather, let alone the phase of the moon. Then we have to develop the mathematical relationships between the salient variables. Then we need to convert the math to a computer language and then we can run or execute the model on an actual computer. Finally we need to test the model: is it valid and how accurate are its predictions?

It’s fairly straightforward to test the predictive power of models of small systems like automobiles. We can compare the results of actual crash tests with the damage predictions of the model and evaluate it on that basis. Once we are convinced the model works, at some point we can eliminate the actual crash tests, since running the model is faster and cheaper and allows us to try design changes more quickly and easily. Unfortunately, we cannot set up controlled lab experiments on a system as large as Climate Change. But there is a way.

One way scientists test their predictive (e.g., Climate Change) models is to use past data to predict the present situation. If you have climate data going back, say, 100 years, you can plug it into the model and see how well it predicts the climate now. If the results look good, this increases your confidence in the model’s prediction for the next 10 to 20 years. Like Science itself, Climate Change models are evolving and making better, more accurate predictions.
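Here is a sketch of that idea with made-up data and a deliberately simple trend model (real climate models are vastly more sophisticated; this is purely for illustration): we fit the model using only the older half of the record, then check its prediction against the years we held back.

```python
def fit_linear_trend(years, values):
    """Fit a least-squares straight line through (year, value) pairs
    and return a function that predicts the value for any year."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
             / sum((x - mean_x) ** 2 for x in years))
    intercept = mean_y - slope * mean_x
    return lambda year: slope * year + intercept

# Made-up temperature anomalies for 1911-2010, warming 0.01 degrees/year.
years = list(range(1911, 2011))
anomalies = [0.01 * (y - 1911) for y in years]

# Hindcast: fit on 1911-1960 only, then predict 2010 and compare it
# with the observation we held back.
model = fit_linear_trend(years[:50], anomalies[:50])
predicted, actual = model(2010), anomalies[-1]
print(abs(predicted - actual) < 0.01)  # prints True: the model checks out
```

If the held-back years are predicted well, our confidence in the model’s forecasts for the years ahead goes up accordingly.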

The final way we test our models is in the marketplace of ideas. If the overwhelming majority of the expert  scientific community are convinced of the validity of a particular model,  then it is likely our decision makers will also be persuaded. But this is not always the case; sometimes politics trumps logic. But just as we all  hope that Good will eventually triumph over Evil, I continue to hope that reason will rise above politics and that the good work of scientists worldwide will preserve our planet and our species. As the TV pundits say, “Only time will tell.” 

Let’s hope that as we create our future environment we are guided by reason and science rather than those with a personal economic or political agenda.

Saturday, May 7, 2011

May 8, 2011 Privacy is Golden

iPhone Tracks User

Curses, foiled again.  

Once again, this is not the column I had planned to write for today, due to fast-breaking current events that raise more timely and thus more interesting issues. Before I am castigated by AARP and a cohort of irate senior citizens (of whom I consider myself one), let me hasten to add that I don’t think that newer is necessarily better than older, except with regard to news: newer news is better --- hence the terminology, “news”. Also, before I begin, I would like to thank my former student, Kevin St. Germain, for sending an email reminding me that this was an important privacy issue worth discussing here.

I am referring to the latest Apple controversy, which began to appear in online forums about April 10 and exploded into mainstream media on or about April 21 --- a “scandal” to some, a “brouhaha” to others. The issue can be succinctly summed up by David Pogue’s headline in his New York Times column: “Your iPhone Is Tracking You. So What?” (http://pogue.blogs.nytimes.com/?nl=technology&emc=ctb1). Pogue cleverly states the issue and his solution in seven words. The details take a bit longer: if you own a device that uses iOS, the operating system in the iPhone and in iPads (with a cellular option), then, when the device is on (and even when your settings indicate otherwise), your latitude and longitude are being recorded via the closest wi-fi hot spots and cell phone towers as you travel about. The argument of those who did not agree with Pogue’s interpretation was that Apple was storing your locations unencrypted, copying them, unencrypted, to your computer and to iTunes (Apple’s Internet store and content manager for iOS devices) without your explicit permission and, to top it all off, not saying what it was doing (or going to do) with that information.
Some business publications have a strong theory about what they are going to do with it. They say that both Apple and Google use their smartphones to transmit users’ locations back to their servers as part of their competition to gain control of the market in location-based services, projected to reach 8 billion dollars by 2014 (see: http://keepamericafree.com/?p=395) --- services such as targeted advertising, vehicle tracking and even dating (see Wikipedia for a full range of examples that includes health, entertainment, work and personal-life applications).
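To see how little effort it takes to read an unencrypted location log, consider the sketch below. It builds an in-memory stand-in for the location file at the center of the controversy; the table and column names here are my assumptions based on press coverage, not Apple documentation, and the coordinates are made-up sample points.

```python
import sqlite3

# Build an in-memory stand-in for the reported location database.
# Table and column names are assumptions, not Apple's actual schema.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE CellLocation (
    Timestamp REAL,   -- seconds since some epoch
    Latitude  REAL,
    Longitude REAL
)""")
db.executemany(
    "INSERT INTO CellLocation VALUES (?, ?, ?)",
    [
        (324000000.0, 44.6995, -73.4529),  # sample point near Plattsburgh, NY
        (324003600.0, 44.6928, -73.4563),  # an hour later, a short distance away
    ],
)

# Anyone holding the unencrypted file could replay a user's movements:
for ts, lat, lon in db.execute(
    "SELECT Timestamp, Latitude, Longitude FROM CellLocation ORDER BY Timestamp"
):
    print(f"{ts:.0f}: ({lat:.4f}, {lon:.4f})")
```

The point is not the code but its brevity: because the file was stored unencrypted, no special tools beyond an ordinary database library were needed to read it.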

After about a week of nonresponse (some might say stonewalling), Apple conceded that mistakes were made but that there was no nefarious intent to infringe on users’ privacy. Further, it pledged to ensure the anonymity of the location data by encrypting it on users’ computers and cell phones, limiting its storage to one week rather than one year, and fixing the bug in the Location Settings so that location transmission can actually be turned off. In a final effort to soothe users, Apple issued the statement, “Users are confused, partly because creators of this new technology (including Apple) have not provided enough education about these issues to date.” Way to go, Apple: this method of blaming the victims and insulting their intelligence has been used by generations of ineffectual teachers. Even if Apple fulfills its promises, police departments can use software that neatly dumps the contents of your cellphone memory for future investigation (see: http://www.theregister.co.uk/2011/04/21/police_cellphone_searches/) --- so not only don’t drink and drive, don’t carry your cell with you…

In any case, Pogue argues that the location data is too crude to actually place you with any accuracy and further opines, “Now, I’ve been in this job long enough to know that there’s a privacy-paranoia gene. Some people have it, some don’t. I don’t. I have nothing to hide. Who cares if anyone knows where I’ve been?”
This is a common argument against privacy rights: if you haven’t been breaking the law or doing anything bad, then what have you to fear? A (possibly paranoid) privacy advocate might respond, “That’s not the issue --- privacy is the right to be left alone, and being watched changes one’s behavior (try entering “panopticon” into your favorite search engine). Furthermore, you are confusing privacy with secrecy: it’s no secret what I do when I go to the bathroom, but I still would like my privacy while I’m doing it.” How would you like your iPhone to act as a surveillance device while en toilette? It’s an interesting ethical question, and lots (and lots) more counterarguments to Pogue’s opinion can be found by typing “if you’ve done nothing wrong” into the Google search engine.

But the core problem may not be an ethical one --- it may simply be bad business policy. In the Terms and Conditions of Apple’s end-user agreement is the seemingly innocuous fragment, “… we may share geographic location with application providers when you opt in to their location services.”

Just what is meant by the term “opt in”? Privacy advocates prefer an opt-in policy because it requires the organization (e.g. Apple) to ask your permission up front to use any data you provide to it, where “use” includes sharing your data with other organizations. The organizations themselves prefer an “opt-out” policy (Apple’s current policy), which means that it is your responsibility to explicitly forbid the organization from using or sharing your data (e.g. your location file on the iPhone). In other words, under an opt-in policy doing nothing automatically opts you out, while under an opt-out policy doing nothing automatically opts you in. Which would you prefer?
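The difference is easy to state in code. The sketch below is purely illustrative --- the class and field names are mine, not Apple’s --- but it shows that the entire policy question comes down to a single default value.

```python
from dataclasses import dataclass

@dataclass
class OptInAccount:
    # Opt-in: sharing is off unless the user explicitly turns it on.
    share_location: bool = False

@dataclass
class OptOutAccount:
    # Opt-out: sharing is on unless the user explicitly turns it off.
    share_location: bool = True

# A user who never touches the setting gets very different outcomes:
print(OptInAccount().share_location)   # doing nothing keeps you private
print(OptOutAccount().share_location)  # doing nothing shares your data
```

Under either policy the user *can* end up in either state; the argument is entirely about what happens to the (large) majority who never change the default.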

Perhaps if Apple had adopted the opt in policy in its own User Agreement, this problem would never have arisen. 


Some last minute thoughts:
Is society moving towards a scenario I read about over 20 years ago, whereby all citizens would be required by law to wear video-camera helmets at all times?
The idea was that this would prevent almost all crime because everyone would be filming everyone else all of the time, with the data streaming into a colossally gigantic database. Not only would everyone know where everyone else was located, but they could also see what they were doing while they were there! Thus, if a crime were committed, the evidence would be accessible in this database and the criminal(s) could be quickly apprehended and isolated from the rest of society. Further, if anyone removed their helmet, a warning signal would be sent to the database/law-enforcement administrators, so that person would immediately become a criminal subject to arrest or death. Whether the author was satirical or serious I don’t know, as I’ve lost the reference to the article, but here is a good example where societal security completely trumps personal freedom and makes a complete mockery of the very concept of privacy.
We may be approaching that society with the advent of the look-see (spelled Looxcie), which looks like a Bluetooth device hanging on your ear: http://technabob.com/blog/2010/09/15/looxcie-wearable-ca  In this case, the device is not mandated by the government but, according to the ad copy, is targeted toward “voyeurs and narcissists” --- so here the outcome is reversed; i.e., this is personal freedom completely trumping other people’s privacy.

Sunday, April 10, 2011

April 10, 2011 Information Please...

Let me be honest. This is not the column I had planned to write for today. I had been doing research on computer simulation and especially climate change models. But then on Saturday March 19, two things changed. First I read Lois Clermont’s editorial, “Internet complicates decisions” and then I read the New York Times’ review by Geoffrey Nunberg  of  “The Information” by James Gleick --- which led me to related links which led me to…this column.

So, in the interest of keeping up with fast-breaking current events (due to the Internet and one of my favorite newspapers), I have decided to change course, but only a bit. Look for the computer modeling and simulation article next time in a PR near you.

As you may recall, Lois was pointing out the double-edged nature of the Internet version of the newspaper. Because of its speed and omnipresence, the news can be constantly and almost continuously updated and this can magnify misunderstandings as well as clarify and deliver timely information.

Of the four categories that I proposed in my first column:

Personal Freedom vs Societal Security
Intellectual Property Rights vs Freedom of Expression
Dehumanization/ReHumanization and Loss of Autonomy
Artificial Intelligence and the Limits of Technology

I would have to place this issue into the third: Dehumanization/ReHumanization and Loss of Autonomy. While there is a certain loss of control when we rely on the Internet, does it also dehumanize our interactions, or is it a more benign rehumanization? After all, we humans have a long history of being reshaped by our technology --- we create and shape our technology and it returns the favor. One only has to consider society before the advent of the automobile, air-conditioning, and television to see their rehumanization effects. Even Time itself has been reshaped by the technology of the railroads, which demanded a global time structure so that “the trains would run on time”.

But it is not just Time that the Internet has warped; it is also Space. In his new book, James Gleick has chosen the startling title “The Information,” not merely “Information,” to stress the universal, ubiquitous nature of the stuff. We know the Internet inundates us with this stuff. Some wag offered as a metaphor for the Internet a gigantic library full of information --- but instead of being organized into books on shelves with indexes to reach the right book, it is as if the indexes had been destroyed and every page ripped from every book, left lying in a huge hodge-podge heap of paper in the main lobby. Of course, our library and computer scientists are working hard to rectify this problem, but the size of the problem seems to be growing faster than they can address it; in fact, as soon as I post this column on my Tec-Soc blog and it appears on the PR blog, the size of the Internet will have increased even further. And so I am in the paradoxical position of enlarging (ever so slightly) the problem that I am describing. But I do have one final point I wish to make.

How the Internet has reshaped our notions of time and space may be small potatoes compared to the practical issues raised by Neil Postman in his paper “Informing Ourselves to Death,” written way back in 1990. He makes a passionate case against the information glut made possible by computer technology:

“If you and your spouse are unhappy together, and end your marriage in divorce, will it happen because of a lack of information? If your children misbehave and bring shame to your family, does it happen because of a lack of information? If someone in your family has a mental breakdown, will it happen because of a lack of information?”

He does ameliorate this strong stance with a more measured assessment:

“Anyone who has studied the history of technology knows that technological change is always a Faustian bargain: Technology giveth and technology taketh away, and not always in equal measure. A new technology sometimes creates more than it destroys. Sometimes, it destroys more than it creates. But it is never one-sided.”

That said, I can end my contribution to the current size of the Internet.

Sunday, March 13, 2011

March 13, 2011 In Technology We Trust

When I was a young man I worked as a consultant for the US Navy. One of my first programming assignments was to produce a printed report of some shipboard equipment in order of its cost-effectiveness for “the Admiral.” “OK, but I’ll need the algorithm for computing the cost-effectiveness first,” I responded. “Oh, don’t worry about that --- we’ll give you those figures before you write the program.” “So,” I replied, “you want to use the computer as a printing press --- why not just have our secretary type up the list? It’d be a lot cheaper.” (Computer time was 360 dollars per hour back in the mid-sixties.) “No, no, no,” was the response. “This report will have much, much more credibility if it comes off the computer.”

Unfortunately, this attitude has not changed much in 50 years. There is an integrity and a legitimacy about the printed page that defies reason, especially if it’s printed by an esoteric technology like a computer --- it’s as if our secondary national motto has become “In Technology We Trust.” The computer in particular seems to carry an authority on a par with judges, ministers, scientists and government officials. I was reminded of this interesting flaw in human nature by the Nov 19 PR headline, “Government insists full-body scanners are safe.” The gist of the article is that some of us worry that the X-rays used by the full-body scanners at airports are not without risk, and airline pilots in particular are concerned about the cancer risks. In this instance the technology in question is an X-ray device and not a computing machine, but the same sort of naivety is at work here --- we’re supposed to trust the pronouncements of the appropriate authorities that these machines are safe. However, the article goes on to quote a physics professor as saying, “The thing that worries me the most is what happens if the thing fails in some way and emits too much radiation.” And large doses of radiation can cause cancer. What are the odds of this happening? It’s already happened.

In the early 1980s, several hospitals used a radiation machine called the Therac-25 to provide therapy for cancer patients. Instead of helping them, it caused several deaths and burn injuries due to a software bug. There is not enough space here to describe the scenario, who the stakeholders were and who was blameworthy, but the following three links will be helpful: the first describes the Therac briefly in a list of other software disasters (including a shockingly similar one 15 years later), the second is a more detailed description in Wikipedia, and the third is a link to a list of other links from a Google search.

So, I remain wary of “official” reports stressing the safety of any potentially dangerous technology, not only because of documented cases like the Therac-25 but from personal experience. As I mentioned above, at an earlier juncture in my career I worked for the Navy as a civilian. In a section of engineers and physicists I was the only mathematician, and so it fell to me to write programs to process the effects of the shock wave resulting from underwater explosions on critical shipboard systems. The most exciting part of the job was crawling around in the bilges of destroyers and installing the instrumentation that would gather the shock-wave data. Later I would digitize and process it with my programs.

Some of the ships carried nuclear material, and so we were all issued special badges to record the amount of radiation we were exposed to on each visit as a safety precaution. So far, so good. At that time, we were told that radiation exposure was not cumulative over long spans of time, so we got fresh badges each time we went to work. As we now know, it is cumulative, and maybe the radiation I accumulated made me smarter, stronger and more handsome, but I doubt it. The take-away is that the government is not evil, but it can be ignorant.

So, while it is true that we can never know the long-term effects of any technology with perfect certainty, we can still demand the best efforts from our scientists and technologists to investigate potential safety issues before they unleash it on the public. Have we made any progress in the last 50 years? Some, perhaps, but as one of my colleagues used to say when we were working together way back then, “If your car acted like your computer does, you’d sell the *%#!! thing in a minute!” As our French friends say, “Plus ça change, plus c'est la même chose.”

Sunday, February 13, 2011

Feb 13, 2011 Security vs Privacy

In this column we’ll take a look at the BlackBerry mobile phone (manufactured by Research In Motion, or RIM) controversy in Saudi Arabia, the United Arab Emirates (UAE), and India that was roiling several months ago. (This column appears monthly, and although this dispute may appear to be “old news,” the issues raised are not.) Briefly, this was the situation:

Starting in late July, 2010 headlines like these began to appear on the news service feeds:

July 29:  India threatens to ban BlackBerry services

Aug. 1: UAE announces ban of BlackBerry services starting October 11, 2010

Aug. 6:  Secretary of State Hillary Clinton says BlackBerry ban violates “right of free use”

Prior to that, you may have missed these two headlines:
November, 2007: RIM provides its encryption keys to Russia’s Mobile TeleSystems
January, 2008: RIM China announces sales go through after making sure phones were no threat to China’s communications networks

In the above two cases, RIM claims that it was only adhering to the laws of the country in which it was doing business and that it “respects both the regulatory requirements of government and the security and privacy needs of corporations and consumers.”

So, what was the problem that caused several countries whose total populations top 2.5 billion to rip into RIM and threaten to ban its popular BlackBerry cellphone? This one falls neatly into the Personal Freedom (in this case, Personal Privacy) vs Societal Security issue. In a nutshell, India and the UAE want the same favors previously granted by RIM to Russia and China --- the encryption keys --- so they can crack coded messages sent from the BlackBerry. All, of course, in the interests of National Security. Insiders believe that RIM has reached an accommodation with the Indian government that includes access to most BlackBerry communications except for the “Enterprise” option, where the decryption keys are controlled by the individual subscriber companies --- and in that case the government would negotiate with the subscriber companies directly.
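To see why handing over encryption keys matters, consider a toy symmetric cipher. The XOR sketch below is purely illustrative --- it is emphatically not BlackBerry’s actual encryption, which is far stronger --- but the principle is the same for any symmetric scheme: whoever holds the key can read the traffic, which is exactly why the keys are what governments ask for.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte against the repeating key; applying the same
    # function twice with the same key restores the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret"  # whoever holds this can decrypt everything
ciphertext = xor_cipher(b"meet at the usual place", key)

# Without the key the bytes are gibberish; with it, decryption is trivial.
print(ciphertext != b"meet at the usual place")          # scrambled
print(xor_cipher(ciphertext, key))                       # recovered
```

This also clarifies the “Enterprise” exception above: when the subscriber company, rather than RIM, generates and holds the key, RIM has nothing to hand over.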

Now, if you are mostly concerned with Societal Security, you would say that a sovereign nation has the right to spy on selected citizens (e.g. suspected terrorists) when it has evidence of a probable attack. On the other hand, if Privacy is your major concern, you would probably characterize the situation differently: the Fourth Amendment protects citizens against unreasonable searches, and this is nothing more or less than cellphone hacking by Big Brother.

One of the main reasons I bring up this issue is that it is, in the words of that great American philosopher Lawrence Peter Berra, “Deja-vu all over again.” Almost 20 years ago, here in the US, we experienced the same drama. At that time our government viewed strong encryption software in the same category as arms or weapons that could not be exported to suspect nations. While the software companies that produced this software must have been flattered by their imputed power, they did not much like the export restrictions and what they perceived as unfair competition from foreign competitors whose governments did not place the same restrictions on them. Even the authors of the encryption algorithms could not publish them or freely make them public. Things looked bleak for Privacy advocates as the US government made plans to require media companies like AT&T to include a “backdoor” in their encrypted communication devices so that the FBI and NSA could decrypt messages, much like they already did (and still do) with wiretaps. Even now, according to the Electronic Frontier Foundation:
“The FBI is on a charm offensive, seeking to ease its ability to spy on Americans by expanding the reach of the Communications Assistance for Law Enforcement Act (CALEA). Among other things, the government appears to be seriously discussing a new requirement that all communications systems be easily wiretappable by mandating "back doors" into any encryption systems.”
(Source: https://www.eff.org/deeplinks/2010/10/eight-epic-failures-regulating-cryptography)

Perhaps it’s not deja-vu but “what goes around comes around” that’s at work here. Or, as attributed to the poet Edna St. Vincent Millay, “Life is not one thing after another. It’s the same damn thing over and over!”


Personally, I tend to side with the pro-privacy proponents; the eight reasons given at the above link would be enough to convince me, but here is the clincher, an excerpt from a blog comment by Prasanto K. Roy, Chief Editor of the Dataquest Group magazines based in India, who writes (keeping in mind that email is a standard option on mobile phones):


“You're a Delhi-based wannabe terrorist needing to communicate with your handlers. What do you do?
Invisible-ink notes are passe, as are carrier pigeons. You will, of course, use electronic options.
Like email. Walk into a cyber cafe, log into a Gmail or Yahoo account. Don't use an account in your own name. And don't send email. Simply read instructions left for you in an unsent mail, saved as a draft in your account. And then, to reply, just edit the unsent email, and save it back as a draft. If email isn't traveling, it can't be intercepted.”


Pretty cool and pretty scary. Most security precautions can eventually be thwarted and some can even make things worse; as a Zen Master has said, “The best way to clear up muddy water is to leave it alone.”

Sunday, January 9, 2011

Jan 9, 2011 MetaIssues

Technology and Society

In the previous column I suggested the following four categories to help explore the relationships between Technology and Society:

Personal Freedom vs Societal Security
Intellectual Property Rights vs Freedom of Expression
Dehumanization/ReHumanization and Loss of Autonomy
Artificial Intelligence and the Limits of Technology

As I indicated then, these categories are somewhat oversimplified because they cannot capture all of the complex and changing relationships between computers and society, and so we would expect some problems that do not neatly fit into our four categories.
 Here is an example, in the form of a joke, that I received from a friend via email:
“A Minneapolis couple decided to go to Florida to thaw out during a particularly icy winter. They planned to stay at the same hotel where they spent their honeymoon 40 years earlier. Because of hectic schedules, it was difficult to coordinate their travel plans. So, the husband left Minnesota and flew to Florida, and his wife was going to fly down the next day. The husband checked into the hotel. There was a computer in his room, so he decided to send an email to his wife. However, he accidentally left out one letter in her email address and, without realizing his error, sent the email. Meanwhile, somewhere in Houston, a widow had just returned home from her husband's funeral. He was a minister who was called home to glory following a heart attack. The widow decided to check her email, expecting messages from relatives and friends. After reading the first message, she screamed and fainted. The widow's son rushed into the room, found his mother on the floor, and saw the computer screen, which read:

>To: My Loving Wife
> Subject: I've Arrived
I know you're surprised to hear from me. They have computers here now and you’re allowed to send Emails to your loved ones. I've just arrived and have been checked in. I see that everything has been prepared for your arrival tomorrow. Looking forward to seeing you then!  Hope your journey is as uneventful as mine was.
P.S. Sure is freaking hot down here.

Even though this joke is only a few years old, it feels older --- most modern email systems have shortcuts or address books that automatically complete our contacts’ addresses, so this situation would hardly arise. In fact, statistics gathered by Media Metrix show that email use is declining in favor of texting, twittering and other forms of chat within social networks --- especially in the 55+ age group. But the joke does raise an important meta-issue beyond the four proposed categories: has the advent of the Internet caused any real new problems that require new solutions, or are the problems not of a different kind but only of a different degree, and thus able to be handled quite nicely by existing ethics and laws? In the case of this joke, mistyping an address or misdialing a phone number is an old problem we had well before the Internet appeared. What makes this a problem (albeit a funny one) is the ease and speed of communication made possible by the Internet, and how it can magnify seemingly small errors.

Cyber-bullying is another concrete situation to consider when deciding how to handle ethical issues raised by technology. Is the problem soluble by existing moral and ethical rules, or do we need new ones to adapt to the changing times? All societies must deal with the bullying problem, cyber or not, if they are to be cohesive and fruitful --- so in that sense we can just apply the existing laws and rules. However, it is also true that technology has made bullying easier and more pervasive, so that must also be taken into consideration. In other words, is this meta-issue a difference in degree or a difference in kind? Perhaps it is both, but I tend to side with Thomas Jefferson, who wrote over 200 years ago, “New circumstances call for new words, new phrases, and transfer of old words to new objects.” Computer technology, which is the foundation of the Internet, certainly qualifies as “new circumstances” and, as such, has expanded and accelerated all of our communication networks. As a result, the Internet has both enriched and complicated our lives --- truly a double-edged sword. And there’s no turning back.
