Sunday, November 2, 2014

Distractions


Back in the days when I was teaching Computer Science (which, if we were honest, really should be called “computer studies” --- but that’s another column), the method I used to deal with students who were talking (usually in the last row) was this: I would pause, glance at the offenders, and say, “You know, I have this problem: I’m easily distracted. When that happens, I tend to lose my train of thought and it’s a bit of a struggle for me to get back on track. This usually means that I unconsciously make a negative association with the source of the distraction. Now, since this negative feeling is embedded in my unconscious, when it comes time to assign final grades for this course and I’m looking at a student whose performance is a toss-up between a B+ and an A-, and this student is associated with the distraction, then of course I’m going to be more likely to go with the B+ on a gut feeling, without ever realizing the role of my unconscious in this decision. I just thought I’d like everyone to understand my problem so you could all factor that into your behaviour in class. Now, where was I....”

This strategy worked very well and in fact I befriended a few students who appreciated not being called out in front of their classmates. I am telling you this because I have noticed of late much ado-doo on the topic of “distraction” flowing around the Internet. Examples range from the frivolous --- Hugh Grant said in an interview that the Internet has completely destroyed his attention span: “I can barely get to the end of a tweet without getting bored now.” (http://www.theguardian.com/film/video/2014/oct/09/hugh-grant-the-rewrite-video-interview)

to the scholarly --- a Pew Research report entitled “The Six Types of Twitter Conversations,” describing a taxonomy for classifying communication networks such as those that form on the Internet.

(http://www.pewresearch.org/fact-tank/2014/02/20/the-six-types-of-twitter-conversations)

Twitter, to my mind, symbolizes the quintessential distraction/addiction app. As I have mentioned in a previous column, when I asked my (then) 13-year-old granddaughter why she used Facebook but not Twitter, she said that she only used Facebook a few times a day, but she noticed that her peers were on Twitter almost continuously and she couldn’t afford that much time for what seemed to her a frivolous activity. Unfortunately, two years later I find that she has a Twitter account just like me. I can justify my account as that of an “astute observer of technology,” but my granddaughter has different reasons. She told me that she got a Twitter account because almost all of her friends have one and she found herself out of the “loop” regarding stories and info that her friends were sharing. She finds it easier to keep up with the backstory as a Twitter subscriber.

Since this is my beautiful, intelligent granddaughter, I am forced to admit that all who tweet are not necessarily twits. Douglas Coupland also believes some of the best writing in the English language today is being done on Twitter and in the one-star reviews on TripAdvisor. “They aren’t allowed to swear, so they have to be extremely inventive in their attack.” On the other hand, he goes on, “I mean my attention span is gone. If anyone tells me that theirs hasn’t, I just assume they are lying.”

(http://www.theguardian.com/books/2014/oct/19/douglas-coupland-hen-party-restaurant-reviewer-tripadvisor)

So, some good can come from sites like Twitter --- even as they are destroying the attention spans of some, they are providing an opportunity to practice writing under some heavily constrained circumstances (e.g. tweets are limited to 140 characters) not unlike the sonnet or haiku forms of poetry. Whether or not users should direct their efforts to more creative activities (such as writing this column), something more useful to society (like volunteering at a soup kitchen), or something that contributes to spiritual growth (like prayer, meditation or yoga) is another question.

Sunday, October 12, 2014

The Sorrows of Technology



Teachers often extol “the joy of learning” and, while they certainly have every right to do so, they almost never mention the other side of the coin: “the pain of learning”. My guess is that many folks probably experienced the pain more often than the joy, and most teachers the joy rather than the pain. I believe that the same analysis can be applied to Technology: while tech affords us much convenience and even joy, it can also be a source of great frustration and pain. I’m sure we all have our stories. Here is mine:

Recently my wife and I decided we needed a fax machine and, after a bit of research, purchased one at a Local Purveyor of Things Electronic. Turns out, in order to get a good price on the fax, you have to buy a whole package: Printer, Scanner, Copier, and Fax.

Before I begin this sad story of pain, frustration and woe, let me just mention that I had also bought a new Panasonic phone system with lots of bells and whistles, including a built-in answering machine. Somehow, during installation, the phone system decided that I had an unanswered voicemail at Charter, my phone service provider, and that I should enter my code to access it. Well, it’s an annoying message, but I think I can live with it because I know that I told Charter long ago to cancel that service so I could use the phone’s built-in answering machine.

What I did not foresee was that my phones would keep flashing day and night, helpfully reminding me to call my Charter voicemail. Three phones continually flashing is very nettling, so I decide to call Panasonic to help remedy the situation. After many frustrating minutes, I eventually reach a technical person who gives me the fix and it works --- until the next morning, when I discover all the phones busily flashing again. Well, I think, maybe there really is a message in my voicemail at Charter, so I give them a call only to find that my mailbox has been disabled per my previous instructions: I have no mail because I have no mailbox. I also learn during the course of the conversation that the best way to go is to disable my home phone’s answering machine and restore my voicemail at Charter --- and, oh, by the way, I will also need to apply for a “distinctive ring” through Billing, as a Tech Service rep can’t add “new” services on his own. A distinctive ring is a special ring for an incoming fax and will prevent the voicemail from intercepting it.

I tell Charter that I will think about their solution and will get back to them after I’ve tested the fax machine to make sure it works. So I call my son-in-law in Iowa and ask him to respond if he gets my fax. That works OK, but he recommends a new solution. He uses a website called efax.com that provides a fax service that is free for receiving only --- sending costs money. It even sounds simple to use: you go to their website to get a virtual fax phone number, which you give to anyone who wishes to send you a fax. The fax then goes to your account on their website, where you can read it, download it, modify and print it, and send a reply over your own fax machine. After many more hours of hunting down the free version of efax (which included a long online chat with representative Lindsey), I manage to set up the receive-only fax account.

This hybrid solution --- receiving faxes via the Internet and sending them via my fax machine over my phone line --- as cumbersome as it sounds, seems the best way for me to go. I don’t need a second phone line, and it only ties up the single line when I’m sending a fax at a time of my, not the caller’s, choosing. I rationalize it the same way I use my wireless car door key/lock: when I exit the car I use the rocker switch on the armrest to lock the car, but when I return I use the wireless key fob to open the door.

Sunday, August 10, 2014

Disruptive Technology


Just when I was getting sick and tired of hearing the word “disruption” associated with Internet technology, I stumbled across a June 23 New Yorker article, “The Disruption Machine,” by Jill Lepore. The gist of the article is a summary, analysis and criticism of Clayton M. Christensen’s book, “The Innovator’s Dilemma”, which makes the seemingly paradoxical claim that “doing the right thing is the wrong thing”. I also learned that the term “innovation” was not always associated with the idea of “progress” (which is a good thing). Innovation used to be associated with novelty, which made it seem somewhat frivolous. Even the father of our country, George Washington, was reputed to have said, “Beware of innovation in politics”. So it becomes necessary to more precisely define what we mean by innovation, and especially by the new buzzword: Disruptive Innovation.


From Wikipedia we find these definitions:


“Innovation is finding a better way of doing something. Innovation can be viewed as the application of better solutions that meet new requirements, unarticulated needs, or existing market needs. This is accomplished through more effective products, processes, services, technologies, or ideas that are readily available to markets, governments and society. The term innovation can be defined as something original and, as a consequence, new, that "breaks into" the market or society.


“A disruptive innovation is an innovation that helps create a new market and value network, and eventually disrupts an existing market and value network (over a few years or decades), displacing an earlier technology. The term is used in business and technology literature to describe innovations that improve a product or service in ways that the market does not expect, typically first by designing for a different set of consumers in a new market and later by lowering prices in the existing market.”


So what, then, is “disruptive technology”? This term is pretty much a synonym for “disruptive innovation” as described above; however, I would add that for a technology to be truly disruptive, not only must it adhere to that definition, it should sweep through society on a tidal wave of change.


For example, Air Conditioning changed society by massively increasing worker productivity which, in turn, propagated prosperity throughout all levels of society. Atlanta, Georgia was a one-horse town before A/C; even Washington, DC, as late as the 1950s, would dismiss all government workers when the temperature exceeded 100 degrees. (I remember lying awake, unable to sleep, one Halloween night in DC when the temp reached that mark.) Another common example is the Personal Computer. When the PC first appeared in the late 1970s, it was purchased only by hobbyists who could build them from kits, much as the early automobiles were aimed at those who could put them together and repair them on their own. And, of course, the PC allowed the middle-class consumer to connect to the Internet. Until that time, the Internet was a government-funded project using large mainframe computers, used exclusively by scientists to share their research. While that use continues, the Internet has expanded to provide infotainment (just like TV), business transactions such as shopping, banking and investing, and social networks like Twitter, Tumblr and Facebook --- as well as the planning and organizing of political revolutions worldwide!


The other important feature of disruptive innovation or technology is that there must also be a “disruptee” --- the technology or enterprise that has been disrupted. For example, the PC disruptees were the industries that produced only midsize and large mainframe computer systems (IBM almost went out of business during that period.)


Another example is Henry Ford’s Model T automobile, where the disruptee was not really the Horse and Carriage but the already existing automobiles that were too expensive for the average American. Beyond that, the Model T changed the transportation market (train travel declined, for example) as well as the social fabric: families were no longer so rooted to the place where they were born, suburbia was born along with all its attendant businesses --- not to mention the growth of nonrenewable fuel consumption leading to climate change.


More examples of the two players in the Disruptive Technology game (which includes Wikipedia itself) can be found at:

http://en.wikipedia.org/wiki/Disruptive_innovation#Practical_example_of_disruption

Sunday, July 13, 2014

The Right to be Forgotten?



By now I would hope that most everyone knows that all US citizens have a right to privacy as described by the Fourth Amendment to the Constitution: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” This has been interpreted by the Supreme Court as a guarantee that the government will respect your privacy. Even in the Age of the Internet, privacy (described by Justice Brandeis as “the right to be let alone”) is still perceived as an important part of our concept of Liberty and Freedom; in fact, in an article by Leo Mirani

(qz.com/228121/these-are-the-things-people-want-google-to-forget-about-them/)

the single most important item cited was “invasion of privacy”.

Wait, you may be thinking --- is that even possible? Can Google “forget” web links? The short answer is, “We shall see...” because, as Jeffrey Rosen writes in the Stanford Law Review, “At the end of January, the European Commissioner for Justice, Fundamental Rights, and Citizenship, Viviane Reding, announced the European Commission’s proposal to create a sweeping new privacy right—the “right to be forgotten.” The right, which has been hotly debated in Europe for the past few years, has finally been codified as part of a broad new proposed data protection regulation.” (www.stanfordlawreview.org/online/privacy-paradox/right-to-be-forgotten?em_x=22).

The implications of this new ruling are potentially stupendous, mainly because it directly affects two giants of the Internet: Google and Facebook. Rosen goes on, “Although Reding depicted the new right as a modest expansion of existing data privacy rights, in fact it represents the biggest threat to free speech on the Internet in the coming decade. The right to be forgotten could make Facebook and Google, for example, liable for up to two percent of their global income if they fail to remove photos that people post about themselves and later regret, even if the photos have been widely distributed already. Unless the right is defined more precisely when it is promulgated over the next year or so, it could precipitate a dramatic clash between European and American conceptions of the proper balance between privacy and free speech, leading to a far less open Internet.”

As Reding indicates, from a European perspective this is no big deal; the principle of le droit à l’oubli, or “the right to be forgotten”, has been part of French law since 2010 (http://en.wikipedia.org/wiki/Right_to_be_forgotten). It is commonly used to expunge the publication of a criminal’s trial and conviction after he or she has served time. The underlying assumption is that prison is a rehabilitation process and it is cruel to continue punishing someone who has paid their debt to society and just wants to get on with life. Back here in the US, however, there is this pesky First Amendment, which not only guarantees citizens the right of free expression but is usually deemed the most important of our rights. This means that if I want to point out that my political opponent spent some time in the slammer when he was a juvenile, I am completely free to do so over the multitude of media that are available to me today. Why? Because the First Amendment allows me to do so.

But let us suppose that Google and Facebook accede to the European demands in order to stay in business within the EU --- what happens when a story or video has already gone viral? How much time, effort and treasure must Google or Facebook expend chasing down and removing these millions of links? And how much time reinstating false positives --- removed links that were actually OK --- as Google recently did with the Guardian? And what about “revenge porn”, where the request for removal comes not from the perpetrator of a crime but from the victim? But all these questions are just one small piece of the issues involving the complex relationships between privacy, free speech and anonymity. To learn more, just enter “Yelp lawsuits” or “the right to be forgotten + revenge porn” into your favorite search engine and prepare to be amazed.

Sunday, June 8, 2014

A Recipe for Disaster? Part 2


Last month I expressed my concerns with the current trend of recruiting young folks, especially women, to be trained to write programs so that they would have a useful trade with which to enter the World of Work. I mentioned that while I have nothing against teaching anybody the art and craft of programming (in fact, I believe it fosters creativity, confidence and perseverance), I do worry that these short training courses have the potential to do more harm than good. Why would I think such a thing?

Because, as the old saying goes, “A little learning is a dangerous thing.” Writing a single short program for a simple application is a creative, rewarding, and relatively straightforward process. But the creation of real-world software is no longer a simple proposition; it requires a well-oiled team of individuals writing code that works well together. While it is possible to teach team skills to novices, it is very, very difficult (near impossible, I think) to teach one how to write programs that are correct, efficient and reliable.

To be correct, the program must represent an accurate solution to a problem, and this is the most difficult and important part of the problem-solving process. In many cases we can end up with a solution to an entirely different problem than the one we wished to solve, or with other unintended side effects. For example, suppose the problem is to keep food from spoiling; one solution is to design and implement a refrigerator. I begin by identifying a list of requirements or specifications for the fridge, e.g. “The finished product should be capable of cooling food down from temperature X to, say, temperature X - 32 in Y minutes”, plus a host of others concerning power, efficiency, size, weight, etc. --- but no matter how hard I try, I cannot write a complete set of specs. (If I could, I would have to include unlikely ones such as: “When the user opens the door, the food should not fly out of the fridge.”) The best I can hope for is that the final implementation adheres to my incomplete set of specifications.
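
To make this concrete, here is a minimal sketch --- entirely my own illustration, with made-up names and numbers --- of specifications expressed as executable checks, and of why no finite list of them can ever be complete:

    # Hypothetical specs-as-tests for the fridge example above.
    # Every name and number here is invented for illustration only.

    def cool_time_minutes(start_temp, target_temp):
        """Toy model of the fridge's cooling behavior."""
        degrees_to_drop = start_temp - target_temp
        return degrees_to_drop / 2.0  # assume 2 degrees of cooling per minute

    # Spec: cooling from X down to X - 32 must take no more than Y minutes.
    X, Y = 70, 20
    assert cool_time_minutes(X, X - 32) <= Y, "cooling spec violated"

    # But no finite list of such checks is complete: nothing above rules out
    # "the food flies out when the door opens". The tests pin down only the
    # specs we thought to write.
    print("all written specs pass --- which is not the same as correct")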

To be efficient, a programmer must be able to break a large, complex problem down into smaller, simpler pieces, so that by solving each part, the whole problem gets solved. (I can’t eat an apple in one bite, but many small bites will do the job.) But that is not enough for a well-designed computer program; for that we must add the requirement that each of the parts be modular.

By “modular” we mean that an executing module does not influence the execution of any of the hundreds (possibly thousands) of other modules comprising the total system. This helps to ensure that if we have a group of modules working together to perform a task, we don’t have to test their interactions, because truly there will be none. (For example, you expect your car to be a modular system --- when you change the oil, you don’t expect all four tires to go flat.) There are many advantages to modular programs: different members of the programming team can work concurrently on separate modules, resulting in a finished product sooner. Also, modules are easy to test, debug and modify (without unintended side effects).
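
Here is a minimal sketch, with hypothetical names of my own, of what modularity looks like in a few lines of code: each function depends only on its inputs and shares no state with the others, so each can be written and tested in isolation:

    # Three tiny modules; none can break another by side effect.

    def parse_record(line):
        """Turn one 'name,score' line into a (name, score) pair."""
        name, score = line.split(",")
        return name.strip(), int(score)

    def average(numbers):
        """Average a list of numbers; knows nothing about records."""
        return sum(numbers) / len(numbers) if numbers else 0.0

    def class_average(lines):
        """Compose the two modules; changing one cannot break the other."""
        records = [parse_record(line) for line in lines]
        return average([score for _, score in records])

    # Each piece is testable on its own, with no setup of shared state:
    assert parse_record("Ada, 90") == ("Ada", 90)
    assert average([80, 100]) == 90.0
    print(class_average(["Ada, 90", "Alan, 70"]))  # prints 80.0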

To be reliable, a program must comprise modules that execute correctly, singly and in combination, in all of the possible environments they run in. With most software, this environment is provided by you, the user. Again, there is only a finite number of ways users can respond, but that finite number is usually so large that it might as well be infinite. (For those with small children: how many times has your child completely frozen your computer as a result of randomly pressing keys?)
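
Again a minimal sketch of my own devising: a routine that survives the keyboard-mashing child by refusing to trust that its input arrives in the form it expects:

    # Defensive input handling: every path through this function is safe.

    def read_age(raw):
        """Return a plausible age from raw keyboard input, or None."""
        try:
            age = int(raw.strip())
        except ValueError:
            return None          # key-mashing ends up here, not in a crash
        return age if 0 < age < 130 else None

    # A tiny sample of the effectively infinite input space:
    for raw in ["42", "  7 ", "forty-two", "", "-3", "9999999999"]:
        print(repr(raw), "->", read_age(raw))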

All of the above is meant to show how difficult it is to write software. All comprehensive Computer Science programs require one or more high-level courses in Software Engineering where these issues are addressed. My concern is that, in our haste to provide well-paying jobs for young people, we may create many more problems than we solve. And while job creation is certainly a good thing, we should keep in mind that the purpose of education is not just to make a living but to live a life.

Thursday, May 8, 2014

A Recipe for Disaster? Part 1


Of late, there has been much in the media about teaching young people, especially women, to code --- or, in less arcane language, to program, i.e., write software for computers. For example:

code.org

http://www.ted.com/talks/mitch_resnick_let_s_teach_kids_to_code

www.wbez.org/.../tackling-tech-gender-gap-teaching-girls-code-1.



The primary justification for this enterprise is, unsurprisingly, economic. The logic is simple and specious: computer technology is ubiquitous, so there are lots of jobs related to computers; therefore we can help solve the current jobs shortage by scooping up our youth and training them to become programmers. I believe this approach has a strong potential for producing snafus even worse than the initial rollout of Obamacare. Now please don’t think that I believe that teaching kids to program is a complete waste of time. I strongly believe that programming is a rewarding and empowering activity that combines the logical thinking of the mathematician, the creativity of the poet, the pragmatism of the engineer and the patient stubbornness of the detective --- so it is definitely worth studying. But I do worry that we’re pushing this bandwagon for the wrong reasons.


When I first came to teach Computer Science at SUNY in 1978, I had the same idea that it would be useful to teach young people how to program, but for entirely different reasons. I thought that learning to program would be an excellent technique for teaching problem-solving methods to college freshmen. I believed that programming is an empowering, creative activity that, like any good creative activity, is immensely absorbing and satisfying.


Things went swimmingly for several years, resulting in several papers that I delivered at regional, national and international conferences. But I was starting to have doubts. The question that kept nagging at me was: “Does programming develop general problem-solving skills, or is it the other way round --- do students who already have good problem-solving skills tend to be good at, and hence drawn to, programming?” I could easily see that the good programmers in my classes also had good problem-solving skills, but I wasn’t sure which was the cause and which the effect. Then, by chance, I attended a talk by Edsger W. Dijkstra at Union College and posed my nagging question to him. I, like most of my colleagues, considered Dijkstra one of the giants of Computer Science. Amongst his many accomplishments, he spearheaded the development of Structured Programming, a methodology that would allow a programmer to produce more reliable programs. He was a great believer in the idea that one should develop a logical plan for an algorithm (the essence of a program) before sitting down and doing the coding. In any case, his answer to my question was upsetting: based on his own experience, he believed that good programmers came to programming as already-formed, good problem solvers. If correct, the very foundation of my research for the past several years was fatally flawed and I had perhaps done a great disservice to many of my students.
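
To give a flavor of what “plan first, code second” means in practice, here is a minimal sketch of my own (not an example of Dijkstra’s): the plan is a one-line property the loop must preserve, and the code is then written to match it:

    # Plan: find the largest r with r*r <= n by growing r only while the
    # property (r+1)*(r+1) <= n still holds. The code follows the plan.

    def int_sqrt(n):
        """Largest integer r such that r * r <= n, for n >= 0."""
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
        return r

    assert int_sqrt(0) == 0
    assert int_sqrt(24) == 4
    assert int_sqrt(25) == 5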


On further reflection, however, I realized that there was no hard evidence in either direction. The main problem is getting everyone to agree on precisely what they mean by “problem solving”, and even if there is some overlap across different definitions, there still remains the question: is it even possible to teach general problem solving? A strong case can be made that while the problem-solving skills of an engineer and a computer programmer seem very similar, it is much harder to show the correspondence between a poet’s and an engineer’s problem-solving skills. And, as a formally trained mathematician, it seems intuitively obvious to me that there is a great deal of overlap between a mathematician’s, a physicist’s and a poet’s problem-solving skills. At the same time, I noticed that every single one of the students who dropped the course gave me not only the reason for dropping it but hastened to add, “...but it really improved my appreciation for just how difficult it is to write software.” To which I would add, “Yes! --- and especially large software projects like operating systems and even spreadsheets.” This is still true today: no matter what language you are coding in, it is extremely difficult to write clear, coherent, reliable and economical code. More next month.

Sunday, April 13, 2014

Privacy Concerns make a Comeback


From 1800 to the 1920s, the word “privacy” appeared in publications at a fairly constant but low frequency. Then the rate of citation increased somewhat between 1920 and 1960, followed by a very steep rise (except for a mild dip in the early 1980s) through the year 2000.

I gleaned this information using the Google Ngram viewer at:

https://books.google.com/ngrams/graph?content=privacy&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cprivacy%3B%2Cc0

The way it works is explained nicely at: http://en.wikipedia.org/wiki/Google_Ngram_Viewer

Basically, Google searches its huge book database for any word or phrase you enter and creates a graph displaying the relative frequency of that word or phrase over the time period you choose. According to Wikipedia:

“The word-search database was created by Google Labs, based originally on 5.2 million books, published between 1500 and 2008, containing 500 billion words in American English, British English, French, German, Spanish, Russian, Hebrew, and Chinese.”

For example, language researchers have used Ngram to study trends in “mood” words (like exhilarated/apathetic, cheerful/depressed, etc.) and have evidence that American English has become more emotional in the last 50 years. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0059030

If you’d like to play with Ngram, you can also examine how the usage of the words “kindergarten” and “nursery school” was replaced by “child care” over the last half-century, as well as many other examples, at: https://books.google.com/ngrams/info
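
For the curious, the viewer can also be queried programmatically. The sketch below uses an unofficial JSON endpoint that the graphing page itself appears to call; it is undocumented, so both the URL and the shape of the response are assumptions on my part and may change without notice:

    # Fetch the raw 'privacy' time series (1800-2000) from Ngram's
    # unofficial, undocumented JSON endpoint --- use at your own risk.
    import json
    import urllib.request

    url = ("https://books.google.com/ngrams/json"
           "?content=privacy&year_start=1800&year_end=2000"
           "&corpus=15&smoothing=3")
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    # Assumed response shape: a list of objects with a "timeseries" field
    # holding one relative frequency per year.
    series = data[0]["timeseries"]
    for i, freq in enumerate(series):
        year = 1800 + i
        if year % 20 == 0:  # print a sample every two decades
            print(year, "%.3e" % freq)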

So, other than the Ngram data, what evidence do I have to make the claim “Privacy makes a Comeback”? Unfortunately, the other evidence is weak; it’s anecdotal, but since it’s based on personal experience, it’s very convincing --- to me. Based on the reactions of students at SUNY Plattsburgh from the 1990s to pretty much the present, I have observed the issue of privacy wax and wane, but the underlying trend has been a growing unconcern about privacy amongst our youth. They are neither happy nor unhappy about the assault on privacy from both the government and the corpocracy; they are merely apathetic.

But, to balance youth’s apathy, I think there’s a growing concern amongst the next generation --- the Millennials. I see more and more articles and books written by them that decry the loss of privacy. A specific example is the book by Julia Angwin, “Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance”. (Now there’s a title that almost eliminates the need to read the book!). A short version can be found in the article in the Opinion Pages of the New York Times March 3, 2014 edition, “Has Privacy Become a Luxury Good?” by Angwin. (http://www.nytimes.com/2014/03/04/opinion/has-privacy-become-a-luxury-good.html?hpw&rref=opinion&_r=1)

She begins the essay with a nice hook: “Last year, I spent more than $2,200 and countless hours trying to protect my privacy.” Angwin goes on to describe how corporations and governments are invading her privacy as well as yours and mine: Google tailors its ads to the content of the text in your emails. British intelligence collected Yahoo video webcam chats of millions of users not even suspected of any illegal activities --- unsurprisingly, many were sexually explicit. Facebook allows/sells marketers access to your status updates unless you take steps to change the default from ‘Public’ to, say, ‘Friends’. Even seemingly innocuous news websites auction off your personal data before the page loads... the better to target their ads to you, my dear. And, if you’re still not convinced, just type “creepy or useful” into your favorite search engine.

All of this is to say that it does appear that privacy is being taken more seriously by the general public and, no surprise, there is a level of secrecy practiced by those who would exploit our privacy. What’s the difference between “secrecy” and “privacy”? The best example I’ve run across is this: “It’s no secret what we do when we go into a bathroom, but that doesn’t mean we don’t want privacy.”
