Observing Thinking


Sunday, December 21, 2014

High-Tech Performance

James Surowiecki writes the Business, Finance and Economics columns for the New Yorker magazine, so I was surprised to see an article in the Nov 10 issue entitled, “Better All The Time --- How the ‘performance revolution’ came to athletes --- and beyond.” What could an economist have to say about sports? Intrigued, I read on.

Like all good storytellers, rather than outline his thesis at the outset, Surowiecki begins with a real-life story that illustrates it. He tells the tale of Kermit Washington, a power forward playing his third season in the NBA. Washington’s performance had been “less than mediocre” in 1976, and while almost every NBA player of the era would never admit he “still had something to learn,” Washington enlisted the help of Pete Newell, an assistant coach. Thanks to Newell and the grit of his pupil, Washington improved to All-Star status. This did not go unnoticed by other players in the league, who followed suit, finally realizing that it was a fallacy to believe that “what you are is what you are”; there are always ways to improve your game.

This is where the technology comes in. The development of biometric sensors working with computer software to capture and analyze data (e.g. heart rate, muscle activity, oxygen levels) while an athlete trains has fostered a revolution in all sports. The computer program analyzing your heart rate as you accelerate, twist, turn and decelerate during your training sessions can recommend a training regimen tailored to maximize your performance. Even sleep patterns can be factored into the training. We can now answer questions for an individual athlete like, “Does training in extremely hot and humid weather increase or decrease performance on game day?”

Of course the economist in Surowiecki speculates that this trend toward high-tech training “reflects the fact that the monetary rewards for athletic success have become immense. ... It has become economically rational to invest a lot in player training.” While players of the past held jobs like insurance salesman during the off season, now they can afford to spend that time training. He goes on to describe examples of how training (or the lack thereof) has led to success (or failure) in such disparate fields as baseball, football, chess, classical music, the manufacture of automobiles, medicine, and education. Guess which field has shown the least improvement in the last three decades? Hint: it rhymes with “vacation”.

While it’s pretty obvious how performance can be boosted through high-tech training in the two sports mentioned above, it’s interesting to see how that applies to some of the other categories. Surowiecki gives evidence for the improved performance of modern-day chess players: “In the 1970s, there were only two chess players who had Elo ratings (a measure of skill level) higher than 2700. These days there are more than 30 such players.” (Visit Wikipedia to see what Elo ratings are all about.) He makes a similar claim for current classical musicians: “Pieces that were once considered too difficult for any but the very best musicians are now routinely played by conservatory students.” These gains he attributes to more effective training programs.
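For the curious, the arithmetic behind an Elo rating is simple enough to sketch in a few lines. This is an illustrative toy version only; the function names and the K-factor of 10 are my choices, not an official FIDE calculation:

```python
def elo_expected(rating_a, rating_b):
    """Expected score (between 0 and 1) for player A against player B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating, expected, actual, k=10):
    """New rating after one game; `actual` is 1 for a win, 0.5 for a draw, 0 for a loss."""
    return rating + k * (actual - expected)

# A 2700-rated player is expected to score about 0.85 against a 2400 player,
# so beating one gains only a point or two:
e = elo_expected(2700, 2400)
print(round(e, 2))                         # 0.85
print(round(elo_update(2700, e, 1.0), 1))  # 2701.5
```

Since wins over much weaker opponents barely move a rating, climbing past 2700 requires consistently strong results against other top players, which makes the growth in 2700+ players a meaningful signal of improvement.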

To me, the most interesting enterprise investigated was Education (rhymes with “vacation” --- sort of). Surowiecki claims that “Schools are, on the whole, little better than they were three decades ago --- test scores have barely budged...” but he is quick to point out that, on average, poor kids get the lowest test scores across the board --- and “the US has more poor kids relative to other developed countries.” But that doesn’t completely explain our poor performance relative to countries like Canada, Japan and Finland; the key difference is that these countries spend a great deal of time and money training their teachers before they enter the classroom, and this training continues throughout their careers.

Surowiecki ends the article with a succinct and potent observation: “...the way you improve the way you perform is to improve the way you train. High performance isn’t, ultimately, about running faster, throwing harder, or leaping farther. It’s about something much simpler: getting better at getting better.” To which I would add: The technology for collecting and analyzing the relevant data is a crucial piece of the solution.

Sunday, November 2, 2014


Back in the days when I was teaching Computer Science (which, if we were honest, really should be called “computer studies” --- but that’s another column), the method I used to deal with students who were talking (usually in the last row) was this: I would pause, glancing at the offenders, and say, “You know, I have this problem: I’m easily distracted. When that happens, I tend to lose my train of thought and it’s a bit of a struggle for me to get back on track. This usually means that I unconsciously make a negative association with the source of the distraction. Now, since this negative feeling is embedded in my unconscious, when it comes time to assign final grades for this course and I’m looking at a student whose performance is a tossup between a B+ and an A-, and this student is associated with the distraction, then of course I’m going to be more likely to go with the B+ on a gut feeling without ever realizing the role of my unconscious in this decision. I just thought I’d like everyone to understand my problem so you could all factor that into your behaviour in class. Now, where was I....”

This strategy worked very well, and in fact I befriended a few students who appreciated not being called out in front of their classmates. I am telling you this because I have noticed of late much ado-doo on the topic of “distraction” flowing across the Internet. Examples range from the frivolous: Hugh Grant said in an interview that the Internet has completely destroyed his attention span. “I can barely get to the end of a tweet without getting bored now.” (http://www.theguardian.com/film/video/2014/oct/09/hugh-grant-the-rewrite-video-interview)

to the scholarly: a Pew Research report entitled “The Six Types of Twitter Conversations,” which describes six structures of communication networks --- a taxonomy for classifying communication networks such as the Internet.


Twitter, to my mind, symbolizes the quintessential distraction/addiction app. As I have mentioned in a previous column, when I asked my (then) 13-year-old granddaughter why she used Facebook but not Twitter, she said that she only used Facebook a few times a day but had noticed that her peers were on Twitter almost continuously, and she couldn’t afford that much time for what seemed to her a frivolous activity. Unfortunately, two years later I find that she has a Twitter account just like me. I can justify my account as an “astute observer of technology,” but my granddaughter has different reasons. She told me that she got a Twitter account because almost all of her friends have one and she found herself out of the “loop” regarding stories and info that her friends were sharing. She finds it easier to keep up with the backstory as a Twitter subscriber.

Since this is my beautiful, intelligent granddaughter, I am forced to admit that all who tweet are not necessarily twits. Douglas Coupland also believes some of the best writing in the English language today is being done on Twitter and in the one-star reviews on TripAdvisor. “They aren’t allowed to swear, so they have to be extremely inventive in their attack.” On the other hand, he goes on, “I mean my attention span is gone. If anyone tells me that theirs hasn’t, I just assume they are lying.”


So, some good can come from sites like Twitter --- even as they are destroying the attention spans of some, they are providing an opportunity to practice writing under heavily constrained circumstances (e.g. tweets are limited to 140 characters), not unlike the sonnet or haiku forms of poetry. Whether users should instead direct their efforts to more creative activities (such as writing this column), something more useful to society (like volunteering at a soup kitchen), or something that contributes to spiritual growth (like prayer, meditation or yoga) is another question.

Sunday, October 12, 2014

The Sorrows of Technology

Teachers often extol “the joy of learning” and, while they certainly have every right to do so, they almost never mention the other side of the coin: “the pain of learning”. My guess is that many folks probably experienced the pain more often than the joy, and most teachers the joy rather than the pain. I believe the same analysis can be applied to technology: while tech affords us much convenience and even joy, it can also be a source of great frustration and pain. I’m sure we all have our stories. Here is mine:

Recently my wife and I decided we needed a fax machine and, after a bit of research, purchased one at a Local Purveyor of Things Electronic. Turns out, in order to get a good price on the fax, you have to buy a whole package: Printer, Scanner, Copier, and Fax.

Before I begin this sad story of pain, frustration and woe, let me just mention that I had also bought a new Panasonic phone system that had lots of bells and whistles, including a built-in answering machine. Somehow, during installation, the phone system decided that I had an unanswered voicemail at Charter, my phone service provider, and that I should enter my code to access it. Well, it’s an annoying message, but I think I can live with it because I know that I told Charter long ago to cancel that service so I could use the phone’s built-in answering machine.

What I did not foresee was that my phones would keep flashing day and night, helpfully reminding me to call my Charter voicemail. Three phones continually flashing is very nettling, so I decide to call Panasonic to help remedy the situation. After many frustrating minutes, I eventually reach a technical person who gives me the fix and it works --- until the next morning, when I discover all the phones busily flashing again. Well, I think, maybe there really is a message in my voicemail at Charter, so I give them a call only to find that my mailbox has been disabled per my previous instructions: I have no mail because I have no mailbox. I also learn during the course of the conversation that the best way to go is to disable my home phone’s answering machine and restore my voicemail at Charter and, oh, by the way, I will also need to apply for a “distinctive ring” through Billing, as a Tech Service guy can’t add “new” services all by himself. A distinctive ring is a special ring for an incoming fax and will prevent the voicemail from intercepting it.

I tell Charter that I will think about their solution and will get back to them after I’ve tested the fax machine to make sure it works. So I call my son-in-law in Iowa and ask him to respond if he gets my fax. That works OK, but he recommends a new solution. He uses a website called efax.com that provides a free fax service for receiving only --- but sending costs. It even sounds simple to use: you go to their website to get a fake fax phone number, which you give to anyone who wishes to send you a fax. The fax then goes to your account on their website, where you can read it, download it, modify and print it, and send a reply over your own fax machine. After many more hours of hunting down the free version of efax (which includes a long online chat with representative Lindsey), I manage to set up the receive-only fax account.

This hybrid solution of receiving faxes via the Internet and sending them via my fax machine over my phone lines, as cumbersome as this bi-functional system sounds, seems the best way for me to go. I don’t need a second phone line and it only ties up the single line when I’m sending a fax at a time of my, not the caller’s, choosing. I rationalize it in the same way I use my wireless car door key/lock: When I exit the car I use the rocker switch on the armrest to lock the car but when I return I use the wireless on the key to open the door.

Sunday, August 10, 2014

Disruptive Technology

Just when I was getting sick and tired of hearing the word “disruption” associated with internet technology, I stumbled across a June 23 New Yorker article, “The Disruption Machine,” by Jill Lepore. The gist of the article is a summary, analysis and criticism of Clayton M. Christensen’s book, “The Innovator’s Dilemma,” which makes the seemingly paradoxical claim that “doing the right thing is the wrong thing”. I also learned that the term “innovation” was not always associated with the idea of “progress” (which is a good thing). Innovation used to be associated with novelty, which made it seem somewhat frivolous. Even the father of our country, George Washington, was reputed to have said, “Beware of innovation in politics”. So it becomes necessary to define more precisely what we mean by innovation and especially the new buzzword: Disruptive Innovation.

From Wikipedia we find these definitions:

“Innovation is finding a better way of doing something. Innovation can be viewed as the application of better solutions that meet new requirements, unarticulated needs, or existing market needs. This is accomplished through more effective products, processes, services, technologies, or ideas that are readily available to markets, governments and society. The term innovation can be defined as something original and, as a consequence, new, that "breaks into" the market or society.

A disruptive innovation is an innovation that helps create a new market and value network, and eventually disrupts an existing market and value network (over a few years or decades), displacing an earlier technology. The term is used in business and technology literature to describe innovations that improve a product or service in ways that the market does not expect, typically first by designing for a different set of consumers in a new market and later by lowering prices in the existing market.”

So what then is “disruptive technology”? This term is pretty much a synonym for “disruptive innovation” as described above; however, I would add that for a technology to be truly disruptive, not only must it adhere to that definition, it should sweep through society on a tidal wave of change.

For example, Air Conditioning changed society by massively increasing worker productivity which, in turn, propagated prosperity throughout all levels of society. Atlanta, Georgia was a one-horse town before A/C; even Washington DC as late as the 1950s would dismiss all government workers when the temperature exceeded 100 degrees. (I remember lying awake, unable to sleep, one Halloween night in DC when the temp reached that mark.) Another common example is the Personal Computer. When the PC first appeared in the late 1970s, it was purchased only by hobbyists who could build them from kits, much like the early automobiles were aimed at those who could put them together and repair them on their own. And, of course, the PC allowed the middle-class consumer to connect to the Internet. Until that time, the Internet was a government-funded project using large mainframe computers used exclusively by scientists to share their research. While that’s still true, the Internet has expanded to provide infotainment (just like TV), business transactions such as shopping, banking and investing, and social networks like Twitter, Tumblr and Facebook --- as well as planning and organizing political revolutions worldwide!

The other important feature of disruptive innovation or technology is that there must also be a “disruptee” --- the technology or enterprise that has been disrupted. For example, the PC disruptees were the industries that produced only midsize and large mainframe computer systems (IBM almost went out of business during that period.)

Another example is Henry Ford’s Model T automobile, where the disruptee was not really the horse and carriage but the already existing automobiles that were too expensive for the average American. Add to that the fact that it changed the transportation market (e.g. train travel declined) as well as the social fabric (families were no longer so rooted to the place where they were born), spawned suburbia and all its attendant businesses --- not to mention the growth of nonrenewable fuels leading to climate change.

More examples of the two players in the Disruptive Technology game (which includes Wikipedia itself) can be found at:


Sunday, July 13, 2014

The Right to be Forgotten?

By now I would hope that most everyone knows that all US citizens have a right to privacy as described by the Fourth Amendment to the Constitution: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” This has been interpreted by the Supreme Court as a guarantee that the government will respect your privacy. Even in the Age of the Internet, privacy (described by Justice Brandeis as “the right to be let alone”) is still perceived as an important part of our concept of Liberty and Freedom; in fact, in an article by Leo Mirani


the single most important item cited was, “invasion of privacy”.

Wait, you may be thinking --- is that even possible? Can Google “forget” web links? The short answer is, “We shall see...” because, as Jeffrey Rosen writes in the Stanford Law Review, “At the end of January, the European Commissioner for Justice, Fundamental Rights, and Citizenship, Viviane Reding, announced the European Commission’s proposal to create a sweeping new privacy right—the “right to be forgotten.” The right, which has been hotly debated in Europe for the past few years, has finally been codified as part of a broad new proposed data protection regulation.” (www.stanfordlawreview.org/online/privacy-paradox/right-to-be-forgotten?em_x=22).

The implications of this new ruling are potentially stupendous, mainly because it directly affects two giants of the Internet: Google and Facebook. Rosen goes on, “Although Reding depicted the new right as a modest expansion of existing data privacy rights, in fact it represents the biggest threat to free speech on the Internet in the coming decade. The right to be forgotten could make Facebook and Google, for example, liable for up to two percent of their global income if they fail to remove photos that people post about themselves and later regret, even if the photos have been widely distributed already. Unless the right is defined more precisely when it is promulgated over the next year or so, it could precipitate a dramatic clash between European and American conceptions of the proper balance between privacy and free speech, leading to a far less open Internet.”

As Reding indicates, from a European perspective this is no big deal; the principle of le droit à l’oubli or “the right to be forgotten” has been part of French law since 2010 (http://en.wikipedia.org/wiki/Right_to_be_forgotten). It is commonly used to expunge the publication of a criminal’s trial and conviction after he or she has served their time. The underlying assumption is that prison is a rehabilitation process and it is cruel to continue punishing someone who has paid their debt to society and just wants to get on with life. Back here in the US, however, there is this pesky First Amendment which not only guarantees citizens the rights of free expression, it is usually deemed the most important of our rights. This means that if I want to point out that my political opponent spent some time in the slammer when he was a juvenile, I am completely free to do so over the multitude of media that are available to me today. Why? Because the First Amendment allows me to do so.

But let us suppose that Google and Facebook follow the European demands in order to stay in business within the EU communities --- what happens when a story or video has already gone viral? How much time, effort and treasure must Google or Facebook expend chasing down and removing these millions of links? And how much time reinstating false positives --- removed links that were actually OK --- as Google did with the Guardian recently? And what about “revenge porn”, where the request for removal comes not from the perpetrator of a crime but the victim? But all these questions are just one small piece of the issues involving the complex relationships between privacy, free speech and anonymity. To learn more, just enter “Yelp lawsuits” or “the right to be forgotten + revenge porn” into your favorite search engine and prepare to be amazed.

Sunday, June 8, 2014

A Recipe for Disaster? Part 2

Last month I expressed my concerns with the current trend of recruiting young folks, especially women, to be trained to write programs so that they would have a useful trade with which to enter the World of Work. I mentioned that while I have nothing against teaching anybody the art and craft of programming (in fact, I believe it fosters creativity, confidence and perseverance), I do worry that these short training courses have the potential to do more harm than good. Why would I think such a thing?

Because, as the old saying goes, “A little learning can be a dangerous thing.” Writing a single short program for a simple application is a creative, rewarding, and relatively straightforward process. But the creation of real-world software is no longer a simple proposition; it requires a well-oiled team of individuals writing code that works well together. While it is possible to teach team skills to novices, it is very, very difficult (near impossible, I think) to teach one to write programs that are correct, efficient and reliable.

To be correct, the program must represent an accurate solution to a problem, and this is the most difficult and important part of the problem-solving process. In many cases we can end up with a solution to an entirely different problem than the one we wished to solve, or with other unintended side effects. For example, suppose the problem is to keep food from spoiling; then one solution is to design and implement a refrigerator. I begin by identifying a list of requirements or specifications for the fridge, e.g. “The finished product should be capable of cooling food down from temperature X to, say, temperature X − 32 in Y minutes,” and a host of others concerning power, efficiency, size, weight, etc. But no matter how hard I try, I cannot write a complete set of specs (if I could, I would have to include unlikely ones such as: “When the user opens the door, the food should not fly out of the fridge”). The best I can hope for is that the final implementation adheres to my set of incomplete specifications.

To be efficient, a programmer must be able to break a large complex problem down into smaller simpler pieces, and thus by solving each part, the whole problem gets solved. (I can’t eat an apple in one bite but many small bites will do the job.) But that is not enough for a well-designed computer program; for that we must add the requirement that each of the parts be modular.

By “modular” we mean that an executing module does not influence the execution of any of the hundreds (possibly thousands) of other modules comprising the total system. This helps to ensure that if we have a group of modules working together to perform a task, we don’t have to test their interactions because truly there will be none. (For example, you expect your car to be a modular system --- when you change the oil, you don’t anticipate all four tires going flat.) There are many advantages to modular programs: different members of the programming team can work concurrently on separate modules, resulting in a finished product sooner. Also, modules are easy to test, debug and modify (without unintended side effects).
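As a toy sketch of what modularity looks like in code (the task and names here are mine, purely for illustration): each function below depends only on its own inputs and shares no state with the others, so each could be written, tested and modified by a different team member without affecting the rest:

```python
def parse_record(line):
    """Turn 'name,score' text into a (name, integer score) pair."""
    name, score = line.split(",")
    return name.strip(), int(score)

def grade(score):
    """Map a numeric score to a letter grade."""
    return "A" if score >= 90 else "B" if score >= 80 else "C"

def report(line):
    """Compose the two modules; neither one knows the other exists."""
    name, score = parse_record(line)
    return f"{name}: {grade(score)}"

print(report("Ada, 91"))  # Ada: A
```

Because `grade` never touches the parsing and `parse_record` never touches the grading, changing one (say, switching to a plus/minus grading scale) cannot break the other --- the oil change that leaves the tires inflated.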

To be reliable, a program must comprise modules that execute correctly, both singly and in combination, in all of the possible environments they run in. With most software, this environment is provided by you, the user. Again, there is only a finite number of ways users can respond, but that finite number is usually so large that it might as well be infinite. (For those with small children: how many times has your child completely frozen your computer by randomly pressing keys?)
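A small defensive-programming sketch (my own toy example, not from the column) of what coping with that effectively infinite space of user input looks like in practice: rather than trusting the input, the code anticipates the unexpected:

```python
def safe_int(text, default=0):
    """Return int(text), or `default` when the input is something unexpected."""
    try:
        return int(str(text).strip())
    except ValueError:
        return default

print(safe_int("42"))          # 42
print(safe_int("forty-two"))   # 0
print(safe_int(None))          # 0
```

Every place a program accepts outside input needs this kind of guard, which is one reason real-world software is so much larger and harder to write than a classroom exercise.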

All of the above explanation is meant to show how difficult it is to write software. All comprehensive Computer Science Programs require one or more high-level courses in Software Engineering where the above issues are addressed. My concern is that, in the haste to provide well-paying jobs for young people, we may create many more problems than we can solve. And while job creation is certainly a good thing, we should keep in mind that the purpose of education is not just to make a living but to live a life.

Thursday, May 8, 2014

A Recipe for Disaster? Part 1

Of late, there has been much in the media about teaching young people, especially women, to code or, in less arcane language: to program or write software for computers, for example:

code.org http://www.ted.com/talks/mitch_resnick_let_s_teach_kids_to_code


The primary justification for this enterprise is, unsurprisingly, economic. The logic is simple and specious: computer technology is ubiquitous so there are lots of jobs related to computers, therefore we can help solve the current lack of jobs problem by scooping up our youth and training them to become programmers. I believe this approach has a strong potential for producing snafus even worse than the initial rollout of Obamacare. Now please don’t think that I believe that teaching kids to program is a complete waste of time. I strongly believe that programming is a rewarding and empowering activity that combines the logical thinking of the mathematician, the creativity of the poet, the pragmatism of the engineer and the patient stubbornness of the detective --- so it is definitely worth studying. But I do worry that we’re pushing this bandwagon for the wrong reasons.

When I first came to teach Computer Science at SUNY in 1978, I had the same idea that it would be useful to teach young people how to program, but for entirely different reasons. I thought that learning to program would be an excellent technique for teaching problem-solving methods to college freshmen. I believed that programming is an empowering, creative activity that, like any good creative activity, is immensely absorbing and satisfying.

Things went swimmingly for several years, resulting in several papers that I delivered at regional, national and international conferences. But I was starting to have doubts. The question that kept nagging at me was: “Does programming develop general problem-solving skills, or is it the other way round --- do students who already have good problem-solving skills tend to be good at, and hence drawn to, programming?” I could easily see that the good programmers in my classes also had good problem-solving skills, but I wasn’t sure which one was the cause and which was the effect. Then, by chance, I attended a talk by Edsger W. Dijkstra at Union College and posed my nagging question to him. I, like most of my colleagues, considered Dijkstra one of the giants of Computer Science. Amongst his many accomplishments, he spearheaded the development of Structured Programming, a methodology that would allow a programmer to produce more reliable programs. He was a great believer in the idea that one should develop a logical plan for an algorithm (the essence of a program) before sitting down and doing the coding. In any case, his answer to my question was upsetting; based on his own experience, he believed that good programmers came to programming as already-formed, good problem solvers. If correct, the very foundation of my research for the past several years was fatally flawed and I had perhaps done a great disservice to many of my students.

On further reflection, however, I realized that there was no hard evidence in either direction. The main problem is getting everyone to agree on precisely what they mean by “problem solving,” and even if there is some overlap across different definitions there still remains the question: Is it even possible to teach general problem solving? A strong case can be made that while the problem-solving skills of an engineer and a computer programmer seem to be very similar, it is much harder to show the correspondence between a poet’s and an engineer’s problem-solving skills. And yet, as a formally trained mathematician, it seems intuitively obvious to me that there is a great deal of overlap between a mathematician’s, physicist’s and poet’s problem-solving skills. At the same time, I noticed that every single one of the students who dropped the course gave me not only the reason for dropping it but hastened to add, “...but it really improved my appreciation for just how difficult it is to write software.” To which I would add, “Yes! --- and especially large software projects like operating systems and even spreadsheets.” This is still true today: no matter what language you are coding in, it is extremely difficult to write clear, coherent, reliable and economical code. More next month.

Sunday, April 13, 2014

Privacy Concerns make a Comeback

From 1800 to the 1920s, the word “privacy” appeared in publications at a fairly constant but low frequency. Then the rate of citation increased somewhat between 1920 and 1960 followed by a very steep rise (except for mild dip in the early 1980s) through the year 2000.

I gleaned this information using the Google Ngram viewer at:


The way it works is explained nicely at: http://en.wikipedia.org/wiki/Google_Ngram_Viewer

Basically, the Google search engine explores its huge books database for any word or phrase you enter and creates a graph displaying the relative frequency of that word over the time period you choose. According to Wikipedia:

“The word-search database was created by Google Labs, based originally on 5.2 million books, published between 1500 and 2008, containing 500 billion words in American English, British English, French, German, Spanish, Russian, Hebrew, and Chinese. “

For example, language researchers have used Ngram to study trends in “mood” words (like exhilarated/apathetic, cheerful/depressed, etc.) and have evidence that American English has become more emotional in the last 50 years. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0059030

If you’d like to play with Ngram, you can also examine how the words “kindergarten” and “nursery school” were replaced by “child care” over the last half-century, as well as many other examples, at: https://books.google.com/ngrams/info
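Under the hood, the quantity Ngram plots is just a ratio: occurrences of your word divided by total words scanned for that year. Here is a toy sketch with made-up counts (the real viewer computes the same ratio over hundreds of billions of scanned words, with optional smoothing across years):

```python
# Hypothetical per-year counts: occurrences of "privacy" and total words scanned.
corpus = {
    1920: {"privacy": 120,   "total": 2_000_000},
    1960: {"privacy": 900,   "total": 5_000_000},
    2000: {"privacy": 9_000, "total": 10_000_000},
}

def relative_frequency(word, year):
    """Fraction of all words scanned in that year's books that are `word`."""
    counts = corpus[year]
    return counts[word] / counts["total"]

for year in sorted(corpus):
    print(year, f"{relative_frequency('privacy', year):.6f}")
```

Dividing by the yearly total is what lets the viewer compare eras fairly: far more books were published in 2000 than in 1920, so raw counts alone would make every word look like it is surging.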

So, other than the Ngram data, what evidence do I have to make the claim, “Privacy makes a Comeback”? Unfortunately, the other evidence is weak; it’s anecdotal, but since it’s based on personal experience, it’s very convincing --- to me. Based on the reactions of students at SUNY Plattsburgh from the 1990s to pretty much the present, I have observed the issue of privacy wax and wane, but the underlying trend is a growing indifference amongst our youth toward privacy. They are neither happy nor unhappy about the assault on privacy from both the government and the corpocracy; they are merely apathetic.

But, to balance youth’s apathy, I think there’s a growing concern amongst the next generation --- the Millennials. I see more and more articles and books written by them that decry the loss of privacy. A specific example is the book by Julia Angwin, “Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance”. (Now there’s a title that almost eliminates the need to read the book!). A short version can be found in the article in the Opinion Pages of the New York Times March 3, 2014 edition, “Has Privacy Become a Luxury Good?” by Angwin. (http://www.nytimes.com/2014/03/04/opinion/has-privacy-become-a-luxury-good.html?hpw&rref=opinion&_r=1)

She begins the essay with a nice hook: “Last year, I spent more than $2,200 and countless hours trying to protect my privacy.” Angwin goes on to describe how corporations and governments are invading her privacy as well as yours and mine: Google tailors its ads to the content of the text in your emails. British intelligence collected Yahoo webcam chats from millions of users not even suspected of any illegal activity --- unsurprisingly, many were sexually explicit. Facebook allows/sells marketers access to your status updates unless you take steps to change the default from “Public” to, say, “Friends”. Even seemingly innocuous news websites auction off your personal data before the page loads...the better to target their ads to you, my dear. And, if you’re still not convinced, just type “creepy or useful” into your favorite search engine.

All of this is to say that it does appear that privacy is being taken more seriously by the general public and, no surprise, there is a level of secrecy practiced by those who would exploit our privacy. What’s the difference between “secrecy” and “privacy”? The best example I’ve run across is this: “It’s no secret as to what we do when we go into a bathroom, but that doesn’t mean that we don’t want privacy.”

Sunday, March 9, 2014

Net Neutrality: New Developments

At a recent meeting of the Fellows of the Institute for Ethics and Public Life at SUNY, I was gently chided by a member who pointed out that my last column on Net Neutrality missed an important part of the issue: little users (like you and me) are at the mercy of the Internet Service Providers (ISPs) in more ways than I had discussed. For example, if an individual has a web site then, without Net Neutrality, they would be last in line to get their message out; worse, they could even get timed out and cut off during a long, slow download. But why should the little guy get the same consideration as the big boys? Because it was the aggregate of little guys who paid for the design and development of the Internet in the first place.

In the early 1990s, Senator Al Gore wrote, “How can government ensure that the nascent Internet will permit everyone to be able to compete with everyone else for the opportunity to provide any service to all willing customers? Next, how can we ensure that this new marketplace reaches the entire nation? And then how can we ensure that it fulfills the enormous promise of education, economic growth and job creation?”

While it is certainly true that Al Gore did not invent the Internet and never claimed that he did (as many detractors like to claim), he most certainly was the driving force behind its funding and eventual creation. Our present Internet evolved from the ARPAnet (funded by the Department of Defense and available only to the DOD and its many contractors). Gore envisioned that it should be made available to everyone as it was ultimately funded by us, the taxpayers.

When reading about the pros and cons of Net Neutrality, it’s useful to be aware of the following definitions:

End Users: People like you and me who log on to the Internet to work or play.

Backbone Networks: The companies, organizations and entities that operate big fiber networks that crisscross the world.

Broadband Providers: Companies that provide data services to homes, businesses and individuals, such as Verizon or Comcast.

Edge Providers: Providers of Internet services that include, well, just about every website and app maker on the planet. Google's YouTube, Amazon, and Apple's iTunes are all large edge providers. (Also called Content Providers)


And, for a quick refresher on the issue of Net Neutrality see: http://www.businessinsider.com/net-neutralityfor-dummies-and-how-it-effects-you-2014-1

In more recent developments, the FCC Chair proposed that, instead of reclassifying broadband carriers as a telecommunications service (and thus subject to FCC regulation in the same way the telephone carriers are), the agency would attempt to regulate anti-competitive behavior on a "case-by-case basis."


To some, this seems to be a dodgy attempt to avoid making the hard decisions necessary to ensure net neutrality. That resolve will be tested by the recent Netflix deal with Comcast: Netflix is a “content provider”, the content in this case being movies and TV shows; Comcast is the largest Internet Service Provider (ISP) in the US. The deal is that Netflix has paid Comcast an undisclosed sum to ensure that its content is delivered smoothly and expeditiously to its customers. To advocates, this is a flagrant violation of Net Neutrality, especially since Comcast agreed to abide by it until 2018 in its acquisition of NBC Universal, another large media content provider.

To further complicate the situation, Comcast wants to acquire Time Warner Cable. In other words, the two largest cable companies would merge into a corporation with unprecedented power over the most powerful media information network ever created. And if you’re still not overwhelmed, consider this: Google’s response is to provide very high-speed optical fiber to selected cities in what appears to be an attempt to move up the ISP food chain. If Google succeeds, Comcast will have a formidable competitor --- and more competition is a good thing for us consumers.

Sunday, February 9, 2014

Net Neutrality

You may have seen the recent cartoon on the Editorial page of the PR (01/23/14) depicting a tank in the process of demolishing a wall; the tank is labelled, “AT&T, VERIZON, COMCAST”, the wall is labelled, “NET NEUTRALITY” and the caption is, “So much for a free Internet...”. Your reaction might well have been “What the heck is Net Neutrality and what exactly is the problem?”.

Simply put, the concept of network neutrality is that all users of the Internet should be treated fairly and equally --- this includes end users like you and me as well as giant Internet Service Providers (ISPs) like AT&T, Verizon, and Comcast. Until Jan 14, 2014, it was assumed that the FCC (Federal Communications Commission) would regulate the ISPs in much the same way it regulates the phone companies (e.g., Verizon, AT&T, Sprint). However, one of these providers (Verizon) contested this regulation in 2011, suggesting instead a tiered Internet service whereby a user could pay more to get better, faster service. On the other side of the fence, “Neutrality proponents claim that telecommunications companies seek to impose a tiered service model in order to control the pipeline and thereby remove competition, create artificial scarcity, and oblige subscribers to buy their otherwise uncompetitive services.” (http://en.wikipedia.org/wiki/Net_neutrality) In fact, Comcast was accused of violating net neutrality in 2012 when it was discovered to be favoring delivery of its own video streaming service over competitors such as Netflix and Hulu. Interestingly, both sides claim that their model promotes innovation which will stimulate the economy.

This issue has been working its way through the court system, and the DC Circuit court has now ruled that the FCC “cannot subject companies that provide Internet service to the same type of regulation that the agency imposes on phone companies...because Internet service was not a telecommunications service – like telephone or telegraph – but an information service, a classification that limits the F.C.C.’s authority.” (“The Nuts and Bolts of Network Neutrality”, NYTimes.com, 01/14/2014) So, due to a fine legal distinction between a “utility” and an “information service”, the FCC’s authority to regulate certain media has apparently been hamstrung.

The basic issue, as I see it, is how to balance authority and responsibility between private and public enterprise. If we use history as a guide, we see that the development of railroad and telegraph technologies in the US was a joint venture between government and private companies. Is this still a valid economic model for telecommunications? Should the flow of information be developed and regulated like the flow of electric power? If the answer is yes, then information, whether it’s delivered over a wire or through the air, seems to be a utility and should be regulated as such.

While I can sympathize with the concept of a tiered service (I am used to paying more for better service on airlines and the like), I hope both sides can come to agree that information leads to knowledge, knowledge is power, and, like electrical power, the flow of information should be regulated as a public utility.

Fortunately, the situation is not hopeless. There seem to be three options for untangling this mess. One would be for the US government to nationalize all telecommunications services, much like France and Germany did almost 20 years ago; it would then be the responsibility of an agency like the FCC to administer and regulate such service in a manner responsible to its citizens and not to corporations. Clearly, given the current political climate, the odds of this happening are very close to nil. Second would be for the FCC to appeal to a higher court for a better ruling. Third would be for the FCC to redefine Internet services as a public utility, which would require it to be more active in their regulation. If the third alternative comes to pass, the Net Neutrality advocates will have won, and the provider corporations will be looking for new and better ways to increase their services.

Thursday, January 2, 2014

Technology and Mischief 01/12/2014

In the old days (pre-1980), if you invested in a quality camera like a single lens reflex, the only things you bought after that were film and various accessories like camera bags and additional lenses. And, unless you dropped it from a moving car, you didn’t buy another camera for the rest of your life. Today many of us own several digital cameras, lured by the astonishing progress in technical specifications: more megapixels, which generally means higher-resolution photos; more sensitive light sensors, which means clearer, crisper pictures; built-in telephoto lenses (up to 60x at this writing); shorter lag times between shots; and smaller size and weight --- not to mention ever lower prices, giving us more bang for the buck.

Also changed is the way we take our pictures. When we had to carry rolls of fairly expensive film to record our adventures, we very carefully took one, two or at most three shots of a scene in the hope that one would turn out well. After a trip to the drugstore, which sent them off to a photo lab, we waited impatiently for two weeks to get our prints and slides and negatives back before embarking on the last stage: sticking them into a photo album or carousel or shoebox to be retrieved once or twice per year at various family gatherings. Nowadays, with digital cameras that take multiple pics per second, I can take 10 to 15 snaps and be quite certain one of them will be good --- blissfully unaware of all the time I will spend later on my computer winnowing them down to the one or two best shots. After that arduous process I can upload them to an online photo service and post them there, or on a multitude of other free websites, inviting whomever I wish to view them. If I feel a bit old school, I have prints or a photobook made.

So which is better: the old or the new photographic experience? As Tevye says in “Fiddler on the Roof” regarding the question, “Why do we have traditions?”: “I’ll tell you. I don’t know.” But I do love the fact that my photo editor allows me to enhance my photos. I can crop, lighten, darken the contrast or shadows, straighten the image if it’s off kilter, retouch, remove red-eye, and apply several dozen colorizing “effects”. It also allows me to make albums and sort them by date taken, name, or size. It’s truly amazing how much time I can spend doing this. On the down side, photo editors can be used mischievously to alter reality.

Today technology is used by kids and others with childish minds to make mischief --- from hard-core cyber-bullying and phishing scams to trolling. A troll is a trouble-maker whose only aim in joining a web discussion is to destroy or disrupt the comity of the conversation. The standard modus operandi is to make a controversial statement that is sure to polarize the members --- usually something bordering on the sexist or racist. Then the troll lights up a cigar, sits back and watches, joining in only with responses that will fan the flames. Your first thought might be, “This guy needs to get a life”, but sadly, it is a way of life to trolls.

When I was a kid (in a time long ago and far away) we used technology to perform mischief too. Of course, the technology was rather primitive --- it was called a telephone. A buddy and I would call a local store and ask, “Do you have Prince Albert in a can?” (the brand name of a pipe tobacco that came in a small can). When the proprietor answered in the affirmative, we’d respond, “Well, let him out --- he doesn’t like it in there!” It was hilarious at the time. You had to be there.

All these examples are just to say that technology alone may not be the root cause of mischief; the root cause is a human flaw that most of us outgrow. But technology sure does enhance the quality and quantity of the mischief that can be done.
