Sunday, August 10, 2014

Disruptive Technology

Just when I was getting sick and tired of hearing the word “disruption” associated with Internet technology, I stumbled across Jill Lepore’s June 23 New Yorker article “The Disruption Machine.” The gist of the article is a summary, analysis and criticism of Clayton M. Christensen’s book “The Innovator’s Dilemma,” which makes the seemingly paradoxical claim that “doing the right thing is the wrong thing.” I also learned that the term “innovation” was not always associated with the idea of “progress” (which is a good thing). Innovation used to be associated with novelty, which made it seem somewhat frivolous. Even the father of our country, George Washington, was reputed to have said, “Beware of innovation in politics.” So it becomes necessary to define more precisely what we mean by innovation and especially by the new buzzword: Disruptive Innovation.


From Wikipedia we find these definitions:


“Innovation is finding a better way of doing something. Innovation can be viewed as the application of better solutions that meet new requirements, unarticulated needs, or existing market needs. This is accomplished through more effective products, processes, services, technologies, or ideas that are readily available to markets, governments and society. The term innovation can be defined as something original and, as a consequence, new, that "breaks into" the market or society.


A disruptive innovation is an innovation that helps create a new market and value network, and eventually disrupts an existing market and value network (over a few years or decades), displacing an earlier technology. The term is used in business and technology literature to describe innovations that improve a product or service in ways that the market does not expect, typically first by designing for a different set of consumers in a new market and later by lowering prices in the existing market.”


So what, then, is “disruptive technology”? The term is essentially a synonym for “disruptive innovation” as described above; however, I would add that for a technology to be truly disruptive, it should not only fit that definition but also sweep through society on a tidal wave of change.


For example, air conditioning changed society by massively increasing worker productivity, which, in turn, propagated prosperity throughout all levels of society. Atlanta, Georgia was a one-horse town before A/C; even Washington, DC as late as the 1950s would dismiss all government workers when the temperature exceeded 100 degrees. (I remember lying awake, unable to sleep, one Halloween night in DC when the temperature reached that mark.) Another common example is the Personal Computer. When the PC first appeared in the late 1970s, it was purchased only by hobbyists who could build them from kits, much as the early automobiles were aimed at those who could assemble and repair them on their own. And, of course, the PC allowed the middle-class consumer to connect to the Internet. Until that time, the Internet was a government-funded project using large mainframe computers, used exclusively by scientists to share their research. While that use continues, the Internet has expanded to provide infotainment (just like TV), business transactions such as shopping, banking and investing, and social networks like Twitter, Tumblr and Facebook --- as well as a platform for planning and organizing political revolutions worldwide!


The other important feature of disruptive innovation or technology is that there must also be a “disruptee” --- the technology or enterprise that has been disrupted. For example, the PC disruptees were the industries that produced only midsize and large mainframe computer systems (IBM almost went out of business during that period).


Another example is Henry Ford’s Model T automobile, where the disruptee was not really the horse and carriage but the already existing automobiles that were too expensive for the average American. The Model T also changed the transportation market (train travel declined, for example) as well as the social fabric: families were no longer rooted to the place where they were born, suburbia was born, and all the attendant businesses sprang up --- not to mention the growth in consumption of nonrenewable fuels that now drives climate change.


More examples of the two players in the Disruptive Technology game (a game in which Wikipedia itself is a player) can be found at:

http://en.wikipedia.org/wiki/Disruptive_innovation#Practical_example_of_disruption

Sunday, July 13, 2014

The Right to be Forgotten?



By now I would hope that almost everyone knows that all US citizens have a right to privacy as described by the Fourth Amendment to the Constitution: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” This has been interpreted by the Supreme Court as a guarantee that the government will respect your privacy. Even in the Age of the Internet, privacy (described by Justice Brandeis as “the right to be let alone”) is still perceived as an important part of our concept of Liberty and Freedom; in fact, in an article by Leo Mirani

(qz.com/228121/these-are-the-things-people-want-google-to-forget-about-them/)

the single most important reason cited for wanting Google to forget things was “invasion of privacy.”

Wait, you may be thinking --- is that even possible? Can Google “forget” web links? The short answer is, “We shall see...” because, as Jeffrey Rosen writes in the Stanford Law Review, “At the end of January, the European Commissioner for Justice, Fundamental Rights, and Citizenship, Viviane Reding, announced the European Commission’s proposal to create a sweeping new privacy right—the “right to be forgotten.” The right, which has been hotly debated in Europe for the past few years, has finally been codified as part of a broad new proposed data protection regulation.” (www.stanfordlawreview.org/online/privacy-paradox/right-to-be-forgotten?em_x=22).

The implications of this new ruling are potentially stupendous, mainly because it directly affects two giants of the Internet: Google and Facebook. Rosen goes on, “Although Reding depicted the new right as a modest expansion of existing data privacy rights, in fact it represents the biggest threat to free speech on the Internet in the coming decade. The right to be forgotten could make Facebook and Google, for example, liable for up to two percent of their global income if they fail to remove photos that people post about themselves and later regret, even if the photos have been widely distributed already. Unless the right is defined more precisely when it is promulgated over the next year or so, it could precipitate a dramatic clash between European and American conceptions of the proper balance between privacy and free speech, leading to a far less open Internet.”

As Reding indicates, from a European perspective this is no big deal; the principle of le droit à l’oubli, or “the right to be forgotten,” has been part of French law since 2010 (http://en.wikipedia.org/wiki/Right_to_be_forgotten). It is commonly used to expunge the publication of a criminal’s trial and conviction after he or she has served time. The underlying assumption is that prison is a rehabilitation process, and it is cruel to continue punishing someone who has paid their debt to society and just wants to get on with life. Back here in the US, however, there is this pesky First Amendment, which not only guarantees citizens the right of free expression but is usually deemed the most important of our rights. This means that if I want to point out that my political opponent spent some time in the slammer when he was a juvenile, I am completely free to do so over the multitude of media available to me today --- because the First Amendment allows me to.

But let us suppose that Google and Facebook follow the European demands in order to stay in business within the EU --- what happens when a story or video has already gone viral? How much time, effort and treasure must Google or Facebook expend chasing down and removing these millions of links? And how much time reinstating false positives --- removed links that were actually OK --- as Google recently did with The Guardian? And what about “revenge porn,” where the request for removal comes not from the perpetrator of a crime but from the victim? But all these questions are just one small piece of the issues involving the complex relationships between privacy, free speech and anonymity. To learn more, just enter “Yelp lawsuits” or “the right to be forgotten + revenge porn” into your favorite search engine and prepare to be amazed.

Sunday, June 8, 2014

A Recipe for Disaster? Part 2


Last month I expressed my concerns with the current trend of recruiting young folks, especially women, to be trained to write programs so that they would have a useful trade with which to enter the World of Work. I mentioned that while I have nothing against teaching anybody the art and craft of programming (in fact, I believe it fosters creativity, confidence and perseverance), I do worry that these short training courses have the potential to do more harm than good. Why would I think such a thing?

Because, as the old saying goes, “A little learning is a dangerous thing.” Writing a single short program for a simple application is a creative, rewarding, and relatively straightforward process. But the creation of real-world software is no longer a simple proposition; it requires a well-oiled team of individuals writing code that works well together. While it is possible to teach team skills to novices, it is very, very difficult (nearly impossible, I think) to teach one how to write programs that are correct, efficient and reliable.

To be correct, the program must represent an accurate solution to a problem, and this is the most difficult and important part of the problem-solving process. In many cases we can end up with a solution to an entirely different problem than the one we wished to solve, or with other unintended side effects. For example, suppose the problem is to keep food from spoiling; one solution is to design and implement a refrigerator. I begin by identifying a list of requirements or specifications for the fridge, e.g., “The finished product should be capable of cooling food down from temperature X to, say, temperature X − 32 in Y minutes,” and a host of others concerning power, efficiency, size, weight, etc. But no matter how hard I try, I cannot write a complete set of specs (if I could, I would have to include unlikely ones such as: “When the user opens the door, the food should not fly out of the fridge”). The best I can hope for is that the final implementation adheres to my set of incomplete specifications.
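
To make that concrete, here is a minimal sketch in Python; the function and its numbers are hypothetical, invented only for this illustration. The written spec becomes a handful of executable checks, and the gap between “passes these checks” and “is correct” is exactly the gap described above.

```python
def minutes_to_cool(start_temp, target_temp, degrees_per_minute):
    """Spec: return how long the fridge needs to cool food from
    start_temp down to target_temp at the given cooling rate."""
    if degrees_per_minute <= 0:
        raise ValueError("cooling rate must be positive")
    return max(start_temp - target_temp, 0) / degrees_per_minute

# Executable checks drawn from the written spec. They confirm the cases
# we thought to write down, but no finite list of checks can rule out
# the "food flies out of the fridge" class of unstated requirements.
assert minutes_to_cool(70, 38, 2.0) == 16.0   # cools X to X - 32 in Y minutes
assert minutes_to_cool(38, 38, 2.0) == 0.0    # already cold enough
```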

To be efficient, a programmer must be able to break a large, complex problem down into smaller, simpler pieces so that, by solving each part, the whole problem gets solved. (I can’t eat an apple in one bite, but many small bites will do the job.) But that is not enough for a well-designed computer program; for that we must add the requirement that each of the parts be modular.
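
As a small illustration --- a hypothetical “class average” task, with all names invented for this sketch --- each piece below solves one sub-problem and can be written and tested on its own:

```python
def parse_scores(lines):
    """Turn raw text lines like '87' into numbers."""
    return [float(line) for line in lines if line.strip()]

def average(scores):
    """Compute the mean of a non-empty list of scores."""
    return sum(scores) / len(scores)

def format_report(avg):
    """Render the result for a human reader."""
    return f"Class average: {avg:.1f}"

# Solving each small piece solves the whole problem.
print(format_report(average(parse_scores(["87", "92", "78"]))))
```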

By “modular” we mean that an executing module does not influence the execution of any of the hundreds (possibly thousands) of other modules comprising the total system. This helps to ensure that if we have a group of modules working together to perform a task, we don’t have to test their interactions, because truly there will be none. (For example, you expect your car to be a modular system: when you change the oil, you don’t anticipate all four tires going flat.) There are many advantages to modular programs: different members of the programming team can work concurrently on separate modules, resulting in a finished product sooner. Also, modules are easy to test, debug and modify (without unintended side effects).
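
Continuing the hypothetical sketch, here is the same tiny task written both ways; only the second version can be tested in isolation, because its result depends on nothing but its inputs:

```python
# Non-modular: callers share a hidden global, so running one
# function can silently change what another one does.
total = 0
def add_sale(amount):
    global total
    total += amount

# Modular: the function's effect is fully determined by its inputs,
# so it can be tested alone and combined with others without surprises.
def add_sale_modular(running_total, amount):
    return running_total + amount

assert add_sale_modular(100, 25) == 125   # no hidden state involved
```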

To be reliable, a program must comprise modules that execute correctly, singly and in combination, in all of the possible environments they run in. With most software this environment is provided by you, the user. There is only a finite number of ways a user can respond, but that finite number is usually so large that it might as well be infinite. (For those with small children: how many times has your child completely frozen your computer by randomly pressing keys?)
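
One standard defense, sketched here in Python with a hypothetical prompt, is to treat every keystroke as suspect and refuse to proceed until the input makes sense:

```python
def ask_for_age():
    """Keep asking until the user types a plausible whole number."""
    while True:
        raw = input("Enter your age: ")
        try:
            age = int(raw)
        except ValueError:
            print("Please type a whole number.")   # survives random key-mashing
            continue
        if 0 <= age <= 130:
            return age
        print("That age is out of range; try again.")
```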

All of the above is meant to show how difficult it is to write software. All comprehensive Computer Science programs require one or more high-level courses in Software Engineering where these issues are addressed. My concern is that, in our haste to provide well-paying jobs for young people, we may create many more problems than we solve. And while job creation is certainly a good thing, we should keep in mind that the purpose of education is not just to make a living but to live a life.

Thursday, May 8, 2014

A Recipe for Disaster? Part 1

Of late, there has been much in the media about teaching young people, especially women, to code --- or, in less arcane language, to program or write software for computers. For example:

code.org http://www.ted.com/talks/mitch_resnick_let_s_teach_kids_to_code

www.wbez.org/.../tackling-tech-gender-gap-teaching-girls-code-1.



The primary justification for this enterprise is, unsurprisingly, economic. The logic is simple and specious: computer technology is ubiquitous, so there are lots of jobs related to computers; therefore we can help solve the current lack-of-jobs problem by scooping up our youth and training them to become programmers. I believe this approach has a strong potential for producing snafus even worse than the initial rollout of Obamacare. Now please don’t think that I believe teaching kids to program is a complete waste of time. I strongly believe that programming is a rewarding and empowering activity that combines the logical thinking of the mathematician, the creativity of the poet, the pragmatism of the engineer and the patient stubbornness of the detective --- so it is definitely worth studying. But I do worry that we’re pushing this bandwagon for the wrong reasons.


When I first came to teach Computer Science at SUNY in 1978, I had the same idea that it would be useful to teach young people how to program, but for entirely different reasons. I thought that learning to program would be an excellent technique for teaching problem-solving methods to college freshmen. I believed that programming is an empowering, creative activity that, like any good creative activity, is immensely absorbing and satisfying.


Things went swimmingly for several years, resulting in several papers that I delivered at regional, national and international conferences. But I was starting to have doubts. The question that kept nagging at me was: “Does programming develop general problem-solving skills, or is it the other way round --- do students who already have good problem-solving skills tend to be good at, and hence drawn to, programming?” I could easily see that the good programmers in my classes also had good problem-solving skills, but I wasn’t sure which was the cause and which the effect. Then, by chance, I attended a talk by Edsger W. Dijkstra at Union College and posed my nagging question to him. I, like most of my colleagues, considered Dijkstra one of the giants of Computer Science. Amongst his many accomplishments, he spearheaded the development of Structured Programming, a methodology intended to allow a programmer to produce more reliable programs. He was a great believer in the idea that one should develop a logical plan for an algorithm (the essence of a program) before sitting down and doing the coding. In any case, his answer to my question was upsetting: based on his own experience, he believed that good programmers came to programming as already-formed good problem solvers. If correct, the very foundation of my research for the past several years was fatally flawed, and I had perhaps done a great disservice to many of my students.


On further reflection, however, I realized that there was no hard evidence in either direction. The main problem is getting everyone to agree on precisely what they mean by “problem solving,” and even if there is some overlap across different definitions, there still remains the question: Is it even possible to teach general problem solving? A strong case can be made that while the problem-solving skills of an engineer and a computer programmer seem to be very similar, it is much harder to show the correspondence between a poet’s and an engineer’s problem-solving skills. And, as a formally trained mathematician, it seems intuitively obvious to me that there is a great deal of overlap between a mathematician’s, a physicist’s and a poet’s problem-solving skills. At the same time, I noticed that every single one of the students who dropped the course not only gave me the reason for dropping it but hastened to add, “...but it really improved my appreciation for just how difficult it is to write software.” To which I would add, “Yes! --- and especially large software projects like operating systems and even spreadsheets.” This is still true today: no matter what language you are coding in, it is extremely difficult to write clear, coherent, reliable and economical code. More next month.

Sunday, April 13, 2014

Privacy Concerns make a Comeback

From 1800 to the 1920s, the word “privacy” appeared in publications at a fairly constant but low frequency. The rate of citation then increased somewhat between 1920 and 1960, followed by a very steep rise (except for a mild dip in the early 1980s) through the year 2000.

I gleaned this information using the Google Ngram viewer at:

https://books.google.com/ngrams/graph?content=privacy&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cprivacy%3B%2Cc0

The way it works is explained nicely at: http://en.wikipedia.org/wiki/Google_Ngram_Viewer

Basically, the Google search engine explores its huge book database for any word or phrase you enter and creates a graph displaying the relative frequency of that word over the time period you choose. According to Wikipedia:

“The word-search database was created by Google Labs, based originally on 5.2 million books, published between 1500 and 2008, containing 500 billion words in American English, British English, French, German, Spanish, Russian, Hebrew, and Chinese.”
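
For the programming-inclined, here is a minimal Python sketch that builds a query link like the “privacy” one above. The parameter names are taken straight from that URL; Google publishes no official Ngram API, so treat the endpoint’s exact behavior as an assumption:

```python
from urllib.parse import urlencode

def ngram_url(phrase, year_start=1800, year_end=2000, smoothing=3):
    """Build a Google Ngram Viewer link for one word or phrase."""
    params = urlencode({
        "content": phrase,
        "year_start": year_start,
        "year_end": year_end,
        "corpus": 15,          # same corpus code as the "privacy" link above
        "smoothing": smoothing,
    })
    return "https://books.google.com/ngrams/graph?" + params

print(ngram_url("privacy"))   # paste the result into a browser
```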

For example, language researchers have used the Ngram Viewer to study trends in “mood” words (like exhilarated/apathetic, cheerful/depressed, etc.) and have found evidence that American English has become more emotional in the last 50 years. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0059030

If you’d like to play with Ngram, you can also examine how the words “kindergarten” and “nursery school” were replaced by “child care” over the last half-century, along with many other examples, at: https://books.google.com/ngrams/info

So, other than the Ngram data, what evidence do I have for the claim that privacy concerns are making a comeback? Unfortunately, the other evidence is weak; it’s anecdotal, but since it’s based on personal experience, it’s very convincing --- to me. Based on the reactions of students at SUNY Plattsburgh from the 1990s to pretty much the present, I have watched concern about privacy wax and wane, but the underlying trend has been a growing unconcern amongst our youth. They are neither happy nor unhappy about the assault on privacy from both the government and the corpocracy; they are merely apathetic.

But, to balance youth’s apathy, I think there’s a growing concern amongst the next generation --- the Millennials. I see more and more articles and books written by them decrying the loss of privacy. A specific example is Julia Angwin’s book, “Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance.” (Now there’s a title that almost eliminates the need to read the book!) A short version can be found in Angwin’s article “Has Privacy Become a Luxury Good?” on the Opinion Pages of the March 3, 2014 New York Times. (http://www.nytimes.com/2014/03/04/opinion/has-privacy-become-a-luxury-good.html?hpw&rref=opinion&_r=1)

She begins the essay with a nice hook: “Last year, I spent more than $2,200 and countless hours trying to protect my privacy.” Angwin goes on to describe how corporations and governments are invading her privacy as well as yours and mine: Google tailors its ads to the content of the text in your emails. British intelligence collected Yahoo video webcam chats of millions of users not even suspected of any illegal activity --- unsurprisingly, many were sexually explicit. Facebook allows/sells marketers access to your status updates unless you take steps to change the default from ‘Public’ to, say, ‘Friends’. Even seemingly innocuous news websites auction off your personal data before the page loads... the better to target their ads to you, my dear. And if you’re still not convinced, just type “creepy or useful” into your favorite search engine.

All of this is to say that privacy does appear to be taken more seriously by the general public and, no surprise, there is a level of secrecy practiced by those who would exploit it. What’s the difference between “secrecy” and “privacy”? The best example I’ve run across is this: “It’s no secret what we do when we go into a bathroom, but that doesn’t mean we don’t want privacy.”

Sunday, March 9, 2014

Net Neutrality: New Developments



At a recent meeting of the Fellows of the Institute for Ethics and Public Life at SUNY, I was gently chided by a member who pointed out that my last column, discussing Net Neutrality, missed an important part of the issue: little users (like you and me) are at the mercy of the Internet Service Providers (ISPs) in more ways than I had discussed. For example, if an individual has a web site then, without Net Neutrality, they would be last in line to get their message out and, worse, could even get timed out and cut off during a long, slow download. But why should the little guy get the same consideration as the big boys? Because it was the aggregate of little guys who paid for the design and development of the Internet in the first place.

In the early 1990s, Senator Al Gore wrote, “How can government ensure that the nascent Internet will permit everyone to be able to compete with everyone else for the opportunity to provide any service to all willing customers? Next, how can we ensure that this new marketplace reaches the entire nation? And then how can we ensure that it fulfills the enormous promise of education, economic growth and job creation?”

While it is certainly true that Al Gore did not invent the Internet and never claimed that he did (as many detractors like to claim), he most certainly was the driving force behind its funding and eventual creation. Our present Internet evolved from the ARPAnet, funded by the Department of Defense and available only to the DOD and its many contractors. Gore envisioned that it should be made available to everyone, as it was ultimately funded by us, the taxpayers.

When reading about the pros and cons of Net Neutrality, it’s useful to be aware of the following definitions:

End Users: People like you and me who log on to the Internet to work or play.

Backbone Networks: The companies, organizations and entities that operate big fiber networks that crisscross the world.

Broadband Providers: Companies that provide data services to homes, businesses and individuals, such as Verizon or Comcast.

Edge Providers: Providers of Internet services that include, well, just about every website and app maker on the planet. Google's YouTube, Amazon, and Apple's iTunes are all large edge providers. (Also called Content Providers)

http://readwrite.com/2014/01/15/net-neutrality-fcc-verizon-open-internet-order#awesm=~oxoBAJBphB88GQ

And, for a quick refresher on the issue of Net Neutrality see: http://www.businessinsider.com/net-neutralityfor-dummies-and-how-it-effects-you-2014-1




In more recent developments, the FCC Chair proposed that, instead of redefining broadband carriers as a telecommunications service (and thus bringing them under FCC regulation in the same way the telephone carriers are), the agency would attempt to regulate anti-competitive behavior on a "case-by-case basis."

http://arstechnica.com/tech-policy/2014/02/fcc-wont-appeal-verizon-ruling-will-regulate-net-on-case-by-case-basis/




To some, this seems a dodgy attempt to avoid making the hard decisions necessary to ensure net neutrality. That resolve will be tested by the recent Netflix deal with Comcast. Netflix is a “content provider,” the content in this case being movies and TV shows; Comcast is the largest Internet Service Provider (ISP) in the US. The deal is that Netflix has paid Comcast an undisclosed sum to ensure that its content is delivered smoothly and expeditiously to its customers. To advocates, this is a flagrant violation of Net Neutrality, especially since Comcast agreed to abide by net neutrality until 2018 as part of its acquisition of NBC Universal, another large media content provider.




To further complicate the situation, Comcast wants to acquire Time Warner Cable. In other words, the first- and second-largest cable companies would merge into a corporation with unprecedented power over the most powerful media information network ever created. And if you’re still not overwhelmed, consider this: Google’s response is to provide very high-speed optical fiber to selected cities, in what appears to be an attempt to move up the ISP food chain. If Google succeeds, Comcast will have a formidable competitor --- and more competition is a good thing for us consumers.

Sunday, February 9, 2014

Net Neutrality


You may have seen the recent cartoon on the editorial page of the PR (01/23/14) depicting a tank in the process of demolishing a wall; the tank is labelled “AT&T, VERIZON, COMCAST,” the wall is labelled “NET NEUTRALITY,” and the caption reads, “So much for a free Internet...” Your reaction might well have been, “What the heck is Net Neutrality, and what exactly is the problem?”

Simply put, the concept of network neutrality is that all users of the Internet should be treated fairly and equally --- this includes end users like you and me as well as giant Internet Service Providers (ISPs) like AT&T, Verizon, and Comcast. Until January 14, 2014, it was assumed that the FCC (Federal Communications Commission) would regulate the ISPs in much the same way it regulates the phone companies (e.g., Verizon, AT&T, Sprint). However, one of these providers (Verizon) contested this regulation in 2011, suggesting instead a tiered Internet service whereby a user could pay more to get better, faster Internet service. On the other side of the fence, “Neutrality proponents claim that telecommunications companies seek to impose a tiered service model in order to control the pipeline and thereby remove competition, create artificial scarcity, and oblige subscribers to buy their otherwise uncompetitive services.” (http://en.wikipedia.org/wiki/Net_neutrality). In fact, Comcast was accused of violating net neutrality in 2012 when it was discovered to be favoring delivery of its own video streaming service over competitors such as Netflix and Hulu. Interestingly, both sides claim that their model promotes innovation which will stimulate the economy.

This issue has been working its way through the court system, and now the D.C. Circuit Court has ruled that the FCC “cannot subject companies that provide Internet service to the same type of regulation that the agency imposes on phone companies...because Internet service was not a telecommunications service – like telephone or telegraph – but an information service, a classification that limits the F.C.C.’s authority.” (“The Nuts and Bolts of Network Neutrality,” NYTimes.com, 01/14/2014). So, due to a fine legal distinction between a “utility” and an “information service,” the FCC’s authority to regulate certain media has apparently been hamstrung.

The basic issue, as I see it, is how to balance authority and responsibility between private and public enterprise. If we use history as a guide, we see that the development of railroad and telegraph technologies in the US was a joint venture between government and private companies. Is this still a valid economic model for telecommunications? Should the flow of information be developed and regulated like the flow of electric power? If the answer is yes, then information, whether it’s delivered over a wire or through the air, seems to be a utility and should be regulated as such.

While I can sympathize with the concept of a tiered service (I am used to paying more for better service on airlines and the like), I hope both sides can come to agree that information leads to knowledge, knowledge is power, and, like electrical power, the flow of information should be regulated as a public utility.

Fortunately, the situation is not hopeless. There seem to be three options for untangling this mess. One would be for the US government to nationalize all telecommunications services, much as France and Germany long did with their state-run telephone systems; it would then be the responsibility of an agency like the FCC to administer and regulate such service in a manner responsible to its citizens and not to corporations. Clearly, given the current political climate, the odds of this happening are very close to nil. Second would be for the FCC to appeal to a higher court for a better ruling. Third would be for the FCC to redefine Internet services as a public utility, which would require it to be more active in its regulation. If the third alternative comes to pass, the Net Neutrality advocates will have won, and the provider corporations will be looking for new and better ways to improve their services.


