Man-Computer Symbiosis

Earlier this year I was working on our online banking platform and kept thinking about the question, “Will we need people in the finance function in the future or will it all be done by computers?”

I’ve come to the conclusion that people will be around for a long time. Humans and computers can do a lot more together than they can alone. J. C. R. Licklider (one of the founding fathers of the internet) discussed this concept a long time ago in a paper called Man-Computer Symbiosis. Essentially, machines do the grunt work, allowing humans to focus on things that are more important. Today humans work alongside computers almost constantly. Think about driving to dinner using the computerized maps and GPS on your phone. Or making a call on that phone (another computer). Or even driving the car itself, which is stuffed with tiny computers to help with steering and to measure your tire pressure.

I found a wonderful example of Man-Computer Symbiosis from Garry Kasparov — one of the best chess players ever. He gave a lecture on how humans and computers can partner together when playing chess. I’ll summarize the key points below, or you can read the great piece Kasparov wrote in the New York Review of Books or watch a video of his lecture.

  • The End of Human/Computer Chess? In 1997 the IBM computer Deep Blue beat the world chess champion Garry Kasparov. This was the first time that the best computer in the world beat the best human in the world. Most of the world considered this the end of human/computer chess. Computers would keep improving each year far faster than people could — leaving human players in the dust.
  • But A New Type of Competition Emerged: The website Playchess.com held a “Freestyle” competition in 2005. People could compete in teams and use computers. Traditionally the use of computers by human players would be considered cheating. There was substantial prize money on offer, which enticed many of the world’s greatest grandmasters, as well as the purpose-built chess supercomputer “Hydra,” to enter.
  • A Surprise Winner: As it turns out, grandmasters with laptops could easily beat Hydra and the other supercomputers. But the overall winner was a pair of amateur players with 3 laptops. These were neither the best players, nor the best machines, but they had the best process. As Kasparov writes, “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”

Another example is the company Palantir — a software startup that helps “good guys” (e.g., governments, banks) catch “bad guys” (e.g., terrorists, fraudsters). Most people attack this problem from the perspective of “How can we get computers to find the bad guys?” Palantir takes the man-computer symbiosis point of view, providing a tool that makes the good guys much better at their jobs.

Considering how pervasive computers are in the very fabric of our lives, thinking through the model of Man-Computer Symbiosis is critical both to building the best machines and to deploying and training people most effectively.

The Best Technology Articles Ever Written

I’ve always enjoyed first-person accounts of the beginning of the computer age. What was it like to be there? How did people view new technologies before they became part of our everyday lives? I’ve put together a list of some of my favorite magazine articles that capture that feeling. My previous blog post on The First Computer Interface captures that sentiment, and here are five more. Most of the articles are on Kevin Kelly’s list of Best Magazine Articles Ever (with the exception of Inside The Deal That Made Bill Gates $350,000,000). Here are five articles about the beginning of…

  • Silicon Valley (The Tinkerings of Robert Noyce by Tom Wolfe, Esquire, December 1983) Robert Noyce founded two of the most important startups in Silicon Valley — Intel and its predecessor Fairchild Semiconductor. Tom Wolfe (yes, that Tom Wolfe) wrote about Noyce exporting the Midwestern Congregationalist ethic to create the modern culture of Silicon Valley. Noyce believed in a strict meritocracy. Wolfe writes, “Noyce’s idea was that every employee should feel that he could go as far and as fast in this industry as his talent would take him…. When they first moved into the building, Noyce worked at an old, scratched, secondhand metal desk. As the company expanded, Noyce kept the same desk, and new stenographers, just hired, were given desks that were not only newer but bigger and better than his.” At the same time that Noyce was founding Silicon Valley, another set of small-town Midwesterners was sending men into space. After the success of the Apollo 11 mission, NASA’s administrator, Tom Paine, happened to remark in conversation: “This was the triumph of the squares.” This may have been the first reference to geeks conquering the earth (and space).
  • Hacking (Secrets of the Little Blue Box by Ron Rosenbaum, Esquire, October 1971) The original hackers were called “phone phreaks.” These were kids who figured out a weakness in the AT&T telephone system that they could exploit. By playing a 2600-hertz tone into the mouthpiece, they could trick the phone company into giving them free calls (there’s a short sketch of that tone in code after this list). The most famous of the phone phreaks was John Draper (aka Captain Crunch), who discovered that a whistle given away in the children’s cereal gave off the magic tone. He also taught Steve Jobs and Steve Wozniak how to phone phreak. The phone phreaks exemplified the original hacker ethic — to explore a giant system to see how it worked. Of course, like modern hackers, some got a little carried away by the exploration. By the end of the article Rosenbaum writes a bit about how many of the phone phreaks were getting into computer hacking — which was quite a feat in 1971. There was also a wonderful documentary on the history of hacking, from Captain Crunch to Steve Wozniak to Kevin Mitnick, that does a great “where are they now” of hacking.
  • Video Games (Spacewar by Stewart Brand, Rolling Stone, November 7, 1972) Stewart Brand wrote a fantastic piece on Spacewar — the world’s first video game. Spacewar was written before almost anyone had thought about putting graphics on a computer. Its hardware didn’t even have a multiply or divide function. Brand talks about the computer geeks at Stanford and MIT who were writing the first computer programs meant to be used by other people (as opposed to programs that solved a specific numeric problem). One of the most entertaining program names was a word processing system called “Expensive Typewriter.” At the time, the ARPAnet had only 20 computers, but people were starting to understand that if it took hold, it would transform the news and recording industries. As a side note, there is computer code at the end of the article — probably the only time code was ever published in Rolling Stone magazine.
  • Microsoft (Inside The Deal That Made Bill Gates $350,000,000, Bro Uttal, Fortune, July 21, 1986) You don’t hear much about Bill Gates these days — a man who seems focused on his privacy. The Guardian published an interview with Gates this summer where the most interesting tidbit was that his children liked to tease him by singing the song Billionaire by Bruno Mars. But Microsoft was a very different company in 1986, when a 30-year-old Bill Gates invited Fortune to spend five months with him while the company went through its IPO. This is one of the few journalistic tales of an IPO ever written. The editor’s note reads, “I doubt that a story like this has been published before or is likely to be done again.” It’s amazing to see an early Microsoft where Bill Gates used part of the $1.6 million cash he made on the offering to pay off a $150,000 mortgage. He also pushed to keep the company’s valuation at the offering below $500MM, a level he felt was uncomfortably high. But the most interesting insight that Uttal has into the young Gates is that he was “something of a ladies’ man and a fiendishly fast driver who has racked up speeding tickets even in the sluggish Mercedes diesel he bought to restrain himself.”
  • Blogging: (You’ve Got Blog, Rebecca Mead, The New Yorker, November 13, 2000) When I first read this article in 2000, it introduced me to many things “blog,” including the word “blog” itself and “blogger,” as well as some of the original bloggers: Evhead, Megnut and Kottke.org. Kottke.org is still one of my favorite blogs after a decade. Like many startups, Blogger was a side project that was written over a weekend. Pyra (its parent company) was supposed to be making project management software. It’s interesting to see how early bloggers were the mavericks of modern social networking (though some ideas, like putting themselves on webcams 24/7, have thankfully gone away). Blogs made it easier for “regular” people to post — and social networking makes it even easier. Facebook in many ways is just the extension of that — allowing everyone to have their own webpage.
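
For the curious, the phreaking trick is easy to illustrate. Below is a minimal sketch in Python that generates the 2600 Hz tone and writes it to a WAV file using only the standard library; the filename and duration are my own choices for illustration, and playing it into a modern phone does nothing, since in-band signaling is long gone.

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100   # samples per second
    FREQUENCY = 2600      # Hz; the in-band control tone AT&T's switches listened for
    DURATION = 2.0        # seconds of tone
    AMPLITUDE = 16000     # well under the 16-bit limit of 32767

    with wave.open("blue_box_2600.wav", "wb") as wav:
        wav.setnchannels(1)            # mono
        wav.setsampwidth(2)            # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        frames = b"".join(
            struct.pack("<h", int(AMPLITUDE *
                math.sin(2 * math.pi * FREQUENCY * t / SAMPLE_RATE)))
            for t in range(int(SAMPLE_RATE * DURATION))
        )
        wav.writeframes(frames)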

As an added bonus, it’s worth reading the book The Nudist on the Late Shift by Po Bronson. Po gives a wonderful history of what it was like to be part of the Silicon Valley tech boom of the late ’90s. Po’s book was so compelling that it pulled many newcomers to the Valley. He felt slightly bad about this after the bust and started apologizing.

Why the iPad Beat Out the Chromebook

In 2010 we saw the release of the iPad along with the announcement of the Chromebook. I clearly remember my original thoughts on both. I thought the Chromebook was genius. In fact, I’d practically built one myself the previous year. My wife had insisted that her computer was too slow even though she had a pretty fast machine that wasn’t even two years old. So after trying a number of solutions, I settled on bringing out a laptop from 2003 and loading nothing on it other than Google Chrome. It was blazingly fast at browsing the web. I thought that many other people would love to buy an optimized version of this machine (my grandparents, for instance). The Chromebook would boot up immediately and have everything needed for an optimal web experience. For the iPad I had almost the exact opposite reaction. I remember listening to an Engadget podcast that asked, “Who really wants a giant iPhone?” and I heartily agreed. Case closed.

But how did things turn out? The iPad turned out to be a transformative device — single-handedly creating the category of the mass-market tablet. Apple sold over 15 million first-generation iPads and held 96% market share until Q4 of 2010. What I hadn’t realized at the time was that companies had been trying to make a great tablet computer for years, but none had succeeded. An interesting side effect of Apple creating the tablet market is that there is now little need for the Chromebook. Why would anyone buy a PC just to browse the web when an iPad does that so spectacularly? I suspect that Chromebooks might still have a role in businesses — especially when you can lease one for $20/month. An optimized Chromebook would go well with Google Apps if your company were totally committed to the platform.

But the iPad has allowed others to transform the product landscape. One product category that comes to mind is online news readers. RSS readers are a great technology but have a number of failings. They feel more like email clients full of unread messages than like a newspaper. But look at the iPad’s best take on the newsreader: Flipboard. There are some really great talks online by Evan Doll, one of Flipboard’s founders, about what makes Flipboard a great news reader. You can find them on iTunes U in the lectures Designing for the iPad (which was given before Flipboard and the iPad itself were released) and Designing Flipboard. Evan talks about some key things that make Flipboard great:

  • Creating something beautiful that combines design and editorial (like a great magazine)
  • Preventing information overload (an issue of Time Magazine doesn’t overwhelm and scare you like your Facebook News Feed might)
  • Leveraging the personal nature of social media to create a personalized magazine

After spending time with Flipboard, you realize why it is a fundamentally different (and better) way of consuming online news.

Lessons From the First Computer Interface (E-Mail)

Errol Morris, the famous documentary director of The Thin Blue Line and other films, wrote a great piece in the New York Times called Did My Brother Invent E-Mail With Tom Van Vleck? (Parts 1 | 2 | 3 | 4 | 5). As it turns out, Morris’s brother, Noel Morris, worked at MIT on CTSS, which was the predecessor of Multics, which was the predecessor of Unix, the ancestor of the systems that run much of the internet as well as the Mac OS. Noel was also the person who (along with Tom Van Vleck) wrote the first email program on CTSS.

Morris writes an homage to his brother that looks at some of the very early history of human-computer interface design. In fact, CTSS was arguably the first human-computer interface to really exist. These were typewriters jury-rigged to a computer to allow interactive input. Before that, programmers had to write programs on punch cards, which wasn’t much of an interface at all. Fernando Corbató, one of the founders of time-sharing computing systems, describes how frustrating computers were at the time:

FERNANDO CORBATÓ: Back in the early ‘60s, computers were getting bigger. And were expensive. So people resorted to a scheme called batch processing. It was like taking your clothes to the laundromat. You’d take your job in, and leave it in the input bins. The staff people would prerecord it onto these magnetic tapes. The magnetic tapes would be run by the computer. And then, the output would be printed. This cycle would take at best, several hours, or at worst, 24 hours. And it was maddening, because when you’re working on a complicated program, you can make a trivial slip-up — you left out a comma or something — and the program would crash. It was maddening. People are not perfect. You would try very hard to be careful, but you didn’t always make it. You’d design a program. You’d program it. And then you’d have to debug it and get it to work right. A process that could take, literally, a week, weeks, months — 

But visionaries like J. C. R. Licklider realized that a computer could be more than a processing device: it could be an extension of a person’s abilities. His paper “Man-Computer Symbiosis” contains one of the first descriptions of the interdependence that humans and computers would eventually have:

The fig tree is pollinated only by the insect Blastophaga grossorum. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree; together, they constitute not only a viable but a productive and thriving partnership…

Man-computer symbiosis is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses… The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.

In this post, I’d like to point out how similar the challenges in creating the first time-sharing machines and email are to many of the product management challenges that people still have today.

1. Showing is more powerful than telling:

FERNANDO CORBATÓ: “So that was mostly to convince the skeptics that it was not an impossible task, and also, to get people to get a feel for interactive computing. It was amazing to me, and it is still amazing, that people could not imagine what the psychological difference would be to have an interactive terminal. You can talk about it on a blackboard until you are blue in the face, and people would say, ‘Oh, yes, but why do you need that?’ You know, we used to try to think of all these analogies, like describing it in terms of the difference between mailing a letter to your mother and getting [her] on the telephone. To this day I can still remember people only realizing when they saw a real demo, say, ‘Hey, it talks back. Wow! You just type that and you got an answer.’”

The article does a very good job of showing vs. telling by including an email simulator that provides an interactive demonstration of how the original email program worked on CTSS. It’s much more arcane than you would imagine — even to the point of using typewriters. Try hitting the backspace button when you’re typing a message and see what happens.

2. Give an early version to your users because you never know how they might use it 

The strongest impacts of an emergent technology are always unanticipated. You can’t know what people are going to do until they get their hands on it and start using it on a daily basis, using it to make a buck and using it for criminal purpose and all the different things that people do.
— William Gibson, interviewed in The Paris Review, Art of Fiction #211

The original time-sharing machines were created to make programming and debugging much easier. But to the engineers’ surprise, people wanted to share data with each other on the machine. In many ways this was the first computer-mediated social network.

TOM VAN VLECK: The idea of time-sharing was to make one big computer look like a lot of different little computers that were completely unconnected to each other. But it turned out that what people really liked about time-sharing was the ability to share data. And so one person would type in a program and then he’d want to give that disk file to someone else. And this was a surprise to the initial CTSS developers who didn’t realize that was going to happen. It’s one of the things that led us to build a new operating system after CTSS — Multics — which was able to do that better. When we wanted to send mail the idea was that you would type a message into a program running on your account and then mail would switch to your addressee’s account and deposit the message there. Only a privileged command that was very carefully written to not do anything bad could do that. And so we had to become trusted enough to be able to write that thing.
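
The mail mechanism Van Vleck describes, a trusted program depositing a message into another user’s space, is easy to sketch. Here is a minimal illustration in Python; the spool directory, user names, and function name are all hypothetical. The point is simply that the deposit operation crosses a protection boundary, which is why the real command had to be privileged and carefully written.

    from pathlib import Path

    SPOOL = Path("/tmp/ctss_demo")   # hypothetical stand-in for per-user disk space

    def deposit_mail(sender: str, recipient: str, text: str) -> None:
        # The privileged step: append to a mailbox the sender does not own.
        # An ordinary program could not touch another user's files at all,
        # so this routine has to be trusted to do nothing but append.
        mailbox = SPOOL / recipient / "MAILBOX"
        mailbox.parent.mkdir(parents=True, exist_ok=True)
        with mailbox.open("a") as f:
            f.write(f"FROM {sender}:\n{text}\n\n")

    deposit_mail("NOEL", "TOM", "THE NEW MAIL COMMAND SEEMS TO WORK")
    print((SPOOL / "TOM" / "MAILBOX").read_text())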

3. Incumbents often miss the boat in a big way

IBM missed the boat on time-sharing, though it eventually recovered.

Marvin Minsky, one of the early members of Project MAC and director of its AI group, provides an account of an early meeting about time-sharing at IBM. IBM was committed to batch processing; it was part of their business model.

MARVIN MINSKY: “In fact, we went to visit IBM about using a computer with multiple terminals. And the research director at IBM thought that was a really bad idea. We explained the idea, which is that each time somebody presses a key on a terminal it would interrupt the program that the computer was running and switch over to the program for this particular person. And if you had 10 people typing on these terminals at five or 10 characters a second, that would mean the poor computer was being interrupted 100 times per second to switch programs. And this research director said, ‘Well why would you want to do that?’ We would say, ‘Well it takes six months to develop a program because you run a batch and then it doesn’t work. And you get the results back and you see it stopped at instruction 94. And you figure out why. And then you punch a new deck of cards and put it in and the next day you try again. Whereas with time-sharing you could correct it — you could change this instruction right now and try it again. And so in one day you could do 50 of these instead of 100 days.’ And he said, ‘Well that’s terrible. Why don’t people just think more carefully and write the program so they’re not full of bugs?’”
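
Minsky’s arithmetic is easy to check with a toy simulation. The sketch below, in Python with invented event generation, counts how often a time-sharing system would switch programs if every keystroke from ten users interrupted the machine:

    import random

    USERS = 10
    CHARS_PER_SEC = 10    # per-user typing rate, the high end of Minsky's figures

    # One interrupt per keystroke, at random instants within a single second.
    interrupts = sorted(
        (random.uniform(0, 1.0), user)
        for user in range(USERS)
        for _ in range(CHARS_PER_SEC)
    )

    switches = 0
    running = None        # which user's program currently has the machine
    for _, user in interrupts:
        if user != running:   # a keystroke from someone else forces a switch
            running = user
            switches += 1

    print(f"{len(interrupts)} interrupts and {switches} program switches in one second")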

A far bigger loser was the Post Office:

TOM VAN VLECK: Well, I remember vaguely discussing it with people and worrying about what the U.S. Post Office would think of [e-mail] and whether they would tell us not to do it, or tell us that they had to be involved in it. 

ERROL MORRIS: Well, secretly, you were trying to put the post office out of business. 

TOM VAN VLECK: We didn’t realize that at the time, but we were afraid that they would want us to destroy a first class stamp every time we sent a mail message. 

ERROL MORRIS: Really! There would be Noel Morris and Tom Van Vleck stamps.

TOM VAN VLECK: We didn’t want to ask them because we were afraid they would say, “No, of course not.” Or, “We have a monopoly on that.” Which they did. In those days if you sent a box by UPS and you put a letter in the box, you were supposed to destroy a first class stamp. 

ERROL MORRIS: Is that true? 

TOM VAN VLECK: Oh, yes. The U.S. post office had a monopoly on sending mail. So, we didn’t ask until finally some years later, one of the professors at MIT ran into somebody from the Post Office advanced development organization, or whatever it was, at a conference and said, “Hey, we have this thing. Are you concerned with that, are you interested in it?” And he said, “Oh no, forget it, we’re not interested in that.” And we said, “Great, thanks. That’s what we were hoping to hear.” We didn’t ask again.

Digital Innovation – Lessons from the Webby Awards

It’s been a while since I took a broad look at what’s happening in the digital world. Conveniently enough, the Webby Awards were a wonderful conglomeration of many of the great innovations happening on the web and on mobile devices. Here are some of the big trends:

  • Interactive Storytelling: Many companies are merging digital and traditional narrative storytelling. Touching Stories allows you to affect a movie in a number of ways. For example, shaking the iPad makes the room shake – and the people in the movie fall down. Another great interactive video is A Hunter Shoots a Bear, in which a hunter is about to shoot a bear but chickens out, and you get to substitute something else for him to do. It’s really well done and does a great job of aligning with the sponsor, Tipp-Ex, which produces correction tape. On a slightly different note, the band Sour does some great videos on how people interact with social media. Sour’s big video last year sparked a very similar Pepsi Refresh commercial. This year’s video is even better. The first person to copy it well in the US market will be heralded as a social marketing genius.
  • Leveraging Multiple Devices: One of the coolest things we’re starting to see is how applications can coordinate across multiple devices. Apple’s AirPlay is a simple example: you can take music or video from your iPhone and throw it onto your Apple TV. Collabacam links multiple iPhones together to direct a multi-camera movie. It syncs all of the different raw footage and allows the director to piece the movie together in real time. Another application is Remote Palette, in which the iPad is your canvas and you choose your colors from a palette on your iPhone.
  • Mixing Virtual and Real World: One of the most interesting trends is combining the real world with the virtual world. In Toronto, M&M’s ran a “digital treasure hunt” where they secretly hid 30 large red M&M men while the Google Street View cameras were filming. Three of them ended up in the final map, and customers were challenged to find them inside Google Street View. In another digital/reality mash-up, Yahoo set up digital bus stops in San Francisco that allowed riders to challenge other bus stops in games of skill. And for those of you who like limited-edition sneakers, Airwalk sold a small set of limited-edition shoes at “invisible pop-up stores.” To buy a pair you had to download an app and go to a specific location to make the purchase.
  • Reality Tagging: Neer is an application that lets consumers tag their real-life behavior the way they tag their digital world. It can automatically notify family members when you leave work and even remind you that you’re out of milk when you pass the grocery store. I’ve wanted something like this for a long time. It’s not available on my phone yet, and I’m a bit skeptical about how well it will work. The problem is that GPS uses an enormous amount of battery power, so figuring out when to turn the radio on and off is a big challenge (see the sketch after this list). Another really cool use of reality tagging that I’ve heard about (but haven’t figured out how to do) is through Foursquare. The app allows you to tag locations that were reviewed in the New York Times so you can be reminded about them when you are nearby.
  • Nike: For years Nike has dominated the mobile and social space. Nike’s original Nike+ virtual running club was among the first innovative uses of social media, and they haven’t stopped yet. Nike has an interesting take on augmented reality this year. Almost every version of augmented reality I’ve seen involves adding a graphic element to a real-world scene. But with Nike+ and Nike Boom they do augmented reality with sound. You can link these apps to your Facebook account and play your music through them while you exercise. If somebody likes or comments on your post, you’ll hear cheering during your run. In Europe they’ve done quite a bit more. In Amsterdam Nike let runners draw running tracks, making what looks like graffiti. In London they even created a game that involves getting a team together to run to different locations to win prizes. Not to mention renting the side of a giant skyscraper and posting people’s social media messages on it during the World Cup.
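
A note on the battery problem raised in the Reality Tagging item above: the standard trick is duty cycling, keeping the power-hungry GPS radio off and using cheap signals (cell-tower location, the accelerometer) to decide when a precise fix is worth the power. Here is a minimal sketch of that decision logic in Python; the function name and every threshold are invented for illustration.

    def should_enable_gps(meters_to_nearest_geofence: float,
                          is_moving: bool,
                          battery_percent: float) -> bool:
        # meters_to_nearest_geofence comes from a cheap coarse source such as
        # cell-tower triangulation; is_moving comes from the accelerometer.
        if battery_percent < 10:
            return False      # preserve the phone rather than the reminder
        if not is_moving:
            return False      # stationary: the coarse fix is good enough
        return meters_to_nearest_geofence < 500   # close: pay for a precise fix

    # Walking near the grocery store with half a charge: wake the GPS.
    print(should_enable_gps(300, True, 55))    # True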

Future of TV

2010 was the year that TV officially married the Internet. Actually, this was the year that the Internet proposed and TV ran off into the hills. This wasn’t the first year that the Internet and TV had been dating. There have been internet-enabled TVs (IPTVs) for many years. In fact, many of the internet programmers of the late ’90s were actually refugees from the failed interactive TV industry of the early ’90s.

What made 2010 different was that companies like Boxee and Google developed set-top boxes that made it super simple to interact with your TV. While tech geeks could always connect their TVs to their computers, these devices made it easy for non-techies to watch internet video in the living room. The underlying technology didn’t change, but a much better user interface gave a large portion of the population an easy way to view IP TV. This change scared the networks.

The networks right now aren’t really sure what to do with broadcast TV on the internet. They are still experimenting, almost co-creating this new format with their technology-forward consumers. Some are trying new features, like offering the viewer a choice of one of three commercials during a break. Experimenting was fine when the only people watching TV online were geeky early adopters who had a penchant for small-screen viewing, or those who really enjoyed hooking up an HDMI cable from their computer to their TV. However, now that people can easily browse the web from their TVs, the networks feel like they’re being rushed into a medium that they aren’t comfortable with yet. We’ve already started to see the major networks blocking Google TV. Technologically it doesn’t really make a difference whether you’re watching from a PC or from an embedded PC (Google TV) inside your television. But from a business perspective it makes a huge difference. It means that the entire audience for NBC might stop viewing it through their cable operator and start watching it through the web.

Many of the technology geeks don’t seem to get it, though. I was listening to the Engadget podcast, and Josh Topolsky, the editor-in-chief of Engadget, said, “Why doesn’t Google TV just pretend to be a regular Windows-based PC to get around the blockers?” This is a technology solution that would be very easy to implement. The problem is that broadcast TV is a huge priority for the networks, while the internet is still an important but futuristic sideshow in terms of revenue. So rather than let Google TV in, the networks would sooner shut out all PC viewing.
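
Topolsky’s suggestion amounts to User-Agent spoofing, and he’s right that it is trivial. Here is a minimal sketch in Python, with example.com standing in for a network’s video site and a typical 2010-era desktop UA string:

    import urllib.request

    # Spoof a desktop browser's User-Agent so the site can't tell a Google TV
    # from a Windows PC.
    req = urllib.request.Request(
        "https://example.com/",    # stand-in URL for a network's video player page
        headers={"User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/534.10 "
                               "(KHTML, like Gecko) Chrome/8.0.552.224 Safari/534.10"},
    )
    print(urllib.request.urlopen(req).status)   # the server sees a "Windows PC"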

David Pogue of the New York Times says that the whole advertising system isn’t ready for this change:

The reason they don’t, of course, has to do with ads; in the old model, advertisers pay to have their ads shown at a certain time of day, in certain geographical areas, and so on. The networks and Hulu show different, shorter, punchier ads when you’re watching the shows online. Showing them on your TV would violate their advertiser agreements.

In theory, “smart” television should be much more targeted and effective than traditional “dumb” television. Networks would move from advertising based on a show to advertising based on a specific audience. For example, instead of creating ads that appeal to people who watch The Office, you could target single males 25-34. You could even make custom ads that focus on your different customer demographics (e.g., car advertisements featuring a single man, a single woman, or a family). While in theory that makes a lot of sense, most advertisers don’t really have customized advertisements yet — or even know how they might create them. Each car company has only a few different TV advertisements — partly because it costs a lot of money to make a TV advertisement and partly because you can only spin your brand in so many ways.
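
The shift is easier to see in code. Here is a minimal sketch in Python of show-based versus audience-based ad selection; the segments, ad names, and matching rule are all invented for illustration:

    # Dumb-TV model: the creative is keyed to the show.
    ads_by_show = {"The Office": "generic_car_ad"}

    # Smart-TV model: creatives are keyed to audience segments,
    # so the viewer, not the show, selects the ad.
    ads_by_segment = {
        ("male", "25-34", "single"): "sports_coupe_ad",
        ("female", "25-34", "single"): "city_hatchback_ad",
        ("any", "35-54", "family"): "minivan_ad",
    }

    def pick_ad(show: str, gender: str, age_band: str, household: str) -> str:
        for (g, a, h), ad in ads_by_segment.items():
            if g in (gender, "any") and a == age_band and h == household:
                return ad
        return ads_by_show.get(show, "generic_car_ad")  # fall back to the dumb model

    print(pick_ad("The Office", "male", "25-34", "single"))   # sports_coupe_ad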

So where will TVs go next year? In my view we will start seeing every TV internet-enabled. Even if it’s just to watch Netflix and YouTube, that’s a lot more attractive than 3D TVs.

Rob’s Future Timeline of Television:

  • Phase 1 (~2011-2012): All TVs are connected to the internet. Much as the first WiFi-connected Blu-ray players of 2009 were followed by a slew of me-toos the following year, we can all look forward to internet-connected TVs everywhere. Google TV is nice and all, but until the TV networks get on board, most people will be watching a lot of YouTube and Netflix anyway. While there was a lot of talk at the Google TV launch about creating a new “platform,” people mainly just want to watch video on their TV. As for the platform, no one wants to check their email on their TV, and the success of concepts like new types of games remains to be seen.
  • Phase 2 (~2012-2013): At this point the networks figure out how to finally move from the dumb TV model to the new smart TV model — focusing on targeted ads for specific audiences. They will allow Google TV (and its progeny) to stream any and all content. Advertisers will be more effective and everyone will get along swimmingly. The only problem is that it took so long — this is what should have happened all the way back in 2010.
  • Phase 3 (~2015): Once you can watch all of your TV online, the game starts to change dramatically. We will enter an era of disaggregation where distribution is separated from content — similar to the way electric companies work today. One company will provide the “pipes” to your home while others will offer you various pricing models. Today you can see early versions of this, like buying your content à la carte (Apple TV) or as a bundle of older movies and TV shows (Netflix streaming). More importantly, you could buy your entire cable service from anyone. You could buy a bundle of channels, shows and movies all together. For example, you could buy a special dinosaur package that had premium Discovery Channel content, interactive games and even museum tickets. Another option would be to buy local programming from where you spent your childhood up in Oregon. And someone else might offer a “DVR in the sky” that provides every possible show on demand.

Why Privacy is About to Change

The world of data privacy is about to change. Currently most companies feel free to treat your personal data as an asset to be leveraged. As you have no doubt realized, many companies sell your personal information to other parties so they can cross-sell their products. For example, my friend Marc once entered his dog’s name when answering an online promotion, only to see the dog start receiving a lot of related mail over the next few months.

This problem is getting much worse. With the rise of social networking and people’s dependence on the Internet, much more of our private information is now available online. For example, banks often use “private” information to verify your identity when you call customer service. But information like your mother’s maiden name is now easily accessible via Facebook.

My prediction is that two things are about to happen. First, people are going to become much more concerned about their privacy as they continue to put more and more information online. Second, some company with lots of private data (like Facebook) is going to play a bit too fast and loose with privacy, causing a public catastrophe. As the importance of privacy increases and companies fail to safeguard it, we’re looking at a major change in public policy on privacy. Likely this will mean that consumers will own all of their data and companies will need explicit permission to share it with others.

When I talk to people about this, I often get the response, “This is technology; it’s no place for government policy.” But once a technology becomes entrenched in our everyday lives, that is exactly when it starts getting regulated. Remember that 100 years ago electricity and telephones were the top technologies of their day, and now they are two of the most regulated industries on earth.

As an example, I’d like to talk about another new technology that totally changed the world. It was introduced in Seattle in the early 1960s at the Seattle Artificial Kidney Center. This was one of the first dialysis centers in the world, made possible by advances in technology that allowed a permanent shunt to be placed into the body. This let people have regular treatments in which blood is moved outside the body and cleansed by a machine. The machines were greatly oversubscribed due to their lifesaving nature and extremely limited availability.

The head of the center, Belding H. Scribner, knew that deciding who should get treatment was incredibly serious. He created the Admissions and Policy Committee to decide who deserved treatment the most. These decisions were based on characteristics other than medical fitness — the patients had already been screened by a panel of doctors. The committee was a cross-section of society composed of seven laypeople: a lawyer, a minister, a housewife, a state government official, a banker, a labor leader, and a surgeon who served as a “doctor-citizen.” The group considered the prospective patient’s age, sex, marital status, net worth, income, emotional stability, nature of occupation, extent of education, past performance and future potential. Essentially, they needed to determine which of these people was “worth” the most.

While Scribner’s solution was a good one, it was shocking when it reached the national stage. In November 1962, Life magazine ran an article called “They Decide Who Lives, Who Dies.” While the article started as a study of this wonderful new life-saving technique, it quickly became a study of what the author referred to as the Life and Death Committee.

This article sparked a national conversation and led to the creation and popularization of bioethics. The inventors of dialysis were amazed that public discussion focused on the decision of who got the treatment rather than on the machines’ ability to transform what was once a death sentence into a chronic condition.

Today, when life-saving decisions must be made under limited availability (e.g., for transplant organs), a person’s worth is no longer considered. Doctors use a number of factors such as age and health to winnow down the list. Once patients are on the list, organs are distributed based on the severity of the condition, the time on the waiting list and the geographic distance between the donor center and the hospital.

It’s tempting to think that bioethics is a much greater social issue than personal privacy. But it’s not. Bioethics has just had more time to mature and enter the social consciousness. In fact, I was once in a business school class where we were presented with the Seattle Artificial Kidney problem of deciding who should live. The case study was used at both the beginning and end of the course — essentially to show how much we’d learned along the way. However, it wasn’t a bioethics class but a Decision Sciences class!

We were given the following problem: “Five people were dying of kidney disease and we only had the ability to save one of the five.” We were given short bios of each person, e.g., a 50-year-old doctor with three children who is working to cure cancer. We were to rank-order which of the people we should save. While the exercise was very interesting and really showed how to rank-order on a number of criteria (see the sketch below), no one brought up any of the ethical issues. The teacher even seemed unaware of them. Even today, with five decades of bioethics behind us, whole classes of students can ignore the social issues when presented with a technical problem to solve.
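
For what it’s worth, the mechanics we were taught are simple, which is exactly why they are so seductive. Here is a minimal sketch of a weighted-criteria ranking in Python, with invented weights and scores; notice that the arithmetic never asks whether we should be ranking people at all.

    # Invented criteria, weights, and 1-10 scores, purely for illustration.
    WEIGHTS = {"age": 0.3, "dependents": 0.4, "future_potential": 0.3}

    patients = {
        "50-year-old doctor, 3 kids, cancer researcher":
            {"age": 5, "dependents": 8, "future_potential": 9},
        "30-year-old teacher, no kids":
            {"age": 8, "dependents": 2, "future_potential": 6},
    }

    def weighted_score(scores: dict) -> float:
        # A plain weighted sum: each criterion score times its weight.
        return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

    # Rank-order the candidates, highest score first.
    for name, scores in sorted(patients.items(),
                               key=lambda kv: -weighted_score(kv[1])):
        print(f"{weighted_score(scores):.2f}  {name}")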

In short, technology can often go unhindered while it is being developed; however, once it becomes enmeshed in the social fabric, decisions are no longer made on technical merits but on how they affect society as a whole. What was once a technical issue becomes a social one. Or, to quote Spider-Man’s Uncle Ben: “With great power comes great responsibility.”

The Death of the iPod

Since its introduction, people have been prophesying the death of the iPod. They saw the iPod as just another Apple creation that would end up like the Mac. When the Macintosh was introduced, it was by far the most innovative product on the market and sold very well. It had a graphical display and mouse that were far easier to use than the IBM machines available at the time. However, Apple never partnered with other companies, preferring to build the hardware and operating system itself. Eventually a consortium of companies succeeded in building a “best of breed” product that exceeded the abilities of the Mac at a lower price. IBM (then Compaq and then Dell) built the hardware, Intel built many of the chips, and Microsoft built the operating system.

It’s easy to believe that Apple will always behave like it did in the Macintosh days and will always end up with the same results. John J. Sviokla wrote an interesting piece in Fast Company called In Praise of Ecosystems that summarized this argument well. On a personal level I think of it this way: if I buy a lot of music from Apple today, will I be able to play it on my portable audio device in five years? If history is a guide, Apple will make it very hard for me to do that. Therefore, I would rather not buy Apple-protected music today. But I may be in the minority. While I think it might not be the best idea to trust Apple to let me play my music in the future (even though in theory I could burn it to CD or create an MP3), Apple has already sold 500 million songs.

I predict that one of the following scenarios will take place:

A better ecosystem will evolve. In this case, as stated above, Apple will lose its lead and get pushed out of the market by the other companies.

Portable media devices will eventually converge. I feel that this is almost a guarantee. At some point people are going to get sick of carrying around their iPods and their PalmPilots and their cell phones and demand one device. Also, the cost of each of these devices (or, to be more accurate, each of these “capabilities”) will drop far enough to make such a device very affordable.

While Apple has been creating brand loyalty for the iPod, it is unclear how long this will last in the face of decreasing prices. Once your telephone can play gigabytes of music, would you really still want to have an iPod? You might think that Apple’s brand loyalty would keep people linked to their iPods; however, I predict that the future of the iPod will look very much like the current situation of TiVo. When TiVo was created, many people fell in love with it and with the TiVo brand. It allowed users to seamlessly “timeshift” their television viewing in a way that was far easier than traditional VCRs. Unfortunately for TiVo, this technology has become embedded in the many digital video recorders (DVRs) that can be rented from a cable company for a mere $10 a month, destroying much of TiVo’s business.

Apple has to be thinking about how convergence will affect it in the future. Is it thinking of making a cell phone or PDA? If it wants to stay in the portable electronics business, it probably should.

Apple will continue to leverage its brand with continued innovation. This seems to be the path that Apple is on. The company appears to stay just ahead of the curve as music players continue to evolve. After the initial iPod, Apple introduced the mini, which turned out to be an incredible hit. Later still it introduced the shuffle, which quickly grabbed an impressive market share.

I am still not sure how the brand will affect Apple’s future. When I was deciding what personal audio player to buy, I realized how strong Apple’s brand power is. A friend of mine asked me, “Aren’t there players that are better?” At this point, I don’t know if there are players that are any better, but there are certainly players that are a better value. However, everyone who has these “value” players invariably looks like someone who is too cheap to buy a real iPod. Essentially, the iPod has become a status symbol. I was listening to the TWiT podcast, and they mentioned that Apple’s branding campaign even extends to the earphones. No one knows what kind of player is in your pocket, but everyone knows if you are wearing iPod headphones. However, this branding becomes more difficult once people see portable music players as commodity devices.

However, Apple may have already set the standard. Just search for iPod accessories on Amazon.com and you start to understand what I mean. There are many different ways to interface with an iPod. Essentially, it has become a platform to play music in your house, in your car, or even to record your voice. The simple fact that there are so many accessories available for the iPod strengthens its monopoly power.

In the future, we will probably see Apple leverage its iPod monopoly to attack other markets. It is not likely that people will have iPods a decade from now. However, at this point, it looks like the iPod may dominate the market until standalone audio players are subsumed into some other technology (like cell phones). Steve Jobs may already understand this and assume that the iPod will go away and the company will make most of its money from the iTunes Music Store. This makes some sense because the iTunes Music Store represents actual capital that people have invested. With 500 million songs downloaded at roughly a dollar apiece, people have invested about $500 million in Apple-protected music.

Therefore, Apple’s future strategy might look like this. Apple realizes that the iPod is doing exceptionally well right now and has an enormous share of the portable player market. At this point, Apple needs to figure out how to leverage that monopoly to attack other markets. Does it want to force every hardware manufacturer to use the iTunes Music Store? Is it using the iTunes Music Store as a first-mover advantage in digital music?

Apple needs to figure out how it can make money off music in the future. I see three possibilities: hardware sales, licensing technology, or selling online music. I think hardware sales will be a hard business for Apple. At this point, not licensing its technology seems to have been good for Apple. Many people have bought the iPod, and those who want to buy music end up buying it from the iTunes Music Store. The only advantage to licensing its encoding technology would be to make it a standard (which, given the number of users, it has become de facto anyway). In the future, though, online music is where Apple will probably make most of its money. That means Apple wants as many people as possible to be able to buy from the iTunes Music Store. If it wants to make sure of that, why isn’t it licensing its technology as soon as possible?