Categories: Product Management, Technology

The Future of Payments

Disclaimer: I work at Amazon but this writing does not represent Amazon in any way. Opinions written here are strictly my own.

When I was working at Citi Cards, I was under the impression that people were spending a lot of time figuring out which credit cards they should have. Were they going to get points or miles? Weren’t they going to be so excited that they could redeem their points with Amazon? Of course, working at a credit card company, I thought about this all day, and I lost sight of the fact that my customers had far better things to do with their time.

Categories: Ideas

When AI Gets Personal OR You’re Not Supposed to Talk About That!

A few months ago Microsoft released Seeing AI, a tool created by a blind product manager to help blind people. It uses computer vision to compensate for lost sight.

The most interesting piece is the person functionality, which is a fairly transparent implementation of Microsoft’s Face API. The Face API can determine a number of characteristics, including hair color, emotion, glasses, facial hair, makeup, smiling, gender, and age. Computer geeks, take a look at Face API; it’s pretty awesome. Here’s an example of the person functionality from Seeing AI using the Face API.
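To give a feel for what developers get back, here’s a minimal sketch of pulling those attributes out of a Face API-style response. The JSON shape below is an assumption modeled on the attributes listed above, not the official schema; check the Face API reference for the real field names.

```python
# Hypothetical sample response; the structure is an assumption for
# illustration, loosely modeled on the Face API's attribute list.
sample_response = [
    {
        "faceAttributes": {
            "age": 31.0,
            "gender": "female",
            "smile": 0.88,
            "glasses": "NoGlasses",
            "emotion": {"happiness": 0.92, "neutral": 0.06, "anger": 0.01},
        }
    }
]

def describe_face(face):
    """Summarize one detected face as a human-readable string."""
    attrs = face["faceAttributes"]
    # Pick the emotion with the highest confidence score.
    emotion = max(attrs["emotion"], key=attrs["emotion"].get)
    return f"{attrs['gender']}, about {attrs['age']:.0f} years old, dominant emotion: {emotion}"

for face in sample_response:
    print(describe_face(face))
    # -> female, about 31 years old, dominant emotion: happiness
```

Note that this blunt summary is exactly the kind of output the rest of this post argues against showing to end users.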

Microsoft’s Seeing AI

It’s a great party trick to show your friends how AI can figure out all this stuff. The most interesting characteristic is age. Microsoft thinks so too and created an entire website called how-old.net.

So I started bringing it out at parties. But there was a problem. People whose estimates skewed older than their real age started saying, “Hey, that’s not cool.” I started to realize that the app didn’t have a lot of tact.

It was like a little kid saying, “Mommy, that lady looks 45.”

And the woman saying back, “My Lord! Don’t you have any manners!”

This is similar to a scene in the Netflix series Atypical about Sam, a character with Autism Spectrum Disorder (formerly called Asperger’s Syndrome). Sam doesn’t read emotions very well and is often too honest, not taking other people’s feelings into account.

At one point Sam made a list of the pros and cons of Paige, his prospective girlfriend. Paige found an imprint of the list and rubbed a pencil over it to read it.

Paige with Sam’s list of pros and cons about her.

To paraphrase their conversation:

“Why would you do that? You called me bossy and said I’m always interrupting people,” said Paige.

“But I also said that you had very clean shoes and had a nice neutral smell. So there were some good things in there,” said Sam.

“Ugh. You’re just not supposed to write that stuff down. It’s rude.”

And that’s the inherent problem with the way Face API displays people’s age. Making these things too transparent is just rude.

From all this, I’ve learned three things.

  1. It’s kind of creepy how AI can take things from the real world and “know” things about you.
  2. When building a computer program, features like “age” are useful in doing things like matching or making predictions but should generally be hidden from the end user.
  3. It’s much better to use this data for a different purpose, like figuring out which piece of artwork you look like.

Update August 16, 2018: Amazon’s Face API is a lot more sensitive in its demo. It uses the word “seems” and, rather than focusing on a specific age, reports an age range.

AWS Face API Demo
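That softer phrasing is easy to implement on top of any point estimate. Here’s a minimal sketch (the function name and the fixed spread are my own choices, not from either API) of turning a precise age guess into the gentler, ranged wording:

```python
def hedged_age(estimate, spread=4):
    """Turn a point age estimate into a softer, ranged description.

    The +/- spread of 4 years is an arbitrary illustrative choice;
    real services compute the range from model confidence.
    """
    low = max(0, int(estimate) - spread)
    high = int(estimate) + spread
    return f"seems to be between {low} and {high} years old"

# A ranged guess reads far less rude than a blunt "looks 45".
print(hedged_age(45))  # -> seems to be between 41 and 49 years old
```

The range also has the nice side effect of being honest about the model’s uncertainty instead of projecting false precision.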


Categories: Ideas, Product Management

Click Here to Kill Everyone. A Security Expert’s View on the Internet of Things.

There are a lot of articles about Artificial Intelligence and what it will mean for the world. People are asking questions like Where is Technology Taking the Economy? and Where Could Machines Replace Humans? One thing that’s clear is that computers have become an integral part of our lives.

Computers used to be ancillary items that helped us get things done. For example, a GPS system was just a better map; if the GPS failed, we could always go back to a paper map to find our way home. Today, we can’t live without computers. Take driving, for instance. We don’t really drive our cars anymore: when we turn the steering wheel or press the gas pedal, we are actually sending a signal to the computer that drives the car.

Bruce Schneier, one of the world’s top security experts, just published an article about the dangers of this new environment, Click Here to Kill Everyone, subtitled “With the Internet of Things, we’re building a world-size robot. How are we going to control it?” He has also released a book on the topic.

Giant robot? What is Schneier talking about? He says:

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the “smarts” that figure out what the data means and what to do about it. This includes all the computer processors on these devices and — increasingly — in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn’t to record the temperature; it’s to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.

You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts.

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.
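Schneier’s sensor/smarts/actuator split maps directly onto code. Here’s a toy sketch (all names are hypothetical) of his smart-thermostat example as a sense-think-act loop:

```python
def read_temperature(sensor_log):
    # Sensor: collect data about the environment.
    return sensor_log[-1]

def decide(temp, setpoint=20.0, band=0.5):
    # "Smarts": figure out what the data means and what to do about it.
    if temp < setpoint - band:
        return "furnace_on"
    if temp > setpoint + band:
        return "ac_on"
    return "idle"

def actuate(command):
    # Actuator: affect the environment (stubbed out here; a real device
    # would switch a relay on the furnace or air conditioner).
    return f"actuator -> {command}"

readings = [18.2, 19.0, 19.4]
print(actuate(decide(read_temperature(readings))))  # -> actuator -> furnace_on
```

The security implication follows immediately from the structure: whoever can inject readings into the sensor log, or rewrite `decide`, controls the actuator.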

This reliance on computers changes the way we should be thinking about computer security. Security has three components: confidentiality, availability, and integrity. In the past, when people thought about security, they were most concerned about confidentiality (e.g., someone reading their email, someone stealing their identity). But today availability and integrity are the far bigger problems. Shutting down your car (availability) is a far bigger problem than someone knowing where you are all the time (confidentiality). And modifying your car (integrity) to prevent your brakes from working on the highway is the biggest problem of all.

But even cars aren’t the biggest problem. It’s all these smaller things that we’re connecting to the Internet — the Internet of Things. Last year we saw some enterprising hackers marshal together millions of DVRs and webcams to attack the core infrastructure of the internet and bring websites like Twitter, Amazon and Netflix down. Here’s the basic problem:

  • Prioritizing functionality and cost over security. While companies like Apple and Google spend hundreds of millions of dollars on security and on pushing out updates, many smaller companies making connected devices don’t care much about security. Their devices often aren’t built in a way that allows security updates. And because consumers don’t really care whether their DVR or refrigerator has good security, it’s unlikely that this will change. So now you have devices connected to the internet that are vulnerable both as victims and as co-opted attackers. Because these devices are all connected in an ecosystem, a failure of one seemingly unimportant piece can have far bigger consequences, like how an unsecured fish tank connected to the internet let hackers infiltrate a casino.
  • Connecting everything to the internet. Because all these devices are connected to the internet, you’ve got to protect against the best hackers in the world. Just look at how North Korea is trying to finance the country through ransomware. I’m not convinced that I’m going to win a hacking battle with a nation. Are you?

So how do we fix this? Schneier doesn’t have great solutions, but he has a couple:

  • Regulation. While regulation is normally anathema to computer programmers, for cybersecurity it is needed. There are a few ways to look at this. First, the internet as a whole is a utility; in order to maintain the availability of the utility and protect against catastrophe, it’s reasonable to regulate it. Second, you can view security as a public health system: to maintain the health of the internet, we need to ensure that there are a limited number of viruses on it and that we take those viruses seriously. Otherwise, these viruses can imperil the health of the entire system. Schneier’s point is that regulation is inevitable, so we should start thinking about it now.
  • Disconnection. Why are we connecting everything to the internet?! Everyone is so excited about connecting everything without thinking about the risks. How much do we lose by disconnecting a power station’s controls from the internet? It’s probably a little more expensive to have a person or two stationed directly at the plant. But if we leave the controls connected, there’s a real danger that they can be attacked by a hacker and brought down or destroyed.

In the excitement over all the possibilities that Artificial Intelligence and the Internet of Things can bring, we need to be vigilant about protecting the ecosystem. But people remain far too optimistic about the future. Just today I saw an article titled Cyber Attacks on U.S. Power Grids Can Be Deterred With Password Changes that should have been titled “US Power Grid Has Multiple Security Holes.” Oh, and taking down a power grid has already been tested in Ukraine.

Addendum: The full book has now come out. Schneier focuses on three scenarios throughout the book: the first is a cyberattack against a power grid; the second is murder by remote hacking of an internet-connected car; the third is the “click here to kill everybody” scenario, involving replication of a lethal virus by a hacked bio-printer. The first has already happened. The capability has been demonstrated for the second. The third remains to be seen.

Categories: Uncategorized

The Ethics of AI

We are becoming more and more reliant on Artificial Intelligence, mostly because it keeps getting better more quickly than anything else. Increasingly, we rely on AI systems to make important decisions, like who to hire at work or who to release from prison, even when these models may have strongly ingrained biases based on their training data.

And as self-driving cars become more of a reality, we will continue to become more reliant on machines. This brings up an interesting ethical question about self-driving cars in particular: in an accident that cannot be avoided, how does the car prioritize the life of the driver and passengers versus others? How many injuries would need to be avoided for the car to prioritize the bystanders over the driver?

Mercedes has already come up with a statement on this question: “You could sacrifice the car. You could, but then the people you’ve saved initially, you don’t know what happens to them after that in situations that are often very complex, so you save the ones you know you can save. If you know you can save at least one person, at least save that one. Save the one in the car.”

Whether or not it’s the right answer, people will want their self-driving cars to do everything possible to save their own lives.