Digital Subconscious

Because neural nets essentially program themselves, however, they often learn enigmatic rules that no human can fully understand. It is often very difficult to determine why a neural net made a particular decision.

When Google's AlphaGo neural net played go champion Lee Sedol in Seoul in 2016, it made a move, the now famous move 37 of the second game, that flummoxed everyone watching, even Sedol. We still can't explain it. In theory we could look under the hood and review every weight in every node of AlphaGo's artificial brain, but even a programmer would glean little from those numbers, because what drives a neural net to make a decision is encoded nowhere in particular: it is spread across the billions of diffuse connections between nodes.
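The point can be made at toy scale. Below is a minimal sketch, in plain Python, of a tiny 2-2-1 network whose parameters are set by hand (a hypothetical stand-in for learned weights) so that it computes the exclusive-or function. Even here, with only nine numbers, nothing in the raw parameters announces the rule the network implements; you would have to reverse-engineer it.

```python
import math

def sig(x):
    # logistic activation; with large weights it acts like a hard threshold
    return 1 / (1 + math.exp(-x))

# Hand-set parameters for a 2-input, 2-hidden-unit, 1-output network that
# happens to compute XOR. (A trained net would arrive at equally opaque
# numbers by gradient descent; these are chosen for determinism.)
w1 = [[20.0, 20.0],    # hidden unit 0
      [-20.0, -20.0]]  # hidden unit 1
b1 = [-10.0, 30.0]
w2 = [20.0, 20.0]
b2 = -30.0

def forward(x1, x2):
    # hidden layer, then output
    h = [sig(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(2)]
    return sig(w2[0] * h[0] + w2[1] * h[1] + b2)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", round(forward(x1, x2)))  # prints the XOR truth table
```

Staring at `w1`, `b1`, `w2`, and `b2` tells you nothing about *why* the output is what it is; the "rule" lives only in how the numbers interact. Scale that up to billions of connections and the programmer's predicament becomes clear.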

Many experts find this opacity worrying. It doesn't matter much in the game of go, but imagine a driverless car that has a fatal accident. It's simply not acceptable to say to an investigator or a judge, "We just don't understand why the car did that." What if an autonomous drone strikes a school? Or a loan-evaluation program disproportionately denies applications from minorities?