In attempting to understand where tech’s going, and how we’re educating people on the subject, I realise that no one – not even the experts – has a clue about the specifics.

But we do have an idea about its general direction. We know that AI is going to be more widely used, for example.

It’s already used by Netflix and Amazon to power their recommendation engines. It’s used in hospitals, where on some narrow tasks it can read X-rays as well as, or better than, human doctors. It’s used in legal work to sift through evidence faster than paralegals can, and it’s used to fly planes (we usually call that autopilot).

Because tech changes quickly and drastically, it often raises moral questions as well as operational ones. When a self-driving car is about to crash, should we programme it to save the passengers inside, or the people queuing at the bus stop who’ll be hit if the car keeps going?
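To see why that question is so uncomfortable, it helps to look at what an answer would have to look like once it’s actually written down. The sketch below is hypothetical Python, not anything a real car runs – the `Outcome` class, the `choose_manoeuvre` function and its tie-break rule are all invented for illustration.

```python
# A deliberately crude, hypothetical sketch of the dilemma as code.
# Nothing here comes from any real vehicle's software.

from dataclasses import dataclass

@dataclass
class Outcome:
    passengers_harmed: int   # people inside the car
    pedestrians_harmed: int  # people at the bus stop

def choose_manoeuvre(stay_course: Outcome, swerve: Outcome) -> str:
    """Pick the manoeuvre that harms fewer people overall."""
    total_stay = stay_course.passengers_harmed + stay_course.pedestrians_harmed
    total_swerve = swerve.passengers_harmed + swerve.pedestrians_harmed
    if total_stay == total_swerve:
        # Tie-break: protect the passengers. This one line is the
        # entire ethical judgement, and a person had to choose it.
        return ("stay" if stay_course.passengers_harmed
                <= swerve.passengers_harmed else "swerve")
    return "stay" if total_stay < total_swerve else "swerve"

# Example: staying harms 2 passengers, swerving harms 3 pedestrians.
print(choose_manoeuvre(Outcome(2, 0), Outcome(0, 3)))  # -> "stay"
```

Written out like that, the moral question stops being abstract: someone has to commit the tie-break rule to code before the car ever leaves the factory.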

So what’s the point of AI if it raises difficult questions and threatens our jobs?

The goal isn’t to make computers think like us so they can replace us; it’s to make them think in ways that we can’t, or would prefer not to.

When AI drives cars, it doesn’t get road rage. It won’t be dwelling on the argument it just had. It won’t be trying to remember to stop at the red light while also trying to recall the things that never made it on to the shopping list. Nor will it be wondering who Watford are playing at the weekend.

AI always works without distractions. We don’t.
