Whoever wins AI will own the future – or so says Andrew Ng, the Stanford professor who co-founded the Google Brain project.

Judging by the billions being invested in artificial intelligence, Big Tech would seem to agree. But are the interested parties going about it in the right way?

Take Sophia, from Hanson Robotics. This media darling, chat-show guest and United Nations Development Programme innovation champion is the latest creation of Dr David Hanson.

The founder of Hanson Robotics is very specific about his goals. He wants to create genius machines that will surpass human intelligence – achieving “singularity”, as it is known in the field.

In addition, these machines will be endowed with creativity, empathy and compassion – the human qualities that, in Hanson’s view, a machine must convincingly display before it can pass the Turing test and be considered real AI.

Hanson’s long-term aim is to design robots that can evolve to solve complex world problems such as poverty, sustainability and equality.

These are laudable aims, but there are pressing issues about AI that we must deal with first; namely, how to ensure that new technologies and their applications are developed in an ethical manner.

AI should augment the abilities of humans and benefit society as a whole. It should not harm human beings, however unintentionally.

We are still some way away from the creation of truly intelligent machines, capable of the subtlety and nuance of humans. But I do believe it will happen. This means we need to have rules in place to address the fear that AI will displace, harm or enslave humans.

Warnings from Stephen Hawking and Elon Musk should not go unheeded. We need to avoid at all costs the scenario envisaged by philosopher Nick Bostrom in his book Superintelligence, in which a machine programmed to make as many paperclips as possible ends up taking over all the world’s resources to achieve its goal.

Bostrom’s point is that innocuous goals can turn dangerous, and that while AI might be able to avoid human error or bias, it could make other mistakes, such as fixating on paperclips.

Clearly, safeguards are needed. I believe they should go beyond the three laws of robotics first articulated by science-fiction legend Isaac Asimov, namely: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey orders given by human beings except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
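To see how mechanical those laws look once written down, here is a minimal sketch – purely illustrative, with made-up fields and no claim to be how any real robot is programmed – of the three laws expressed as precedence-ordered checks:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    injures_human: bool            # First Law: would the action injure a human being?
    allows_harm_by_inaction: bool  # First Law: would inaction let a human come to harm?
    disobeys_order: bool           # Second Law: does it ignore an order from a human?
    endangers_self: bool           # Third Law: does it risk the robot's own existence?
    required_by_higher_law: bool   # override: the disobedience or risk is needed to protect a human

def allowed_by_three_laws(a: ProposedAction) -> bool:
    # First Law is absolute: no injuring humans, no standing by while they come to harm.
    if a.injures_human or a.allows_harm_by_inaction:
        return False
    # Second Law: obey human orders, except where obedience would conflict with the First Law.
    if a.disobeys_order and not a.required_by_higher_law:
        return False
    # Third Law: protect its own existence, except where that conflicts with the first two laws.
    if a.endangers_self and not a.required_by_higher_law:
        return False
    return True

# Sacrificing itself to save a person is allowed; harming a person never is.
print(allowed_by_three_laws(ProposedAction(False, False, False, True, True)))   # True
print(allowed_by_three_laws(ProposedAction(True, False, False, False, False)))  # False
```

Even in this toy form, everything hinges on how a judgement such as “injures a human” is made – and that is precisely where the simplicity breaks down.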

But it’s not as simple as Asimov makes out. Take the instance of a driverless car having to choose between crashing head-on into a bus full of children, turning left and killing a 60-year-old physics genius or turning right and killing a mother and her baby. What does it do? Somewhere within its complex algorithms, there must be a framework of rules defined by our values and our compassion as humans.

Even as we work out what these safeguards should be, we need to start educating people about what AI can do now and what it might be capable of doing in the future.

At present, AI takes many forms but is essentially the automation of what humans do. So far, most AI applications have served a single purpose – translating a document, driving a car, answering questions, keeping the books – without human input.

One shortcoming of AI applications to date, however clever they might be, is that they lack empathy. It is what gives them away. For me, true AI is when you can’t tell that you are dealing with a machine, and we’re certainly not there yet. But I think we’re getting closer.

Chatbots used by banks are already evolving. Today, they can choose from a menu of responses when dealing with a customer. In the future, they might be able to assess the bigger picture for the customer.

For example, say a customer is overdrawn and faces a penalty, but the overdraft arose because she lost her purse and was unable to pay in a cheque.

The loss of her purse would be known to the bank because she cancelled her cards. A truly intelligent and empathetic chatbot would take this into account and waive the fine. It would be able to reach this decision by drawing on richer and more varied sources of data than are accessible to it today.
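As an illustration only – the data fields, 14-day window and fee amount below are hypothetical, not any real bank’s policy – such a contextual rule might look like this:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class CustomerContext:
    overdrawn: bool
    overdraft_date: date
    cards_reported_lost_on: Optional[date]  # None if no loss was reported

def overdraft_fee(ctx: CustomerContext, standard_fee: float = 25.0) -> float:
    """Return the fee to charge, waiving it when the wider context explains the overdraft."""
    if not ctx.overdrawn:
        return 0.0
    # If the customer reported her cards lost shortly before going overdrawn,
    # treat the overdraft as a consequence of the lost purse and waive the penalty.
    if ctx.cards_reported_lost_on is not None:
        if ctx.overdraft_date - ctx.cards_reported_lost_on <= timedelta(days=14):
            return 0.0
    return standard_fee

# The purse (and cards) were reported lost a few days before the account went overdrawn.
example = CustomerContext(overdrawn=True,
                          overdraft_date=date(2024, 3, 10),
                          cards_reported_lost_on=date(2024, 3, 6))
print(overdraft_fee(example))  # 0.0 -- the fine is waived
```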

As data will be the mother lode for monetising AI, it will be vital to educate people about the value of their personal data and the importance of keeping it safe. A good rule of thumb is that if you don’t know who’s asking for it – an online quiz, for example – don’t share it.

If banks engaged in educating their customers – helping them to assess when their data was at risk and when it was safe to share – they would be providing a far more valuable service than just banking.

This is especially true of situations in which banks act as an intermediary for data – in trading platforms, for example – or when banks trade data on behalf of the customer.

So, when Ng says that “whoever wins AI will own the future”, it’s important to remember that winning AI means not just developing genius machines; it must include developing the rules that will enable AI to function according to our common values. Anything less is losing.

Dharmesh Mistry is chief digital officer at Temenos.