Futurist Gerd Leonhard to keynote at HfS Cognition this September


Now that we have managed to get past that frightfully riveting conversation about doing rudimentary process automation with our invoice processing and customer collections (aka RPA 1.0), we can finally get to the heart of what new technology capability – much of which is already here – is really doing to our world.

With human brain power and computing power on a collision course to become one, the enmeshing of human behaviour and thought processes with self-learning and self-remediating cognitive systems is set to confuse, frighten and – ultimately – inspire us to change our whole approach to managing our technology investments, making data meaningful, collaborating with work colleagues and creating new business models.

This is our goal this September in White Plains, New York, where we are once again bringing together the diverse stakeholders of the operations and services industry to get past the fear and find the inspiration to drag us out of the transition phase in which we currently find ourselves.

To this end, at HfS we are thrilled to have persuaded Gerd Leonhard, CEO of The Futures Agency, to keynote our HfS Summit, “Cognition: Welcome to the Future of Services”, September 14 – 16, 2016 in White Plains, New York. So we sat down with Gerd to get his critique of the future of technology, the emerging quagmire surrounding Digital Ethics and the true potential of Artificial Intelligence… and how this will all potentially impact our jobs, our societies, our lives, and our humanity…

Phil Fersht, CEO and Chief Analyst, HfS: Good evening, Gerd. Great to have you on the HfS platform today! We’re very excited to have you join us at Cognition, our upcoming flagship event in New York this September. But before we start, could you tell me a little bit about your background and how you’ve ended up as such a visionary in the Artificial Intelligence (AI) space these days?

Gerd Leonhard, CEO of The Futures Agency: It’s a long story. I’m a futurist. But I started as a musician and producer, and then in the late ’90s I went on the Internet and did a bunch of music startups. It was an interesting time, but I was too early, ahead of my time. I think I realized in about 2001, when everybody went bankrupt, that my vision was better than my implementation. So basically I realized I was very good at seeing things a little bit ahead of time. Then I wrote a book called “The Future of Music,” which led to me being called a futurist.

Then over the years, the last 15 years, I’ve branched out into media, marketing, advertising and then ultimately, of course, technology and software. And now I run this company called The Futures Agency. There are about 25 of us, and essentially we do what we call futurizing businesses. So we help companies, organizations and governments to reinvent how they do things—what they’re going to be in five years. And, of course, a lot of that now has to do with big data, artificial intelligence, cloud computing and digital transformation.

Phil: In terms of the way you see things going in the AI space, you’ve come up with some very interesting thoughts around Digital Ethics: the idea that governments and society need to take a stronger viewpoint, even a stronger regulatory viewpoint, on the impact of technology on society, jobs and so on. Maybe you could share a bit more of your views and opinions here?

Gerd: Yeah. I think it’s pretty straightforward, Phil. First of all, we have to distinguish between intelligent assistance (IA), which is really most of what we’re seeing these days; then AI; and then AGI, which is artificial general intelligence. IA is anything like Google Maps and Siri—things that give you simple assistance by having a sort of minor brain in the cloud. You can speak to Siri, but not much will happen. You can use Google Translate, but it’s all fairly minor stuff. There is no supercomputer in the cloud doing what Watson does.

Artificial intelligence is the next step, where you actually have deep-learning computers that are not being programmed—Google’s DeepMind, for example. This will be the self-driving car that says, “OK, I can learn how Gerd is driving. I can learn the environment and then I can act accordingly and add value beyond the programming.” That’s really what we’re seeing as the big hope of artificial intelligence: to solve very large, very complex problems like social security, air traffic, environmental control, desalination.

To do that with computing power that is unheard of, like a million times what we have today. At that point we wouldn’t really understand how this artificial intelligence does things; we would just know that it does them infinitely better than we do. So that’s all happening in the next few years. We’re going to reach very quickly what Ray Kurzweil calls the Singularity, which is maybe six or seven years from now, when the first computer will have the capacity of the human brain.

At that point, it becomes entirely feasible to outsource major decisions to machines without knowing whether they are right or not, or how they decide. At that point, it’s basically imperative for us to use what I call the precautionary principle—which is to say that we should use this, but we should make sure that we can still control it in some way, or at least retain some human leverage over it. It goes all the way back to the discussion of the three laws of robotics, right? But basically at a certain point there is a very big question about technology going beyond our actual control and actual understanding. At that point it may become dangerous, not in the sense of Ex Machina, but in the sense of technology basically ruling what we do, so that as a consequence we become kind of like machines. We’re forced to comply.

I think that creates all kinds of issues. Ultimately, when I speak to businesses, everybody wants to use technology to be more efficient, to make more money and to create some margin, which is understandable. But in the end, if you have a business that’s just algorithms and just efficiency, you have no value, because it’s a commodity then. That’s important to remember.

Phil: So Gerd, looking at how the impact on the future of work and jobs dominated the conversation at our recent event in San Francisco, we came to the conclusion that a lot of the work that’s being automated today isn’t work we would create today, in any case. These are legacy-type jobs that were created maybe 10, 20, 30, even 40 years ago to fulfill tasks that should now be automated and run electronically. So the bigger issue now is: How do we create work, not necessarily save legacy jobs? Is this something that you would agree with, or do you think we need to go further than that?

Gerd: That’s a complicated question, Phil. First of all, I think we need to decouple work from income. The possibility of doing work that you want to do is extremely fulfilling, but it may not make money. For example, building a playground for your kids, staying at home with your grandmother or writing a book, or whatever. Lots of us think about work as something that makes money—and that will have to stop, because, basically, technology will allow us to work a lot less for the same money, because it becomes infinitely efficient and abundant.

Just as music and films are already abundant, other things will become abundant, including food, transportation, electricity, energy. That’s about 15 years away. So at a certain point we have to say: it’s great if we make money with this, but maybe we can make money and also do the work that we want to do, at which point it’s not really important whether technology would remove that work or not; we would do it anyway, just because we want to.

The other thing is that there is some work that can be automated that we probably shouldn’t automate. This is a very important decision for governments, for employees, for employers and for HR departments. For example, hiring and firing, in my view, should not be automated. We should use the tools, just as I use TripAdvisor to evaluate a restaurant; but I would never rely on TripAdvisor exclusively. It’s just a data point. So if I use IBM Watson for hiring or firing, I think that’s taking it to the extreme. It would probably be wrong, and also unethical in many ways.

A lot of people are always saying that technology removes bias, as an objective thing, and that’s absolutely not true. Technology will have whatever biases were used to build it. And whatever data you give the AI to learn from is going to create the same kind of bias, right? So we shouldn’t pretend that there is no bias. We should just take it as it is and treat it as a data point for a human evaluation. My view is that jobs are shifting up the food chain, the Maslow pyramid: basically from the very simple, taken-care-of stuff to the meaningful, to self-actualization, purpose, brand, storytelling, the emotional, the human thing. That’s basically what all our jobs will be in the future.

Phil: So we obviously have a big election going on in the US right now…

Gerd: Ah, really?

Phil: This is not really being discussed, and it should be, in my opinion. Looking out into the future at job and wealth creation and the impact of technology, it doesn’t seem to have hit the big political conversation yet. Do you think this will happen in the next couple of years? When do you think this will become a much bigger, more societal conversation?

Gerd: We’re having this conversation everywhere in Europe now. In Switzerland, we had a referendum on a basic income guarantee. It didn’t go through this time, but we voted on it, and it’s being discussed in lots of places. I think the US government is looking at this too; I know a special commission on artificial intelligence has been convened. I dare say that Trump probably doesn’t know what it is, but in general I think all governments need to look at this and say, “OK, technology will really solve a lot of these issues and make things abundant, make them possible, make them cheaper in the end.”

For example, right now the costs of healthcare and medicine are rising, not decreasing. But ultimately, when technology solves this, they can decrease, and that’s a very positive thing. In turn, there are other issues that governments have to look at: for example, the application of authority, dependency, addiction, which is a huge thing. We’re already at the point where we feel kind of lost if we don’t have a mobile with us. That’s kind of a childish thing, right?

But think about five years from now, when you’re literally connected to the cloud at all times, when you cannot function without being connected to the cloud, as if it were air or water. I think at that point we’ve crossed a barrier of human existence, and that is probably scary. And it’s probably not a good idea, just because it’s more efficient. That’s something to consider. I always say, we should embrace technology but we should not become technology, because when we become technology we lose the value that we have left—which is our humanity.

Phil: I agree with you there, Gerd… maybe governments need to say, “This is where we draw the line,” in terms of when technology starts to take over our lives and our jobs in a way that is detrimental to society at large. Do you think that is going to happen, when we get to the point where technology becomes a destructive tool rather than a productive tool for society? And how could this happen in a way that is feasible?

Gerd: I have to say, I’m, in principle, not necessarily for regulation. I’m, in principle, for being able to try things out, innovate in the market and push things forward, as we’ve done in the US for a long time. However, there is a potential existential risk here in inventing things that are larger than us. It’s like the guys at CERN, in Switzerland: we don’t want them experimenting with a black hole in the lab. They could basically make a crater out of Switzerland. We need to find a way of saying, “Do what you need to do, but without that risk.”

It’s the same with artificial intelligence and with genetic engineering—those are the two major things. And with materials science and nanoscience, we need to find ways to try this and to investigate it, and then we have to have control mechanisms in place. For example, if we can actually beat cancer by genetically engineering humans, which seems doable, then we can also build superhumans, right? Who is going to be in charge? Until we have solved that problem, we can’t do it. It’s just like the Internet of Things: we can’t realize the Internet of Things until we’ve figured out how all that data is used, and to whose benefit.

Because otherwise, my digital money, my health records, my driving records, my everything will construct a giant profile of me. Which I’m not afraid of as such, but I think it’s extremely dangerous when you go about it the wrong way. Until we’ve solved that, trusting the system to take care of itself is, I would say, dangerous. Look at Facebook, Google, Baidu, that sort of thing. They’ll do whatever makes money, which is fair enough in the capitalist system. But it’s still a fairly tilted relationship, Phil, right?

Phil: It is, Gerd. Finally… what can we expect to hear from you, when you speak in September? Anything up your sleeve that you’d love to share with us in advance…

Gerd: Yeah. I’ll share some tips on how we can use these amazing technologies emerging all around us in the next five years without too many of the unintended consequences. And I think it’s also really important to realize that efficiency isn’t the final destination. People are always looking at technology and saying, “Oh, great, I can fire 80% of our call center staff to make it more efficient.” That’s probably all true, but at the same time, what is the value of a business that has so few people, so little purpose and so little humanity in it? Maybe sometimes it’s better not to automate and to keep the human value, even if it’s more expensive. So we have to think about what that ultimately means for where we want to go and where we want to be in five years. I always say, it’s a combination of data, intelligence and humanity that will be the winning factors in the end.

Phil: Gerd, we cannot wait to meet you again and hear your talk, and I can tell you you’re going to have a very excitable, knowledgeable and passionate audience – see you in September!

Gerd: Same here. Sounds great, Phil.

Apply for a Seat!
