You just can’t lose… with Chris Boos. Time for an AI reality check


There aren’t too many people you can listen to today and feel all those sticky layers of hype just fall away from your brain – this guy actually knows what he’s talking about and (as we English love to put it) he just doesn’t mince his words. So, after a terrific meeting with Hans-Christian (Chris) Boos, Founder and CEO of leading AI platform vendor arago, I pinned him down to share some of his views with the HfS crowd…

Phil Fersht (Founder and CEO, HfS Research): Chris – you’ve been a terrific guy who adds so much energy and colour to the intelligent automation industry… but can you shed a little light on your story? How did you find yourself setting up the business in 1995? Was the focus on intelligent automation back then? I thought we were all going nuts about e-business!

Chris Boos (Founder and CEO, arago): Phil – I originally wanted to do AI research at a university, and then I saw how slow academic research is today with the way it is financed. I chose to do it inside a company instead; we could control the pace there. We set up arago to research general AI, and my belief has always been that general AI is all about automation. Even the quite boring artificial version of intelligence is about automation, so I guess you could say that smart automation was my goal, even then.

Most people are surprised about the research phase. But if you look at most people who are doing significant work in AI, they all plan, or have done, a roughly 20-year research phase. The one thing that is special about arago is that we financed it ourselves. We split the company down the middle: half was doing research, and the other half was doing projects to get the money. That way we not only got to do basic work on AI and make money to finance the work, but we also had a testbed for all our components in real businesses. A brilliant idea I cannot take credit for – it was my uncle, who founded arago together with me, who came up with the model. It worked out really well for us. We did pure research from 1995-2008; then we used our own toolset to start automating IT operations, slowly turning it into a product and doing some deployments until 2014; then we scaled automation of IT operations on top of more static approaches while collecting a dataset that describes all kinds of companies in all kinds of industries; and in 2017 we finally started applying AI generally, to industries and processes beyond IT.

By the way, 1995 was before the e-business boom started. I remember we did the first online banking on the web in Europe then, and the page said, “your browser should support tables”. Can you still remember those days?

Did you ever expect to be where you are today? 

Absolutely not. I am still being surprised every day. If you had asked me about feeding animals with an AI a year ago, I would have looked at you like you had just told me aliens had landed. Now we are feeding animals with an AI. This is what is so absolutely fantastic about this industry: there is a new frontier to be pushed further every day.

Fortunately, that makes up for all the crap you have to hear, because everything that has the slightest bit of math inside is called AI these days. I would like to reverse that saying: it is only called AI as long as it does not work. As soon as it works, it gets a real name, like “facial recognition”.

So you’ve been talking about some very real and honest stuff regarding machine reasoning… what’s this all about?  

In the area of AI, we have been making the same mistake since 1954. Whenever we found a new algorithm, or were finally able to actually apply an algorithm we had found a long time ago, we declared this algorithm to be the one solution to everything. This one-size-fits-all approach is not only stupid but has led to AI winters – periods in research and commerce when no one would touch AI with surgical gloves – except for the crazy ones, of course; did I mention 1995 was during such an AI winter? We are making exactly the same mistake again by declaring deep learning equivalent to AI. One algorithm will not be a solution for everything; it is a solution for a defined set of problems. This means it will fail miserably at other problems and will also have clear limitations within the scope it was originally built for. Let’s stick with machine learning for a bit. The clear limitation is data. At some point, there will not be enough data to describe “now”.

I believe there are three basic sets of algorithms to be considered when building a general AI:

  1. Machine learning.  The ability to learn to recognize patterns and associate positive actions with them. This is like evolution: everything that behaves favorably survives; everything else dies. Adding a temporal memory to this system is what started deep learning, with long short-term memory (LSTM) networks. But there are many other learning algorithms.
    In biology, we call this “instinct”, and all species have it.
  2. Natural Language Processing.  Now this is where it gets tricky, because language has so much compression. Think of how many different pictures you can imagine on the 3-byte input of “cat”. Your implicit knowledge of context and an internal argument narrow what you understand when you hear “cat” in a conversation down to a very likely correct interpretation. Machines, unfortunately, do not understand anything, and they also lack all the context. This is why NLP is still one of the hardest parts of AI. There is no way to “learn” the meaning of language through machine learning (yet), especially because the context is so volatile. This is why the hype around chatbots has led to a lot of disappointed customers. They work well if you can predict the dialog with a high degree of certainty: for example, if you offer a telephone line where people can call in sick, you know that there are only so many ways to say “I am not feeling well”, and the only result you want out of the dialog is “when will you be back” (a small sketch of this kind of narrow intent matching follows this list). But all other, more general cases are very difficult. This is why you have to change your language structure quite drastically if you want Alexa, Google Assistant et al. to do anything for you. There are a lot of very advanced algorithms in this area, which are mainly very advanced statistics to create probable context, probable synonyms, probable XYZ, and then match this to a pre-determined understanding structure. These are the least self-reliant algorithms in the AI family.
    In biology, language was the single differentiating factor that let us as a species outperform everything else on the planet. We no longer had to go through long cycles of observation and trial and error, where only the ones with the right solution to a problem survive. We could simply tell each other “if you see a tiger, run away”, and no evolutionary iteration was needed. This is a huge advantage which machines are completely missing.
  3. Machine Reasoning.  The one you were actually asking about. This was how AI started. The idea was to make a logical argument to find a solution to a given problem. The first attempt was to use decision trees to “write down the one and only answer for every situation”. This obviously does not work, because the more interesting a problem is, the more different ways there are of reaching a solution. The industry moved from decision trees to decision graphs. Then we found out that logic does not govern the world, and that ambiguity, contradictions, overlapping information, wrong information and unexpected events have a huge influence on how to really solve a problem. Machine reasoning is the name for algorithms that create a solution by outputting step-by-step execution instructions for a more complex task: choosing the best step to take out of an existing pool of options, then the next, and so on (a minimal sketch of this follows the list). The limitation of these algorithms used to be the knowledge base, because the maintenance effort of such knowledge bases grew exponentially while the benefit grew only polynomially.
    In the world of biology, this is called “imagination” or, if we want to be less philosophical, the ability to simulate a bit of the future in our heads to make the right choices about what to do in order to reach a defined goal.
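
First, the chatbot example from point 2, as a deliberately naive Python sketch. Everything here – the patterns, the function names – is invented for illustration and has nothing to do with arago’s products; the point is that intent matching only works because the dialog is this predictable.

```python
import re

# Hypothetical patterns for the one narrow intent this phone line expects.
SICK_PATTERNS = [
    r"not feeling well",
    r"\bsick\b",
    r"under the weather",
    r"can't come in",
]

def classify(utterance):
    """Map an utterance to one of the very few intents the dialog predicts."""
    text = utterance.lower()
    if any(re.search(p, text) for p in SICK_PATTERNS):
        return "calling_in_sick"
    return "unknown"  # anything outside the predicted dialog simply fails

def extract_return_date(utterance):
    """The single piece of information the whole dialog is designed to produce."""
    match = re.search(r"back (on \w+|tomorrow|next week)", utterance.lower())
    return match.group(1) if match else None

print(classify("I'm really not feeling well today"))      # calling_in_sick
print(extract_return_date("I should be back on Monday"))  # on Monday
```

Anything phrased outside these few patterns falls straight through to “unknown” – which is exactly the brittleness Chris describes for more general dialogs.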
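
And here is a minimal sketch of machine reasoning in the sense of point 3: greedily composing an execution plan by repeatedly picking the best applicable action from a pool of known options. The action pool and the scoring below are invented for illustration (with an IT-ops flavor, since that is where arago started); this is not arago’s actual engine.

```python
# Each action is (name, preconditions, effects) over a set of state facts.
ACTIONS = [
    ("restart_service", {"service_down"},              {"service_up"}),
    ("check_disk",      {"alert_raised"},              {"disk_checked"}),
    ("free_disk_space", {"disk_checked", "disk_full"}, {"disk_ok"}),
    ("close_ticket",    {"service_up"},                {"ticket_closed"}),
]

def plan(state, goal, max_steps=10):
    """Greedily pick the applicable action whose effects best advance the goal."""
    steps = []
    for _ in range(max_steps):
        if goal <= state:            # all goal facts reached
            return steps
        applicable = [a for a in ACTIONS if a[1] <= state]
        if not applicable:
            break                    # real-world ambiguity and dead ends land here
        # Score each option by how many still-missing goal facts it contributes.
        best = max(applicable, key=lambda a: len(a[2] & (goal - state)))
        steps.append(best[0])
        state = state | best[2]
    return steps

print(plan({"alert_raised", "service_down"}, {"service_up", "ticket_closed"}))
# -> ['restart_service', 'close_ticket']
```

The exponential-maintenance problem he mentions shows up as soon as the action pool has to cover a real environment rather than four toy entries.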

Looking at only one algorithm set to solve all the problems in the world seems dumb, yet if you read about AI, it seems that machine learning has become synonymous with AI. That practically guarantees a bursting bubble once the limits of data availability are reached. My prediction is early 2019.

We have set out to combine these algorithm families into a single engine with a single data pool to mitigate these problems, and this is why we started in IT automation and are expanding to more and more complex automation across all kinds of different industries.

And you’ve been quite pointed in your views on AI actually substituting for human intelligence, and on how unrealistic a “singularity” is – can you share some of your candid thoughts here with our readers?

The entire debate about the singularity does not make sense, Phil. We pretend that simply by rebuilding the electrical part of the brain we get a self-conscious, self-reliant entity – why should that happen? If you build the skeleton of a dinosaur, you don’t get a dinosaur either.

OK, to put this down in numbers. A large neural network used in deep learning has about a million nodes today. It uses up the power of half a power plant. An average human brain has 84 billion neurons and uses 20 watts. According to Moore’s law, we can achieve rebuilding this by 2019, and I am one of the guys who believes that Moore’s law will hold. Yet that is not all there is to the brain. The brain also has a chemical system, creating a literally infinite number of configurations of the brain’s 84 billion neurons – infinite because the chemical system is completely analogue. And then, for good measure, there are a lot of well-reviewed research papers arguing that the brain must also have a quantum mechanical system injecting probabilities. So there are two entire dimensions we are missing before we can really reproduce a brain-like structure.
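
As a back-of-envelope check of the gap those numbers imply – the inputs are the figures quoted above; the arithmetic itself is purely illustrative:

```python
import math

# Figures quoted above: ~1e6 nodes in a large deep-learning network versus
# ~84e9 neurons in an average human brain running on ~20 watts.
network_nodes = 1e6
brain_neurons = 84e9

ratio = brain_neurons / network_nodes   # how many times more units the brain has
doublings = math.log2(ratio)            # doublings needed to close the gap

print(f"unit-count gap: {ratio:,.0f}x")                  # 84,000x
print(f"Moore's-law doublings needed: {doublings:.1f}")  # ~16.4
# Note: this only matches the raw unit count; it says nothing about the
# chemical and quantum dimensions mentioned above.
```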

And even if we could… we don’t understand or know what consciousness and self-awareness are, do we? So how do we think we can build them? Is “by accident” really a good explanation? “Build the field and it will come” is definitely not the answer here. This is why all the talk about killer AIs and ethical machines is far too early. I am not saying there will never be a super-intelligent AI, but not in the near future.

That does not mean that current AI technologies cannot outperform us at tasks we have already mastered – tasks that we as humans already have the experience for and can thus transfer to the machine. But why would we mind? A crane outperforms my weight-lifting ability every day, and I think that is perfect. I have absolutely no desire to become a crane, do you?

What will AI truly evolve into over the next 10-15 years, based on your experience of the last two decades? Is there any real reason why change will accelerate so fast? Are we just getting caught up in our own hype?

What will happen is that automation leaves the constraints of standardization and consolidation. With AI systems based on today’s tech, we can automate tasks even if they occur only once and even if they have never been posed like this before.

I think I was a bit too abstract here. I believe that AI will make any process that we have mastered, and that is not entirely based on language, autonomous. Machines will most likely do 80% of what we are doing today, which means that our established companies get a fair opportunity to catch up with the tech giants. This is why I believe we need RPA as a transition technology: it basically puts an API on everything there is in the corporate world. On top of that, we can use AI to automate almost everything, allowing every enterprise the wiggle-room to actually evolve.

So what’s your advice to business and IT professionals today, Chris – how can we advance our careers as this intelligent automation revolution takes hold?

I think in IT we are in a unique position. What click data was for commerce, IT ops data is for the enterprise. IT ops data describes everything a company is doing and thus forms the foundation for applying the next generation of automation and autonomy.

The only thing we as IT professionals really have to do is open our minds. If we do so, we can revolutionize much of the business and not be the “laggards” who slow everything down, as we were in the e-commerce revolution. You know I am German, so I get to be blunt: I think we have to “grow a pair” and take on the risk of automating everything from IT; otherwise the business will do it for us, and then who needs IT?

And finally… if you were made the Emperor of AI for one week and you could make one change to mankind, what would it be, Chris?

Mankind? That is too big for me… It would have nothing to do with AI: I would force people to think rationally for at least 50% of the day instead of 0.5%. But let’s not go that far, or people will think I am a cynic.

Let’s say I was made king of AI in the enterprise world for one day. I would decree a stop to every POC, POV, pilot, or whatever other term you can find for trying to be half-pregnant, and force people to start doing things in production right away. There simply will not be enough speed if we keep on “trying”. As Master Yoda said, “Do or do not. There is no try.” We really need to adopt this behavior pattern.

Thanks for your time today, Chris. I’m looking forward to sharing this discussion with our community.

Posted in: Cognitive Computing, intelligent-automation

Comments

  1. Thoroughly enjoyable – the straight talk and simple explanation of the complex. Especially the last one, of what he would do if made king of AI for a week. Great interview, Phil.

  2. To each their own; for me, the essence of AI is ‘Probability’, and thus whatever Chris Boos is saying is ‘Probably’ right. However, I feel Machine Learning is the core of AI: if you break down NLP to its essence, it also is ML – algorithms learnt to recognise which sounds mean which letters or words. And what Chris calls reasoning is also a recursive application of what the machine has learnt over time.
    As far as the singularity is concerned, IT IS INEVITABLE. But then I am ‘Probably’ wrong 🙂

  3. @Shailendra – How can we replicate consciousness and self-awareness in computer code when we don’t know what these are? We can only compute what we already know / have experienced… everything about AI has to be pre-defined. Perhaps, in the long-term future, quantum mechanics may help us reconstruct the mysteries of the self-aware human brain, but there is no evidence yet of genuine progress there.

    PF

  4. Never ceases to amaze me that most businesses (and costs) are run on a set of rules, yet I hear organizations think they need AI now. If we focused automation on rules, there is probably 50 to 80% productivity to be had right now, and with technology we all understand.

    (BTW – I define "automation" here as the automation of computerisation.)

    AI won't fix the main automation problem. Complementary? Yes. And very applicable to certain industries, interactions, channels and analytics, but, ladies and gents, can we fix what's broken first… AI will be a lot easier to implement and scale if we do…

  5. Interesting sound bites from Chris. Thanks for sharing, Phil.

    Very soon "AI" will be rechristened NI – Natural Intelligence – or SNI – Super Natural Intelligence. It's just waiting for the inflection point where the developed algorithms get the speed of computing to make the split-second decisions the human mind makes!
