Monthly Archives: Jun 2018

You just can't lose... with Chris Boos. Time for an AI reality check

June 22, 2018 | Phil Fersht

There aren't too many people you can listen to today and feel all those sticky layers of hype just fall away from your brain, because this guy actually knows what he's talking about and (as we English love to put it) he just doesn't mince his words. So, after a terrific meeting with Hans-Christian (Chris) Boos, Founder and CEO of leading AI platform vendor arago, I pinned him down to share some of his views with the HfS crowd...

Phil Fersht (Founder and CEO, HfS Research): Chris - you've been a terrific guy who adds so much energy and colour to the intelligent automation industry... but can you shed a little light on your story?  How did you find yourself setting up the business in 1995?  Was the focus on intelligent automation back then?  I thought we were all going nuts about ebusiness!

Chris Boos (Founder and CEO, Arago):  Phil - I originally wanted to do AI research at a university, and then I saw how slow academic research is today with the way it is financed. I chose to do it inside a company instead. We could control the pace there. We set up arago to research general AI, and my belief has always been that general AI is all about automation. If it is intelligence – even the quite boring artificial version – I guess you could say that smart automation was my goal, then.

Most people are surprised about the research phase. But if you look at most people who are doing significant work in AI, they all plan, or have done, a roughly 20-year research phase. The one thing that is special about arago is that we financed it ourselves. We split the company down the middle: half was doing research and the other half was doing projects to get the money. That way we not only got to do basic work on AI and make money to finance the work, but we also had a testbed for all our components in real businesses. A brilliant idea I cannot take credit for – it was my uncle, who founded arago together with me, who came up with the model. It worked out really well for us. We did pure research from 1995-2008, then used our own toolset to start automating IT operations, slowly turning it into a product and doing some deployments until 2014, then scaled automation of IT operations on top of more static approaches while collecting a dataset that is descriptive of all kinds of companies in all kinds of industries, and in 2017 we finally started applying AI generally to industries and processes other than IT.

By the way, 1995 was before the e-business boom started. I remember we did the first online banking on the web in Europe then and the page said, “your browser should support tables”. Can you still remember these days?

Did you ever expect to be where you are today? 

Absolutely not. I am still being surprised every day. If you had asked me about feeding animals with an AI a year ago, I would have looked at you like you had just told me the aliens had landed. Now we are feeding animals with an AI. This is what is so absolutely fantastic about the industry: there is a new frontier to be pushed further every day.

Fortunately, that makes up for all the crap you have to hear, because everything that has the slightest bit of math inside is called AI these days. I would like to reverse that saying: it is only called AI as long as it does not work. As soon as it works, it gets a real name like “facial recognition”.

So you've been talking about some very real and honest stuff regarding machine reasoning... what's this all about?  

In the area of AI, we have made one mistake since 1954 now. Whenever we found a new algorithm, or were finally able to actually apply an algorithm we found a long time ago, we declared this algorithm to be the one solution to everything. This one-size-fits-all approach is not only stupid but has led to AI winters – periods in research and commerce when no one would touch AI with surgical gloves, except for the crazy ones of course; did I mention 1995 was during such an AI winter? We are making exactly the same mistake again by declaring deep learning equivalent to AI. One algorithm will not be a solution for everything; it is a solution for a defined set of problems. This means it will fail miserably at other problems and also have clear limitations in the scope it was originally built for. Let’s stick with machine learning for a bit. The clear limitation is data. At some point, there will not be enough data to describe “now”.

I believe there are three basic sets of algorithms to be considered when building a general AI:

  1. Machine learning. The ability to learn to recognize patterns and associate positive actions with them. This is like evolution: everything that behaves favorably survives, everything else dies. Adding a temporal memory to this system was what started deep learning with long short-term memory (LSTM) networks. But there are many other learning algorithms.
    In biology, we call this “instinct”, and all species have it.
  2. Natural Language Processing. Now this is where it gets tricky, because language has so much compression. Think of how many different pictures you can imagine on the 3-byte input of “cat”. Your implicit knowledge of context and an internal argument narrow what you understand when you hear “cat” in a conversation down to a very likely correct interpretation. Machines, unfortunately, do not understand anything, and they also lack all the context. This is why NLP is still one of the hardest parts of AI. There is no way to “learn” the meaning of language through machine learning (yet), especially because the context is so volatile. This is why the hype around chatbots has led to a lot of disappointed customers. They work well if you can predict the dialog with a high degree of certainty; for example, if you offer a telephone line where people can call in sick, you know there are only so many ways to say “I am not feeling well” and the only result you want out of the dialog is “when will you be back”. But all other, more general cases are very difficult. This is why you have to change your language structure quite drastically if you want Alexa, Google Assistant et al to do anything for you. There are a lot of very advanced algorithms in this area, which are mainly very advanced statistics to create probable context, probable synonyms, probable XYZ, and then match this to a pre-determined understanding structure. These are the least self-reliant algorithms in the AI family.
    In biology, language was the single differentiating factor that made us as a species outperform everything else on the planet. We no longer had to go through long cycles of observation and trial and error so that only the ones with the right solution to a problem survived. We could simply tell each other “if you see a tiger, run away” and no evolutionary iteration was needed. This is a huge advantage which machines are completely missing.
  3. Machine Reasoning. The one you were actually asking about. This was how AI started. The idea was to make a logical argument to find a solution to a given problem. The first attempt was to use decision trees to “write down the one and only answer for every situation”. This obviously does not work, because the more interesting a problem is, the more different ways there are of reaching a solution. The industry moved from decision trees to decision graphs. Then we found out that logic does not govern the world, and that ambiguity, contradictions, overlapping information, wrong information, and unexpected events have a huge influence on how to really solve a problem. The type of algorithms that create a solution by outputting a step-by-step execution instruction for a more complex task – choosing the best step to take out of an existing pool of options, then the next, and so on – are called machine reasoning [a short illustrative sketch follows this list]. The limitation of these algorithms used to be the knowledge base, because the maintenance effort of such knowledge bases grew exponentially while the benefit grew only polynomially.
    In the world of biology, this is called “imagination” or, if we want to be less philosophical, the ability to simulate a bit of the future in our heads to make the right choices of what to do in order to reach a defined goal.
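
To make "choosing the best step out of an existing pool of options" concrete, here is a minimal, hypothetical sketch of that machine-reasoning loop in Python. The actions, preconditions, and scores are invented for illustration only – they are not arago's engine or knowledge model.

```python
# Minimal, hypothetical sketch of a machine-reasoning loop: repeatedly pick the
# best applicable action from a pool of known options until the goal is reached.
# The actions and scores below are illustrative, not a real knowledge base.

ACTIONS = [
    # Each action: facts it needs, facts it adds, and a score for how good a step it is.
    {"name": "check_disk",      "needs": {"alert_raised"}, "adds": {"disk_checked"}, "score": 0.9},
    {"name": "clear_tmp",       "needs": {"disk_checked"}, "adds": {"space_freed"},  "score": 0.7},
    {"name": "restart_service", "needs": {"space_freed"},  "adds": {"service_up"},   "score": 0.8},
]

def reason(state, goal, actions, max_steps=10):
    """Assemble a step-by-step plan by greedily choosing the highest-scoring
    applicable action, applying its effects, and repeating until the goal holds."""
    plan = []
    for _ in range(max_steps):
        if goal <= state:                       # every goal fact is satisfied
            return plan
        applicable = [a for a in actions
                      if a["needs"] <= state and not (a["adds"] <= state)]
        if not applicable:                      # pool exhausted: no known next step
            break
        best = max(applicable, key=lambda a: a["score"])
        plan.append(best["name"])
        state = state | best["adds"]            # apply the chosen step's effects
    return plan if goal <= state else None

print(reason({"alert_raised"}, {"service_up"}, ACTIONS))
# -> ['check_disk', 'clear_tmp', 'restart_service']
```

The limitation Chris mentions shows up in the action pool itself: the value of this kind of reasoning depends entirely on how complete and well maintained that pool of options is.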

Looking at only one algorithm set to solve all problems in the world seems dumb, yet if you read about AI, it seems that machine learning has become synonymous with AI. This practically guarantees a bursting bubble once the limits of data availability are reached. My prediction is early 2019.

We have set out to combine these algorithms to produce a single engine with a single data pool to mitigate these problems, and this is why we started in IT automation and are expanding to more and more complex automation across all kinds of different industries.

And you've been quite pointed regarding your views on AI actually substituting for human intelligence and how unrealistic a "singularity" is - can you share some of your candid thoughts here with our readers?  

The entire debate about singularity does not make sense, Phil. We pretend that simply by rebuilding the electrical part of the brain we get a self-conscious, self-reliant entity – why should that happen? If you build the skeleton of a dinosaur, you don’t get a dinosaur either.

OK, to put this down in numbers: a large neural network used in deep learning has about a million nodes today. It uses up the power of half a power plant. An average human brain has 84 billion neurons and uses 20 watts. According to Moore’s law, we can achieve rebuilding this by 2019, and I am one of the guys who believes that Moore’s law will hold. Yet that is not all there is to the brain. The brain also has a chemical system creating a literally infinite number of configurations of its 84 billion neurons – infinite because the chemical system is completely analogue. And then, for good measure, there are a lot of well-reviewed research papers arguing that the brain must also have a quantum mechanical system injecting probabilities. So there are two entire dimensions we are missing before we can really reproduce a brain-like structure.

And even if we could… we don’t understand or know what consciousness and self-awareness are, do we? So how do we think we can build it? Is “by accident” really a good explanation? “Build the field and it will come” is definitely not the answer here. This is why all the talk about killer AIs and ethical machines is far too early. I am not saying there will never be a super-intelligent AI, but not in the near future.

That does not mean that current AI technologies cannot outperform us at tasks we have already mastered – tasks that we as humans already have the experience for, and thus tasks we can transfer to the machine. But why would we mind? A crane outperforms my weight-lifting ability every day and I think that is perfect; I have absolutely no desire to become a crane, do you?

What will AI truly evolve into over the next 10-15 years, based on your experience of the last two decades?  Is there any real reason why change will accelerate so fast?  Are we just getting caught up in our own hype?

What will happen is that automation leaves the constraints of standardization and consolidation. With AI systems based on today’s tech, we can automate tasks, even if they only occur once and even if they have never been posed like this before.

I think I was a bit too abstract here. I believe that AI will make any process that we have mastered, and that is not entirely based on language, autonomous. Machines will most likely do 80% of what we are doing today, which means that our established companies get a fair opportunity to catch up with the tech giants. This is why I believe we need RPA as a transition technology, because it basically puts an API on everything there is in the corporate world. On top of that, we can use AI to automate almost everything, allowing every enterprise the wiggle-room to actually evolve.

So what's your advice to business and IT professionals today, Chris - how can we advance our career as this intelligent automation revolution takes hold?

I think in IT we are in a unique position. What click-data was for commerce, IT ops data is for the enterprise. IT ops data describes everything a company is doing and thus forms the foundation for applying the next generation of automation and autonomy.

The only thing we as IT professionals really have to do is open our minds. If we do so, we can revolutionize much of the business and not be the “laggards” who slow everything down, as we were in the ecommerce revolution.  You know I am German, so I get to be blunt: I think we have to “grow a pair” and take on the risk of automating everything from IT, otherwise the business will do it for us – and then who needs IT?

And finally... if you were made the Emperor of AI for one week and you could make one change to mankind, what would it be, Chris?

Mankind? That is too big for me… It would have nothing to do with AI: I would force people to think rationally for at least 50% of the day instead of 0.5%, but let’s not go that far or people will think I am a cynic.

Let’s say I was made king of AI in the enterprise world for one day. I would decree a stop to every POC, POV, pilot, or whatever other term you can find for trying to be half-pregnant, and force people to start doing things in production right away. There simply will not be enough speed if we keep on “trying”. As Master Yoda said, “Do or do not, there is no try”. We really need to adopt this behavior pattern.

Thanks for your time today, Chris.  Am looking forward to sharing this discussion with our community.

Posted in: Cognitive Computing, Intelligent Automation


Accenture, IBM, Cognizant, Infosys, Wipro and TCS lead the first Digital OneOffice Blueprint

June 10, 2018 | Phil Fersht, Melissa O'Brien, Anirudh Pillala, Saurabh Gupta

Digital is all about an organization's ability to respond to the needs of its customers as those needs happen - or even be smart enough to anticipate those needs before they happen. This is all enabled by interactive technologies that create touchless interfaces with customers.  Smart analytics and AI enable organizations to anticipate these needs based on the ability to recognize patterns and inferences over time, but nothing can really substitute for human intelligence to bring customers, suppliers and employees closer together, unimpeded by frustrating silos and legacy processes. 

Remember, every broken process chain, or poorly converged dataset, slows down an organization's ability to do business in real-time and stay ahead of its market.  Traditional barriers between front, middle and back offices hinder the true ability of companies to operate in this real-time, responsive and anticipatory digital fashion, which is why we coined the term "OneOffice", where the unification of digital business models, intelligent automation, analytics and creative talent is happening before our very eyes.

The HfS Digital OneOffice Framework (see below) describes how organizations must integrate their digital customer interfaces with their operations in order to fulfill and anticipate their customers' needs. It is the organizational end-state to survive and succeed in a world where digitized processes dictate how responsive, agile, cost-effective, predictive and intelligent firms have to be to stay competitive.  

To this end, we have delved deep into all four dimensions of the Digital OneOffice and conducted deep analyst discussions to aggregate service provider performance at delivering the sum of the Digital OneOffice parts:  

  1. Digitally driven front office
  2. Digital underbelly
  3. Intelligent digital support functions
  4. Predictive digital insights

HfS Premium subscribers can click here to access their full copy of the 2018 Blueprint Report: Digital OneOffice Services


So how did the Winner's Circle service providers fare?

Accenture

Strengths

  • Well-rounded portfolio across OneOffice: Accenture has the best performance overall across the OneOffice portfolio, and a breadth of industry expertise to complement it. Accenture placed in the Winner's Circle for each of the Blueprint studies used to compile this OneOffice assessment.
  • Strong marketing operations capabilities to support integrated digital OneOffice offerings.  Accenture has 16,000 business-focused staff dedicated to delivering digital marketing assignments - a considerable asset that goes well beyond the firm's IT delivery.
  • Strong intelligent automation capabilities: Acquisition of GenFour and exciting partnerships, backed by significant investments, with the likes of Automation Anywhere, Blue Prism, and IPSoft.
  • Winning with thought leadership: Accenture is well-known as a thought leader across many of the change agents as well as within individual industries. 
  • C-Suite relationships beyond IT.  Digital business and intelligent automation decisions are largely being driven by both IT and business C-Suite executives in the Global 2000.  Accenture has the combination of strategic relationships outside of IT, in addition to the managed services execution. 
  • Leveraging creative assets for CX and UX design: Accenture has developed an industry-leading focus on becoming a customer experience expert, as evidenced by its 30+ design agency assets – the broadest portfolio of digital design assets in the services industry (click here for a full list of digital M&A in services).

Challenges

  • Size can work in its disfavor: Its size and success have given Accenture a reputation as a premium, high-cost, and less responsive organization. For smaller companies in particular, this perception alone can steer buyers toward more niche, specialized agencies and the attention, flexibility, and experience they receive from a smaller provider.
  • Finding the right culture balance: Accenture is well known for its results-driven, traditional consultancy culture, which will need to be balanced out or effectively blended with its more right-brain-focused acquisitions in order to retain creative talent and remain generally effective.
  • Proving to the industry it can deliver the end-to-end Digital OneOffice portfolio: There is no doubt that Accenture can pick up strategic work and execute for clients, but demonstrating to the industry that it can deliver strategic design integrated with complex operational delivery - at scale - is still in its infancy.  Many of its competitors will fight hard for execution work where Accenture is delivering the high-end design and consulting. It needs to demonstrate that the "one-stop OneOffice shop" is where it wins.

IBM

Strengths

  • Strong intelligent OneOffice offering: Market-leading capabilities to drive the OneOffice underbelly (automation, security, cloudification) and neural networks (AI, smart analytics, blockchain, and IoT). Impressive development of a credible global automation capability and several notable early wins.
  • Portfolio breadth: End-to-end and scaled IT and business process services across front, middle, and back-office.
  • Horizon 4 investments: Very strong investments and IP in horizon 4 (and beyond) technologies that will shape the future (e.g., Quantum Computing).
  • Design Thinking: Has made some considerable investments in recent years, but needs to align more aggressively with the OneOffice approach.
  • Watson: The analytics/cognitive powerhouse has a significant role to play as a cognitive virtual agent and an analytics resource with huge scalability, and it is a long-term investment area for firms with deep interests in their cognitive capabilities.

Challenges

  • Size can be a disadvantage: IBM is a large and complex organization, which makes it hard to seamlessly deliver all that it has to offer.
  • Translating tech to business outcomes: IBM is often perceived as a technology powerhouse, but one lacking the business translation and context to successfully apply emerging technologies.
  • Agility: Lacks the nimbleness and flexibility of smaller players.
  • Focus on cognitive may impede its ability to compete for design-focused end-to-end deals:  IBM has substantial credibility to drive analytics-driven, cognitive/automation projects, but its lesser focus (over the last couple of years) on true digital design may see it lose out to firms such as Accenture and Cognizant, where digital is firmly established at their core.

Cognizant

Read More »

Posted in: Digital Transformation, Digital OneOffice


To keep receiving HfS updates, make sure you register now!

June 10, 2018 | Phil Fersht

Still enjoying life now that GDPR's cleaned up your inbox, but realizing HfS is the one you just cannot live without?

Let's be honest, you probably do need to keep up-to-date with the finest change-agent research on RPA, blockchain, AI, and much more, right? Then you really must register here to receive HfS' content, or update your email subscription to keep receiving us.

Posted in: Digital Transformation, Digital OneOffice, Intelligent Automation


And time for a real Infosys Saliloquy...

June 07, 2018 | Phil Fersht

Salil Parekh, recently appointed CEO and Managing Director of Infosys, took some time out of his busy schedule during his client partner conference to catch up with me to talk about his vision for all things Infosys and the future of services…

Phil Fersht, CEO and Chief Analyst, HFS Research: Welcome to your first HfS interview Salil! Maybe you could take us a little bit back to your early career. When did you get the appetite to lead one of the largest IT services firms in the world? You know, was this something you always wanted to do? Was this planned, or have you always been an opportunist?

Salil Parekh, CEO, Infosys: Thank you, Phil, this was quite an unplanned scenario for me. So, when I finished with engineering and a Master’s in Computer Science, I was working with a consulting firm for years. Then we got acquired by a consulting and tech company, so I’d basically been in the same company for 25 years. And then this opportunity showed up a few months ago. It’s a tremendous privilege to have this opportunity. It’s one of those things you dream about in your career, as you sort of think, ‘Maybe it’s possible,’ but when it happened, at least for me, it was completely unplanned. So I’m delighted to be here. I wish I could plan such things, but I can’t [laughter].

Phil: So, how would you compare this new Infy experience with Capgemini? You know, both are global services powerhouses, one with a Parisian epicentre, the other Bangalorian – so, what have been your observations?

Salil: Well, I think, Cap’s a fantastic company. I think I would focus much more on the strengths

Read More »

Posted in: Buyers' Sourcing Best Practices, Digital Transformation


Finally, the industry has credible RPA product benchmarks from 359 superusers

June 01, 2018 | Phil Fersht

As I am sure most of you noticed, HfS quietly released the most comprehensive customer satisfaction benchmarking of the 10 leading RPA solutions, authored by Saurabh Gupta, myself, and Maria Terekhova.  We covered 359 super users of RPA products (enterprises, advisors, and service providers) across 40+ customer experience dimensions, grouped into the following six key dimensions: 

  1. Features and functionality
  2. Integration and support
  3. Security and compliance
  4. Flexibility and scalability
  5. Embedding intelligence
  6. Achieving business outcomes

As an example, here is how dimension 6, "Business Outcomes" came out looking across the products:

So why did we undertake this research?

Our industry is plagued by many consultants with limited depth in RPA, who have no access to the product-level data that supports the tough decisions facing enterprises. In addition, most analysts deliver 2x2 matrices, which offer very limited insight or value (and all look remarkably similar). It’s time to dispel myths and provide enterprises with unbiased, credible, and highly statistically significant data. The HfS RPA customer experience benchmarks are designed to help enterprises with RPA product selection as they formulate their intelligent automation roadmaps.  

It's more than a report... it's an online RPA decision-support tool

In addition to the report, HfS is also launching an online RPA decision-support tool for enterprises to enable client-specific due diligence on RPA providers. This tool will allow HfS clients to customize the decision criteria and associated weights from the available 40+ customer experience dimensions. It will provide clients with a customized report detailing the top three RPA products they should consider, based on the rich insights HfS collected as part of the RPA study. HfS analysts are also supporting RPA clients through collaborative ThinkTank sessions – half-day workshops designed to problem-solve and validate strategies. These ThinkTanks go beyond the data: HfS analysts share HfS IP, perspectives, and experiences on RPA tool selection, best practices, and common pitfalls to avoid.
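
For readers curious what a customizable, weighted selection of this kind looks like mechanically, here is a minimal sketch. The products, dimensions, weights, and scores are invented placeholders – they are not HfS benchmark data and this is not the tool's actual logic.

```python
# Hypothetical illustration of weighted decision support: rank products by the
# weighted average of the dimensions a client cares about. All numbers are made up.

SCORES = {
    "Product A": {"features": 8.1, "security": 7.4, "scalability": 6.9},
    "Product B": {"features": 7.2, "security": 8.8, "scalability": 7.5},
    "Product C": {"features": 6.5, "security": 7.9, "scalability": 8.6},
    "Product D": {"features": 7.8, "security": 6.7, "scalability": 7.1},
}

def top_three(scores, weights):
    """Return the three products with the highest weighted-average score."""
    total = sum(weights.values())
    ranked = sorted(
        scores.items(),
        key=lambda kv: sum(kv[1][dim] * w for dim, w in weights.items()) / total,
        reverse=True,
    )
    return [name for name, _ in ranked[:3]]

# A client that weights security most heavily, then scalability, then features:
print(top_three(SCORES, {"features": 0.2, "security": 0.5, "scalability": 0.3}))
```

The actual tool draws on 40+ dimensions and the 359-respondent dataset; the sketch is only meant to show the mechanics of weighting client priorities and ranking the field.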

So take time to delve into the realities of RPA – some of the findings may just surprise you.

The industry is still struggling to solve challenges around process, change, talent, training, infrastructure, security, and governance. Our mission at HfS is to dispel this confusion and uncover the truth about successful RPA deployment. It's time to separate the hype and propaganda from reality - and here is the reality!

Premium HfS subscribers can access the HfS Benchmarking Report: Detailed Assessment of the 10 Leading RPA Products here

Posted in: Robotic Process Automation
