I plugged my iPhone into my new (fuel-emission friendly) VW this week and – for the first time – my car was connected to my digital life. Siri (finally) came alive and started sending voice-to-text messages to my contacts, my favorite Spotify soundtrack arranged itself in all its glory on the vehicle dashboard, and I didn't have to worry about tuning radio stations, pairing devices that barely talked to each other, getting stuck with some horrible proprietary technology my previous car had forced on me, or those awful attempts at being "appy" from the cable TV providers that look nice but take months of frustration to figure out.
My car was finally seamlessly connected with the personal apps that run my life, and my suicidal urge to text and drive has been cured by Siri finally doing it for me! While it's been pretty cool to program the air-con from a mobile app or have coffee capsules automatically replenished… being able to take your digital life into your moving vehicle is what IoT is all about. It's high time to get past the buzz about IoT being bigger than IT itself – it's really about sensors, data and, most importantly, what we can do with that data, and how we can create digital experiences beyond our traditional mobile and laptop screens.
So, without further ado, let’s take a look at the 2017 landscape for IoT service providers and have a chat with report co-author and manufacturing-engineering analyst guru himself, Pareekh Jain, about the emerging landscape for IoT services…
Phil Fersht, Chief Analyst and CEO, HfS: Pareekh, how do you see the IoT market evolving and what are the key IoT trends you have been observing?
Pareekh Jain, Research Vice President, HfS: Phil, the current state of IoT revolves around sensors, data collection, and its use in sub-process or process optimization, but there is not enough visible thought or action from IoT service providers on exploiting the potential of that data for the business reimagination of the Digital OneOffice™. Take the example of Amazon Go – the concept store where there will be no checkout queues (seriously). Shoppers can pick… and just go. The combination of IoT with artificial intelligence and machine vision is what makes Amazon Go possible. This is just one of the business reimagination possibilities of IoT, where these true digital experiences come alive, and we're finding this kind of conversation depressingly absent in our discussions with some of the service providers.
Having said that, we do see real progress with the foundations of IoT over the last couple of years and are observing five key trends in our IoT research.
1) IoT is for real, but limited in scale and scope at present. We found many examples of PoCs and actual customer engagements, but the customer engagements are small and limited in scope to a couple of business or geographical units. Examples of organization-wide IoT strategy and implementation are rare.
2) IoT uptake is pervasive and use cases are cropping up across all industry sectors. The highest number of IoT examples we have seen are in manufacturing or Industrial IoT, smart cities, and connected cars.
3) Efficiency and cost optimization are the major drivers of IoT projects at present. This is probably to chase the low-hanging fruit and develop business cases for organization-wide IoT implementations.
4) IoT is too industrial, and the role of the end consumer in IoT is under-appreciated. The majority of current IoT projects are in B2B areas and smart cities, and the advantages of IoT for the end consumer are too often a mere afterthought.
5) There is a lack of discussion on the role of algorithms in IoT. The majority of discussion is focused on the use of sensors, connectivity, data collection, data storage, and dashboards. People think that analytics is a known capability and that, once they have IoT data, they can analyze it in a similar way to their existing big data analytics. But IoT data is different, namely in the formats of data (a picture, video, temperature record, etc.) and the frequency or volume of data (for example, a continuous stream of temperature readings every second). Some insights from IoT data can be derived by leveraging existing algorithms, but the broader IoT value will only be realized when enterprise clients and their service providers create new IoT-specific algorithms (see the simple sketch below).
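To make that last point concrete, here is a minimal Python sketch (ours, not from the Blueprint) of treating a per-second temperature stream as a sliding window rather than a one-off batch query; the window size and threshold are illustrative assumptions.

```python
from collections import deque
from statistics import mean, pstdev

WINDOW = 300        # keep the last 5 minutes of per-second readings (assumed)
THRESHOLD = 3.0     # flag readings more than 3 standard deviations from the window mean (assumed)

window = deque(maxlen=WINDOW)

def ingest(reading_c: float) -> bool:
    """Add one temperature reading (in Celsius) and return True if it looks anomalous."""
    anomalous = False
    if len(window) >= 30:                          # wait for a minimally useful history
        mu, sigma = mean(window), pstdev(window)
        if sigma > 0 and abs(reading_c - mu) > THRESHOLD * sigma:
            anomalous = True
    window.append(reading_c)
    return anomalous

# Example: a steady per-second stream with one spike at the end
stream = [21.0 + 0.1 * (i % 5) for i in range(120)] + [35.0]
alerts = [i for i, r in enumerate(stream) if ingest(r)]
print(alerts)  # only the spike (index 120) should be flagged
```

The point is not this particular algorithm but that the volume and continuity of sensor data push analysis toward windowed, streaming treatment rather than the batch queries most big data teams are used to.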
Phil: Why has IoT adoption been limited until now, Pareekh? What are some of the challenges enterprises are facing in their IoT strategy and implementation?
Pareekh: The IoT services offerings are still evolving, and this sector is facing several roadblocks from enterprise clients, service providers, and technology evolution. We are observing four challenges faced by enterprises that are stunting adoption:
1) Enterprise technology complexity is inhibiting IoT proliferation. Because older enterprise systems do not have the capacity to handle so much data flow, legacy stacks need to be modernized. That demands a lot of investment commitment, which restricts the scope of IoT implementations. Also, greenfield IoT implementations are very rare for organizations, so service providers need to carefully select APIs, SDKs, and gateways for easy integration of IoT solutions with other enterprise applications (see the sketch after this list).
2) The lack of alignment of IoT with other enterprise digital and transformation initiatives is limiting IoT's value potential. The value of IoT is much more powerful when it is combined with other technologies, such as artificial intelligence, machine vision, and intelligent automation, as discussed earlier in the Amazon Go example. IoT initiatives must be aligned with the broader digital strategies of the organization, which makes up-front planning essential. Service providers need to get involved from the digital strategy formulation stage with clients and have to be very strong in IoT consulting.
3) Fragmentation of IoT platforms and IoT standards is creating interoperability issues. At present, 300+ IoT platforms are available, so service providers face challenges developing platform-specific solutions for easy integration. We expect major consolidation in the IoT platform space, similar to what happened in the ERP market over the last couple of decades, but, for now, the major challenge for service providers is to identify the right IoT platforms to invest in.
4) Lastly, data security is becoming a major concern for IoT services. Service providers are building capabilities to address the security concerns in IoT, but it takes time to prove the robustness of these solutions.
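To make the integration challenge in point 1 concrete, here is a hedged Python sketch of a gateway process bridging sensor readings from an MQTT broker into an enterprise application's REST API; the broker address, topic, endpoint, and payload format are hypothetical, and the sketch assumes the paho-mqtt (1.x API) and requests libraries.

```python
import json
import requests
import paho.mqtt.client as mqtt

BROKER = "gateway.example.com"                # hypothetical plant gateway / MQTT broker
TOPIC = "plant1/line3/temperature"            # hypothetical sensor topic
ENTERPRISE_API = "https://erp.example.com/api/sensor-readings"  # hypothetical REST endpoint

def on_message(client, userdata, msg):
    """Forward each sensor reading to the enterprise application via its REST API."""
    reading = json.loads(msg.payload)         # e.g. {"device": "t-42", "celsius": 21.3}
    try:
        requests.post(ENTERPRISE_API, json={"topic": msg.topic, **reading}, timeout=5)
    except requests.RequestException as exc:
        print(f"forwarding failed, would need local buffering: {exc}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883, 60)
client.subscribe(TOPIC)
client.loop_forever()
```

In a brownfield estate, most of the effort sits in that single `requests.post` line: mapping device payloads onto whatever schema the ERP, MES, or asset management system actually exposes.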
Phil: And how did the IoT Blueprint analysis turn out?
Pareekh: This Blueprint analysis was interesting and we evaluated 18 service providers for this study.
The As-a-Service Winners are service providers that are being entrepreneurial alongside their clients and innovating in IoT, building new markets, paradigms, digital systems, and data flows around a breadth of different types of connected devices. The service providers included in this quadrant are Accenture, Atos, Cognizant, EPAM, HARMAN, IBM, HCL, TCS and Tech Mahindra. Accenture leads the pack due to its strong consulting base and execution scale across industry verticals. Atos has a strong client base in Europe and good manufacturing and analytics capability. HARMAN is a mature player in industrial IoT, with a strong vertical focus, including automotive; with Samsung's acquisition of HARMAN, its IoT capabilities will be augmented further. Cognizant has very solid experience with its clients' digital journeys and brings Design Thinking to IoT implementation. EPAM has good breadth in its IoT portfolio and a full-stack approach. A strong partnership environment, engineering heritage, and IoT implementation capability are HCL's key strengths. IBM is focusing on delivering cognitive IoT solutions and has a broad array of platforms and tools in its IoT offerings, with real potential to develop IoT-specific algorithms. TCS has deep competencies in multiple industry verticals, possessing the talent and skills to manage large-scale IoT projects. Tech Mahindra leverages its manufacturing heritage in IoT and has a strong implementation record.
High Performers are service providers that execute well around older Industrial M2M models and are exploring and investing in newer IoT. They are also demonstrating growing traction with clients in defining and delivering at scale against business outcomes and co-innovation. The service providers included in this quadrant are Dell, Infosys, NTT DATA, NIIT Technologies, Syntel, and Tieto. Infosys has strong IoT capability in multiple industry verticals and a deep focus on IoT security. Dell Services is strong in IoT infrastructure, and NTT DATA can become a major global IoT player by integrating the Dell Services offerings into its portfolio. NIIT Technologies is focused on IoT solutions mainly in the travel and insurance industries and has the capability to scale its IoT offerings into other verticals. Syntel has strong capability in IoT integration and can capitalize on its IT client relationships. Tieto is co-investing with customers in IoT and has a great portfolio of verticals in the Scandinavian region.
Execution Powerhouses are service providers with deep pedigrees and competencies in Industrial Internet work on a global scale and tremendous resources, but which have to catch up in modern IoT innovation and entrepreneurship. Genpact and Luxoft reside in this quadrant. Luxoft is very strong in the automotive and BFSI domains and has a good delivery presence in both Europe and North America. Genpact, on the other hand, has a strong partner ecosystem in manufacturing with its lean digital approach.
High Potential service providers demonstrate vision and strategy in IoT implementation but have yet to gain momentum in the execution of it. VirtusaPolaris has high potential with post-merger growth showing promise through rapid maturation as a modern IoT player.
Phil: So what should we be watching for in the next few years in IoT?
Pareekh: Over the next few years, we will see more IoT adoption. We will be watching six key trends.
1) Mega outsourcing deals in IoT services might become a reality. If IoT is to realize the trillions of dollars in value predicted by many analysts, billion-dollar IoT services outsourcing deals should not be far away. As IoT point solutions become more effective and mature, organizations will go for large-scale implementations of IoT.
2) More IoT adoption and application in verticals beyond manufacturing. The usage of IoT will also expand beyond traditional large companies to customers of different sizes, including SMBs, and into more consumer applications.
3) IoT partnerships are likely to grow further as the ecosystem expands. Because IoT services need both hardware and software components, IoT service providers will collaborate with software and hardware providers for IoT network and connectivity requirements. Service providers will also develop APIs for IoT integrations and vertical-specific, plug-and-play digital solutions for rapid implementation of IoT solutions.
4) More robust security standards will evolve so that data can flow securely and easily between IoT and other enterprise applications. IoT platform winners may start emerging, which will give confidence and clarity to both enterprises and service providers.
5) The IoT market will see M&A as global and Indian service providers look for specialist and boutique consulting capabilities in the IoT space to gain rapid access to industry domain and technical expertise. For example, Samsung recently acquired HARMAN to expand its IoT portfolio.
6) Lastly, service providers will integrate their IoT offerings into broader digital transformation or Digital OneOffice™ offerings by involving their cloud, mobile, analytics, automation, and security divisions. IoT consulting expertise will become a genuine differentiator. Service providers will develop more predictive and prescriptive IoT analytics capabilities and will talk more about algorithms.
Phil: What are your plans for IoT coverage in 2017?
Pareekh: There are three aspects to our IoT coverage. Firstly, IoT is an important component of the Digital OneOffice™, so we will cover IoT extensively along with other HfS analysts as part of our Digital OneOffice™ coverage. Secondly, we will segment the IoT market and provide deeper insights into individual segments. Currently, we are conducting an Industry 4.0 services study that will provide deeper insights into the manufacturing IoT market, and we plan to cover smart cities IoT and consumer IoT separately in upcoming Blueprints. Finally, IoT is all about the ecosystem, and we plan to extend our coverage beyond traditional service providers to IoT platforms, telecom players, hardware players, and emerging startups.
Phil: Last question: if you had to sum up the state of IoT in one sentence for our readers, what would you say?
Pareekh: I would say that IoT is for real but not yet transformational for consumers. As you said earlier, we need those real digital experiences coming alive with IoT!
A free copy of the HfS IoT Services Blueprint is available to download from hfsresearch.com for a limited time. Register your account and get your copy!
Pareekh Jain is Research Vice President Engineering Services, IoT, Telecom and Manufacturing at HfS Research. He established the global engineering services practice at HfS Research which covers mechanical engineering services, embedded engineering services, software product engineering services, PLM services, and Industry 4.0. His IoT coverage includes consumer IoT, industrial IoT and smart cities. Click here for his bio.
A company’s security posture changes often. The change can be company-created, for example, by opening an office in a new geography or entering a business with different regulatory requirements for data protection. Security posture also changes as new threats like previously unknown malware emerge, and more sophisticated techniques for hacking evolve.
When engaging a managed security services provider, it’s tempting to believe that keeping up with changing security posture is “being handled” by the provider. But is it?
Providers Often Forgo Innovation For Operating Efficiency
A very common complaint among outsourcing and managed services clients is that providers rarely suggest changes unless the client brings them up – unless, of course, the change benefits the provider's ability to run the process. In security environments, this heads-down approach goes beyond ineffective – it can cause significant damage to clients as threats and mitigation options change quickly.
Yes, providers generally do a security posture assessment before beginning the engagement. However, in our current blueprint research we found little evidence that providers re-assess security posture formally during the ongoing engagements.
Recently, in fact, we even heard of one provider that regularly discovered threats in a client environment but didn’t report them to the client because the particular threat types were out of scope of the engagement. The client found out only months later, and by accident, about the omissions.
Even setting aside such egregious scenarios of intentionally not alerting the client, many providers miss threats. They miss them because they're not looking for them and because their analytics engines aren't detecting new patterns.
Be Proactive With Incident Monitoring And Reporting
There are many ways you can work with your managed security services provider to ensure that changes to your security posture are being addressed. From most quickly implemented to longest, here are some actions you can take:
First and foremost, monitor news and trends in security and threat intelligence. Don't wait for your provider to flag new threat types to you.
Be proactive in asking questions about changes and new threats. Sometimes even a quick email asking the provider about a new ransomware technique that you read about will spur discussion about making changes to the service scope.
Include security market changes and news as part of monthly meetings. Make it an agenda item to discuss what’s happening in the market. And build into the provider’s mindset not to wait for the regular meetings to bring up new events.
Expand the scope of your engagement to include regular security posture re-assessments. This can depend on your industry and other factors, but it might be quarterly, semi-annual, or annual.
Include a new engagement metric on the provider’s ability to find and address new threats. The provider’s ability to keep your data and organization protected from threats even as those threats change needs to be part of the provider’s success metrics if it isn’t already.
Bottom Line: Don’t let inertia set in on your security managed services engagement—make sure your engagement includes specific, proactive approaches to staying current with your security posture.
We hear a lot about how retailers are trying hard to bridge the online and in-store experience for customers, but have you thought about how this concept can help patients in healthcare? VCU Health, for example, is a forward-thinking hospital that is looking outside the hospital walls for ways to create a better experience and outcome for stroke patients before they even reach the ER. Partnering with the ambulance authority and technology providers, VCU Health is testing remote assessment of the patient during the ambulance journey to shorten time to treatment. Led by neurologist Dr. Sherita Chapman Smith, this hospital's story involves a passion for modern and mobile patient care, a lot of collaboration, and some real outside-the-box thinking to fine-tune the idea and bring it to life.
At the heart of the effort is empathy – making an effort to “get inside” the experience of each person involved, understand their needs, and how to address those needs both simply and effectively.
The group that Dr. Chapman Smith gathered to the table included individuals from the local ambulance authority, the VCU Health Telemedicine Center, and technology provider swyMed, to determine what was needed to have a secure and stable system that would work and work well for all users. To get a patient perspective, the hospital reached out to specialty actors who have been trained to act in patient scenarios with medical students and residents, to give feedback on how they should interact with patients. The team trained these patient “stand-ins” on how to act out symptoms for a stroke.
These "patients" were picked up in an ambulance and connected via teleconference to the vascular neurologist in the hospital, who conducted a remote assessment; when they got to the hospital, the scenario had them quickly advanced to the next stage of treatment. Afterwards, each one shared feedback via survey and interview: did they feel safe, did they feel connected with the neurologist, were they comfortable, what did they think of the audio/visual quality? Participants ranged in age and ability to account for differing comfort with technology and levels of hearing. The hospital also compared the responses with bedside evaluations. The feedback, combined with the experience of the physicians and EMTs, has led to proposals for changes to the protocol and to the solution.
As the project moves along, they keep zeroing in on what will make the patient comfortable, and whether that works for the physician and EMT in the ambulance.
What makes it work?
Internal and External Network of Active Participation: "It's a small group of vascular neurologists at VCU," said Dr. Chapman Smith, "so I just asked my colleagues – can we give this a try?" She talked to her department chair, who connected her to the Chief of Emergency Services Operations and Medical Director of a local EMS agency; she then reached out to the communication office and the ambulance authority, bringing in representation from every group with a stake in how it would work, and how easily and smoothly. A small community banded together to test what will work for the hospital, the patient, and the EMT, and to provide feedback. They have roles in working through implications for protocol, simulations, and dry runs.
Steady Visual Connection: “We wondered if the patient really needs to see the physician or EMT from within the ambulance,” said Dr. Chapman Smith, “but a main comment from the patient simulators was that it put them at ease to see a face versus just hear a voice… just a voice can add to the anxiety.” So the ambulance clearly needs a steady and secure connection with high enough bandwidth as it makes its way to the hospital. A modem, antennae, and single carrier connection did not do the trick; in test runs, the ambulance encountered multiple dead zones. “We want to be sure wherever we go, we can do the assessment/exam without a drop.” So, as part of the solution under development, swyMed software monitors for connections and can switch cell towers and antennas to get the best quality signal at the lowest bandwidth. It’s part of a portable solution the team developed to keep a live-video connection to a doctor all the way to the medical center.
Ease of Use and Access: During the assessment, the neurologist wants to be able to see the patient, but not have to click arrow keys to move around a camera. Taking this into consideration, the team designed a set of predefined commands such that a command would move the camera to a certain spot to look at an arm or a hand with as few arrow clicks and mouse moves as possible. Also, the physicians and EMTs want a mobile solution: physicians don’t want to be limited by being at a desktop computer; and the EMTs want something that is portable between vehicles, something not every ambulance has to have, since they are not all in service all the time. These insights all came from interviews, observations and dry runs.
There are a number of healthcare providers working inside the walls to create a better and more effective experience for health and care, but what happens before and after that care can have a significant impact on outcomes as well. The work VCU Health is doing is an example of human-centered, not hospital-centered or technology/telehealth-centered, care. The hospital is on a journey – it still has to finalize the protocols and roll out the remote assessment with real patients – but it's a worthy example of forward thinking that shows how healthcare providers can step outside the storefront and provide remote services that can really impact the quality of care.
A memorable exchange I once had with a former HR colleague went like this:
Me: "When Workforce Planning accounts for cascading gaps because you filled some jobs from within, that's commonly viewed as HR best practice."
Colleague: "Oh really? Well, I think best practice is simply the practice that works best!"
Borrowing a line from the classic movie Cool Hand Luke … his statement “helped get my mind right.”
So one suggestion coming out of my initiation into the world of practical HR thinking: whenever you hear someone say "it's HR best practice," perhaps you should ask if they're following a blueprint crafted specifically for their organization and business context. If they're not, odds are that particular practice will come under some scrutiny soon – and perhaps, shortly thereafter, so will the individual who architected it.
Many of us were a bit taken aback when we heard highly regarded Zappos was generously paying new hires to quit if they were dissatisfied, and not just because it was likely deemed more cost-effective in the long run. It was mostly because the company’s brand is totally about “best customer experience imaginable” and this is so much more than a tag line. One of countless examples is that their customer service reps never use scripts. Genius, common sense, or both. You decide, but also think about whether this would work for a phone company. Fat chance as they say.
As With New Employees, Best is Mostly About Fit
Elsewhere, a number of well-known large companies including LinkedIn, Virgin America, Best Buy and Netflix have started experimenting with unlimited paid time off. The rationale: time away from the job helped with employee productivity; e.g., by avoiding burn-out. Beyond that benefit, trusting employees not to take advantage of the company can make them feel – and therefore act – like part owners of the business. This practice worked for these employers, particularly when employees and managers discussed adequate coverage for key duties in their absence, but clearly it's not a universally great fit. Consider the impact on an impending re-start of a nuclear power plant if even one senior-level nuclear or safety engineer was in urgent need of some downtime. "Adequate coverage" is in the eye of the beholder.
Outside the realm of potential life-and-death consequences, however, innovative crowd-funding company Kickstarter abandoned its unlimited vacation policy when it concluded the policy was sending some type of (subliminal?) message to employees to take less time off. So a creative HR practice designed to minimize burn-out was actually burning people out!
As in the aforementioned exchange with that colleague, best practice does indeed come down to what works in a particular business context; and when you're talking about a new HR practice under consideration, desired corporate culture might be the #1 element to focus on. In high-tech startups, a very informal, "we're one family" culture and typically some equity are used to attract top talent – arguably also to compensate for a lower initial salary. By way of contrast, when was the last time you saw someone's canine companion taking a stroll inside a blue-chip investment advisory firm?
Bottom Line: HR practices are “best” when they support both a company’s culture and its workforce strategies designed to create a great customer experience.
Let’s not be wedded to any particular best practice within the HR / HCM domain, as best practices are really tools to effectively manage an ever-changing operating landscape.
Everywhere I turn, service providers are talking about how they're going to enable services clients to delight end customers. There's nothing wrong with aspiring to delight customers; in fact, it's an admirable goal. But let's take a step back and talk about what really matters when it comes to customer experience. Partly due to the increased expectations set by our more digital world, customers want and expect things to be simple and easy. When I order an Uber or an Amazon package, it arrives at my door in the time predicted. Am I delighted? Not really. Am I a loyal customer who spends increasingly more money with these companies? Yes.
Full disclosure, we talk about delighting customers in our OneOffice concept of using a customer focus to align business operations. After all, in a customer-centric utopia, smiling, happy, loyal customers are the ultimate goal. But right now, I think it’s time to talk more realistically and focus on the basics. As a customer, I want to get my package on time, my question answered simply and easily.
Here are some service provider promises in marketing materials out there now:
“Elegant creative designs that go beyond average usability to deliver individualized experiences that charm, delight and engage”
“Utilizing the science of data and a unique approach and focus on the art of the possible, (we are) leading the way in designing transformative customer experiences that delight and engage”
“Our vision is to make our customers experience the delight of their customers”
It all sounds wonderful, but let’s get a bit more realistic. Think about the last time you as a customer felt really thrilled by the service you received.
We should take a good look at how we can start preventing bad customer experiences, which have a much greater potential to do business damage than great experiences have to do good. I recently participated in 4 – literally 4 – online chat conversations regarding an order that arrived damaged. None of the chats had a record of the previous ones, or of the initial order. It was the most anti-omnichannel experience I've ever had, and it seems everyone has one or more of these stories. For many companies, there's a lot of work to be done to improve basic customer service.
Take this as food for thought. Satmetrix, the company that owns the Net Promoter Score (NPS), benchmarks customer satisfaction annually using NPS. Its report shows that the industry with the highest NPS is retail, with an average of 58. That's the highest. The lowest is internet service providers, at 2. The standard for "world class service"? 75. Even the top-rated customer service companies (e.g., USAA, Nordstrom, Apple) are hardly close.
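For readers who haven't computed an NPS themselves: respondents scoring 9-10 are promoters, 7-8 are passives, and 0-6 are detractors, and the score is simply the percentage of promoters minus the percentage of detractors. A quick Python sketch with made-up survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6); ranges from -100 to 100."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical responses from 20 customers to the 0-10 "would you recommend us?" question
responses = [10, 9, 9, 8, 8, 7, 10, 6, 5, 9, 10, 8, 7, 9, 4, 10, 9, 8, 3, 10]
print(round(nps(responses)))  # 10 promoters - 4 detractors out of 20 -> NPS of 30
```

Against those industry averages, even a respectable-sounding survey can translate into an unflattering score.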
Aspire to delight, but focus on the results that really matter.
Let's face it, as much as it's a cheerful concept and inherently the right thing to do, how does delightful customer service translate to business value? The business goals are to increase loyalty and repeat sales and reduce churn in retail; lower admissions/re-admissions and improve patient health in healthcare; reduce claims leakage in insurance; and improve regulatory compliance in BFSI. So the idea is that you want to engage your consumers in ways that impact these types of outcomes.
I do believe customer delight exists, and it's certainly relevant and valuable. But trying to put delight into some systematic, algorithmically programmed process is a waste of time if you don't have the right design and talent. So, yes, set everything up – connect the front- and back-end systems so that employees have the information they need, set up digital channels for customers to communicate, and then, the most important piece – hire the right people who are empowered to act on it. Make the goal to simplify communication and the ease of doing business to generate loyalty with your customers.
Bottom line: Make life simpler for your customers, and loyalty (and hopefully delight!) will follow
As I said about forgetting omnichannel when your basic customer service sucks, make the customer experience goals about personalization, consistency, and simplicity. If delight follows, fantastic! Look to pivot operations toward OneOffice to become nimbler, more intelligent digital organizations that deliver on customer needs. Now that would be delightful.
In our bid to make our research more visually appealing, we are posting the first version of our IT Services market primer. It shows the first round of the 2017 HfS market size and forecast for IT Services for 2015 to 2021. We will do a full update of the forecast at the end of Q1, when we have a chance to analyze all the vendor results for 2016.
This chart gives our top-level view of the IT Services market in numbers. The market we focus on here is our high-value market – by this we mean the outsourcing/managed services and professional services markets – and we exclude standalone support and training from these numbers.
We will be producing more graphics like this; the next will be a top-line view of the BPO market. We'll also be writing up our thoughts on both markets in a PoV by the end of the month.
In the post-digital world, no one cares much about “offshore” as a strategy – it has become part of the fabric of managing a global operating model, where operations leaders just tap into whatever global resource they need to achieve their desired outcomes. This doesn’t mean that traditional “offshore” global delivery locations, such as India and the Philippines, are going bust overnight. But it does mean the playing field is leveling out as the need for emerging skills trumps the desire simply to reduce labor costs.
Our new State of the Industry Study, conducted with KPMG across more than 450 major global enterprises, shows that an increasing majority of customers of traditional shared services and outsourcing feel they have wrung most of the juice offshore has to offer from their existing operations and aren't looking to increase offshore investments. When we compare enterprise aspirations for offshore use between the 2014 and 2017 State of the Industry studies, we see a significant drop, right across the board, in plans to offshore services. Organizations are now either looking to make their existing offshore operations more effective or to reduce them where they can (especially in F&A and HR), using new technologies and smarter process management.
It’s all about future scalability without the linear resource investments
The difference between the new style of automation-rich intelligent operations and offshore-centric traditional operations is growing. It's a bit like comparing the growth of Walmart to that of Amazon. For many decades, the success and growth of Walmart was largely tied to selling more retail products by continually adding new stores and continually expanding its supply chain to support them (although that has started to change with its belated online strategy and acquisition of Jet.com). The firm could produce a linear business forecast that tied revenues to employees and capital infrastructure investments: expansion and profitability always depended on investing in more people to serve the needs of an increased clientele – both the end customers and suppliers. With Amazon, so much of the customer front end is intelligently automated that adding customers often requires very little additional labor or capital investment – most front-line customer support is completely automated, and most of the point-of-sale promotions are driven entirely by cognitive tools and smart algorithms that tie customer needs and preferences to the products on offer.
The offshore model is being disintermediated by intelligent automation in a similar way to how Amazon disrupted the traditional retail supply chain
The same dynamic is seeing affordable offshore people-based services augmented, or even replaced, by the almost-free fruits of intelligent automation. While Walmart was always an attractive outlet to push products to market, that business model is suddenly no longer viable when you can push your products to customers without the need for new investments in capital infrastructure or staff.
The emerging brand of more packaged operational services, outcome-based services, and As-a-Service offerings will be much more location-neutral. It just doesn't matter as much to the client where the service is delivered – they will only care if they have a reason to, such as compliance or latency. It's not dissimilar to what has happened in manufacturing: over the last 20-30 years, it made a load of business sense to displace, for example, 5,000 onshore factory workers with the vastly cheaper labor on offer in locations such as Taiwan and China, but as manufacturing automation advanced, the same products could be made by 100 workers managing machines. It gradually became more cost-effective to bring the work closer to where end customers were situated, to speed up inventory replenishment and reduce transportation costs. Why is it any different for finance, procurement, or HR – wouldn't you rather have support services that were more culturally aligned with your staff and had a better understanding of your business needs?
You could argue that this dramatic shift is caused by automation, or by a desire for organizations to have more control over parts of their operations. We've seen examples of large organizations growing onshore application development teams, partly because they need additional resources given the increasing number of complex customer-facing applications they are designing, but also because those applications need to address onshore customer needs more directly – with greater personalisation and cultural affinity.
Offshore provides truly effective application teams in terms of speed of development and the technical quality of the final applications, but it is less able to deliver the wow factor needed for the digital economy – especially in areas that require cutting-edge design and alignment with emerging digital business models. DevOps environments and agile have also made onshore development more cost-effective and help deliver the same disciplined development ethos offshore has delivered. This does not mean that application development and maintenance disappears from offshore – far from it. It just means services will be delivered by more globally diverse teams and be more outcome-oriented, with offshore leading the compliance and technical quality aspects of the delivery – at least for applications.
However, we think this is only part of the story, particularly as you move into other process areas where there isn't a hugely creative element and the service can be better delivered through automation as processes are standardized (such as back-office F&A, HR, and procurement). In addition, areas where cognitive tools and virtual agents are emerging – and where self-learning systems are really starting to work effectively – are also slowing the need to add bodies offshore. This is where the real change lies.
The Bottom Line – No more “location, location, location”, it’s now “skills, skills, skills”…
In the post-digital world, no one cares as much about offshore anymore. Offshore will be an ever-decreasing part of the consideration for operational managers and their C-Suites. Location will still play its part as a cost lever in some circumstances, but it's becoming a side issue in most cases. Service is becoming outcome-led and driven by automation – people add flair and handle exceptions – and the HfS/KPMG survey shows that enterprises aren't thinking about offshore as an issue: it is either an ingrained part of a legacy operation that is shrinking over time, or a component of a more streamlined, automated, As-a-Service delivery model. What is clear, however, is the need for skills to drive business outcomes; if those skills can be found offshore, that is a bonus, but it is not the deciding factor.
The Indian IT/BPO services majors should also be more concerned about President Trump's stance on outsourcing than about any other factor of the last 20 years. Not only is offshoring of IT and BPO slowing because of lessening demand, but increased political pressure and policies driven by the Trump administration are completely changing the game. When it comes to IT services and BPO, it's no longer about "location, location, location" – it's now all about "skills, skills, skills".
Since we published our first report on blockchain, we have continued to talk to players in the industry about how this fast-moving market is changing and growing. Compared to last year, there's more discussion about security and privacy (evolving from the "blockchain is unhackable" talking point that was popular last summer), more talk about non-financial examples like using blockchain to help with supply chain compliance issues, and a hunger to get beyond POCs into valuable operational execution.
Recently we spoke to Santosh Kumar, Rob Ellis, and Mani Nagasundaram from HCL about blockchain trends. HCL shares many characteristics with the players we included in the report, such as:
Basing its blockchain expertise within its financial services practice
Building expertise in some key industry hot buttons like international money transfer, asset tracking, and trade operations
Creating POCs with global banks like one HCL did on cross-border money transfers across subsidiaries
Exploring partnerships with several key blockchain technology vendors like Ethereum and ERIS Industries
Regarding trends, HCL sees a lot happening in security and privacy, as well as regulatory agencies stepping up to help businesses form governance policies around blockchain. We've seen in the past few months that while the blocks in the chain may not be hackable per se, there have been identity thefts, fraud, and further concerns about public blockchain networks.
The HCL team notes that transactions are well executed in blockchain, but identity validation and asset validation are less mature. And valuation of assets still needs to happen in the real world, so they caution over-optimism in moving quickly to broad blockchain adoption.
Also, adoption may be slowed down until we can answer the key question, “who owns the network?” HCL’s current thinking is that there’s likely to be one or two per industry and that moving or crossing networks will be difficult (HfS agrees that network interoperability is a big problem. See my prior blog on network interoperability issues here.)
They also believe that maturity in blockchain comes in three phases and that blockchain mirrors the Internet itself in this maturity curve:
Operating business processes better with blockchain
Changing operations using blockchain
Using blockchain to create new business models, processes, and activities
When you get to the discussion of new business models, HCL has a few scenarios that it shares (see Exhibit 1 for an example). We like HCL's ability to explain not just the ins and outs of the technology, but blockchain's impact on business. In the blueprint guide on blockchain, we scored providers highly on innovation when they have strong business stories and the ability to demonstrate blockchain's potential to prospective clients.
Bottom Line: 2017 will be an important validation year for blockchain
As HfS continues to research HCL and its competitors, we’re looking for the following in 2017:
Movement beyond POCs into live implementations
An example of inter-company blockchain work (remember, most POCs right now are intra-company, which is why the network question didn’t come up much this year)
Some hardening lines in the partnership area as the winners and losers on the technology side become clearer and providers get pickier about which vendors they bring into client engagements
We started off the new year at HfS with the launch of the Capital Markets Operations Blueprint last week. This is our first coverage of the key dynamics in capital markets and builds on our BFS research, following the HfS Mortgage As-a-Service Blueprint published mid-last year.
Policies, politics, and structural market challenges are plaguing capital markets firms, raising the stakes in partnerships with service providers
Going into 2017, we find banks and capital markets firms are cautious as they continue to endure a volatile environment with no signs of letting up. Policy ambiguity across the US and European markets, political uncertainty, and structural market changes continue to plague the capital markets industry. Meanwhile, low interest rates and, as a consequence, compressed bank margins across sectors have created new waves of cost pressure. Capital markets firms continue to struggle to generate enough revenue to counter their rising cost of capital.
To add to this perfect storm, the revenue-generating aspect of this industry is under fire as well. Capital markets firms have had to abandon categories of products due to new regulations. They are more challenged to attract and retain clients that expect different, digitally enabled levels of service with faster turnaround times across the ecosystem, particularly in wealth management. As more big-ticket fines and penalties hit the headlines, public confidence and trust are continuing to erode, and at the same time, the competitor landscape is expanding for the biggest players with the continued success of community banks, regional banks, and fintech disruptors.
Overall, banks and capital markets firms are severely challenged in setting strategies for long-term sustainability in a changing market and need several strategies in play to meet short-term cost pressures. Traditional cost management – cutting back trading desks and front-line compensation – has not yielded results of the magnitude required to significantly improve profitability.
As a result, we believe that capital markets firms will undergo large-scale operational transformations in 2017 and beyond.
Since the early to mid-2000s, global technology and business services providers have taken over large parts of the back and middle office processes for banks and capital markets clients. They are now in a unique position to help rethink and run more Intelligent Operations as capital markets clients figure out their strategies to tackle these market challenges. Some of the key buyer-service provider dynamics include:
Back Office Processes Continue to Dominate the Services Landscape: The capital markets operations market started a little over a decade ago with back-office BPO processes offshored to IT service providers. Today, these processes are the majority of work engagements, prominent in 63% of contracts in our analysis. Major service areas include clearing and settlement, corporate actions, reconciliations, fund accounting, collateral management, data management and reporting, investor operations, and product control.
Market Forces and Regulation Stimulating New Demand: With global regulatory bodies placing continual pressure on banks and capital markets firms, there are new areas of opportunity for service providers to step in and help clients meet regulatory compliance requirements in different ways. Regulatory data management and reporting, analytics modeling, and model monitoring are some of the biggest areas of growth for service providers.
Industry Staring at Technology-Driven Change: We see multiple initiatives fighting for prioritization among client stakeholders and within service providers' strategies, all related to technology-enabled service delivery in capital markets' operational processes. Platform-based services, provided as a utility, are sparking new interest from clients, especially as these models promise consolidation and economies of scale across internal LOBs and asset classes. Similarly, clients are also driving automation initiatives within each business, led by robotic process automation and some level of machine learning and predictive analytics, to improve operational performance for retained and outsourced functions.
What’s next?
Standardization: We see a sort of "gold rush" for standardization in the foreseeable future of capital markets operations. Service providers, including new entrants and industry veterans, are in a race to find ways to bring more standardization to overcome the significant challenges in data management. The managing director at a midsize PE firm we interviewed remarked, "Although we all have to do reconciliations, everyone's built up in a certain way. The challenge for a service provider or market utility is not the actual processing but standardization in the upstream data that has to be fed in from various systems and the downstream outputs to different stakeholders like regulators and clients where the reporting requirements may be different." Even within the walls of one enterprise client, data metrics, logs, and audit terms, and the systems that consume them across businesses, are varied. The biggest areas of investment for clients in the next few years will be in consolidating and standardizing processes such as reference data management and reconciliations.
Robotic Process Automation: Along with potential cost savings, one of the biggest business benefits of using intelligent automation technologies is the higher level of accuracy and standardization due to the lack of manual errors. It is no wonder that the new breed of automation tools has caught the attention of capital markets clients. We see a strong appetite for automation with RPA at the forefront. In the next year, we anticipate many more implementations, particularly for processes that have not been offshored yet where big bang savings are more possible. In the medium term, the cognitive capabilities and machine learning projects under way today in areas like due diligence and inquiry management will have matured and created more confidence for conservative buyers. This is a big opportunity for new market entrants to come in with an automation-first strategy for displacing incumbents. The key will be in proving domain knowledge by coming to service buyers with industry-specific use cases and examples; don’t expect them to have done the homework in this emerging area.
Industry Expertise: On the subject of domain experience, we see an emerging opportunity to provide ongoing guidance to capital markets clients on changes in regulatory reform and its impact on their operations and compliance needs. Clients have traditionally sought consultative advice from risk advisories and consulting firms, and our primary research reveals that most service providers are not yet perceived by key client stakeholders as experienced enough to take on those advisory roles. We anticipate more acquisitions and strategic partnerships by service providers to bridge this gap, as multiple clients in our research state that they would find value in getting advisory input from experienced operations partners.
Overall, the banks and capital markets firms in our Blueprint research highlighted – and evaluated – the need for a collaborative service provider that is willing to take risks on the critical new initiatives they plan to roll out in the next 12-18 months.
Bottom Line: Whether it’s automation-led, pure-play BPO services, platform investments to drive BPaaS and/or market utilities, or bringing experienced consultants to address regulatory concerns, this high-stakes market demands service providers that are willing to take risks and invest for the long term.
For more details –including visuals of the market activity and analyses of the service providers—click here to access and download the HfS Capital Markets Operations Blueprint. The service providers included in this report include Capgemini, Cognizant, EXL, Genpact, HCL, Hexaware, Infosys, NIIT Technologies, Syntel, TCS, Tech Mahindra, WNS and Wipro.
A couple of months ago we wrote about meaningless data – the seemingly endless spew of pointless information that just starts to grate. Recently, we've started to see another related category of pointless crap – one that is probably going to become more prevalent as organizations seek to increase the ease with which information is conveyed to a public that cannot be bothered to read anything anymore.
That category is pointless crap visualization (PCV): an attempt to visualize something, often a relatively complex concept, that fails utterly to get the point across but looks nice and gets attention because it drops the names of some big vendors.
We recently noticed a thoroughly confusing diagram from one of our analyst colleagues, NelsonHall, that caused us to scratch our heads in utter bewilderment:
PCV From NelsonHall
The diagram is supposed to tell you something about the acquisition strategy of the companies in the triangle. We wrote down a couple of questions about what the chart meant, having not read the associated blog post.
It looks like Cognizant is more likely to make acquisitions than IBM? Really? That seems highly unlikely given the huge difference between the two companies' track records – and the fact that Cognizant has a much smaller war chest for M&A, especially after its massive $2.7bn investment in Trizetto. We suppose you could limit the view to purely IT services – but a tuck-in acquisition is just as likely to be IP-based as it is to add niche skills, and even then we'd expect IBM to spend a great deal more, its Software group having notoriously deep pockets for acquisitions. Cognizant has made some significant acquisitions like Trizetto, but like all the offshore firms it has been pretty gun-shy about inorganic expansion compared to the big traditional technology firms.
Cognizant/TCS are more likely to acquire than NTT or Fujitsu? Mmmm… Fujitsu has been fairly quiet on the acquisition front for a few years, but you cannot count it out of the acquisition game – it made a few acquisitions in 2016 and some very large purchases in the past, and given its cloud capabilities in Asia, it seems likely to want to build on its consulting capabilities, particularly in Europe and the US. And NTT – we may certainly see a lull in activity as Dell Services gets absorbed, but NTT has been one of the most acquisitive of the services firms over the years, so this again seems at odds; NTT certainly seems more likely to acquire than TCS, the least acquisitive of the already reluctant offshore providers.
The inclusion of CSC, using the CSC logo… er, seems a bit unnecessary. In fairness, it may just be the choice of the CSC logo – but CSC is merging with HPE's Enterprise Services business and will no longer exist as a standalone brand – so we do wonder how useful it is to know they won't acquire…
Also, what is the difference between a "tuck-in acquisition" and an active acquisition? To say that IBM is not an active acquirer seems odd – again, it may be a narrow view of just the IT services business, but we're not sure that view really helps anyone considering IBM as a partner, given that any software acquisition brings IP which adds to the richness of the services offerings.
Again, the distinction between active and tuck-in is not clear for Accenture – which is certainly the most acquisitive and has a very active strategy, with an acquisition made seemingly every week, though some of these are tuck-ins, maybe half of them? You can judge for yourself from the list of Accenture acquisitions we tracked in the table below. We did some work on which providers are making digital acquisitions – not with the same list of providers, but it illustrates the scale of Accenture's acquisition activity compared with some of the providers on the NelsonHall diagram. So we're not sure the visualization really captures the huge difference in acquisition trails between Accenture and the other pure services companies on the list.
HfS – Just The Deals
It is a challenge to come up with good visualizations that support data and summarize the points being made. We have some way to go converting our list of deals above into a statement about the different players – but I think if we do something with our acquisitions data we'll probably convert it into an index and visualize it as a quadrant (oh no) or a simple bar chart. So in a way you have to applaud NelsonHall for trying something new.
To be fair, the associated blog made a lot more sense – but the chart neither reflects what is said nor adds much to the understanding – it just throws names at you without any clear reasoning. What the diagram needs to do is illustrate a point or, ideally, provide a shortcut to understanding. This one does neither. Frankly, it just obscured the valid points being made.
The Bottom Line – in this era of fake news and poor information, analysts have more responsibility than ever to reflect reality
This year HfS is making a clear commitment to visualizing our information better and presenting our perspective in as clear and concise a way as possible. Like the chart above, we may not always get it right – but hopefully that is where our community comes into play, and you will let us know what we get right and what we get wrong.