A modern-era lift and shift: Danske Bank seeks massive cost-to-income ratio improvement with the help of Infosys

As the IT services industry braces for a slowdown in 2023, with large deals taking eons to complete, Infosys offers renewed hope to ambitious service providers that big, meaty deals can still be thrashed out, signing a 1,400-employee, $450 million outsourcing engagement with Danske Bank.

Danske Bank, a Danish multinational banking and financial services corporation founded in 1871, recently announced a new strategy to drive its cost-to-income ratio down to 45% from ~60% by 2026. A few weeks later, it announced the deal with Infosys for “digital and technology transformation,” including the sale of Danske IT, the Bangalore-based captive of Danske Bank that provides IT development and operations.

However, when you venture into regions like the Nordics, the whole premise of cost-to-value differs from deals in the US or UK, as it’s very expensive to displace people. So rather than basing the deal’s value on tangible cost takeout from onshore–offshore employee displacement, firms like Danske have to look to other metrics to justify the cost. Those metrics are tied firmly to the technology and talent (aka “digital”) capabilities Infosys can bring to the table to make Danske a more innovative, efficient, competitive—and ultimately profitable bank.

Danske Bank pivots with new “Forward ‘28” strategy and financial targets

Any enterprise that engages in a “big” outsourcing engagement, defined as $100 million+, does so for specific reasons. In other recent big banking, financial services, and insurance (BFSI) deals we’ve written about, such as Vanguard, State Farm, or Core Logic, the “why” is almost always a palpable need to modernize or risk irrelevancy—or at least minimize disintermediation. For Danske Bank, the “why” is less of a digital catch-up and more of a next-chapter strategy after settling its money laundering fraud debacle and implementing a massive overhaul of its risk and regulatory compliance protocols. 2023 is a new chapter, and Danske Bank is ready to move on with its new Forward ‘28 strategy and financial targets for 2026.

It has reaffirmed its focus as a Nordic bank with specific commercial, retail, and private banking propositions for Denmark, Norway, Sweden, and Finland. It has also unveiled revised financial targets, including a target 45% cost-to-income ratio and the resumption of dividend payments. The bank will execute its strategy by upping investments in digital platforms, advisory services, and sustainability from DKK 3 billion to DKK 4 billion (about $437 million to $584 million). This includes plans to further develop its customer-facing digital solutions and modernize its technology infrastructure to enable better customer experiences and drive operational efficiency. Infosys is a key piece of the execution strategy.

What is this “digital transformation” that improves cost-to-income ratios?

What does “digital transformation” actually mean to banks? There are thousands of answers, but one of the most succinct measures of digital impact is improved cost-to-income ratios.

This ratio simply shows cost as a percentage of income—literally, “How much does it cost to run the bank versus how much revenue is generated?” Digital transformation could impact both sides of this equation—driving new revenue and lower operating costs. The World Bank pegs the average cost-to-income ratio for all banks across 40 developed countries at about 60%. This stat has been remarkably consistent for about a decade.
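To make the arithmetic concrete, here is a minimal sketch in Python with purely illustrative figures (these are not Danske Bank’s actual financials), showing how pulling both levers—trimming costs and growing income—moves the ratio from roughly 60% toward a 45% target:

```python
def cost_to_income_ratio(operating_costs: float, operating_income: float) -> float:
    """Return operating costs as a percentage of operating income."""
    return operating_costs / operating_income * 100

# Illustrative figures only -- not Danske Bank's actual financials.
baseline = cost_to_income_ratio(operating_costs=24.0, operating_income=40.0)      # 60.0%

# Digital transformation can work both sides of the equation:
# trim costs (automation, cloud consolidation) AND grow income (new digital revenue).
transformed = cost_to_income_ratio(operating_costs=21.6, operating_income=48.0)   # 45.0%

print(f"Baseline: {baseline:.1f}%  ->  Target: {transformed:.1f}%")
```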

In reality, there are three value levers to pull in a digital transformation engagement to create shareholder value, and Infosys will use them all:

1.) Better technology foundation. Modernizing core systems, automating broken processes, and moving activities into the cloud can significantly impact cost; less time is needed to oversee processes, legacy applications can be retired and consolidated, and leadership has better data upon which to base decisions. However, companies need to invest in consulting work and new technology to reap the longer-term rewards. Often a provider such as Infosys will apportion these costs over the duration of the contract so there is immediate “value” from the client end.

2.) Access to deep talent expertise at scale. When partnering with a firm like Infosys, by far, the largest lever to pull is the availability of tech and business support talent at much lower labor rates. However, the client would need to be able to offload onshore staff to offset with offshore, which can be problematic and costly, depending on the locations involved. For example, staff in the Nordics and Germany can only be removed at significant expense, but transitioning 1,400 India-based staff over to Infosys to manage and upskill should reap longer-term benefits for Danske.

Firstly, Infosys is vastly experienced at training staff on new technologies and domains, which should make the re-badged Danske staff more effective at their jobs. Secondly, Infosys will supplement Nordic talent with other staff worldwide to plug critical gaps in areas such as cybersecurity, cloud, and AI. Danske Bank emphasizes there are no planned redundancies as part of this deal.

3.) Ability to focus on the core. Danske can now pivot its whole approach to reinventing its banking operations to exploit the capabilities Infosys brings to the table; for example, the Infosys Topaz AI suite and the Infosys Cobalt cloud service set. Instead of being beset by old ways of doing things just to keep the IT wheels on and the regulators at bay, Danske can now focus heavily on achieving business outcomes through better technology deployment. This is where the “income” part of the equation is most impacted.

As illustrated below, the role of strategic IT and business process services firms continues to morph from cost takeout to an ambitious “and” proposition of cost and innovation to improve cost-to-income ratios. It’s the combo pack that drives value.

The Bottom Line: Big transformation outsourcing deals could impact cost-to-income ratios by doing more than banging on the cost-take-out lever.

Congratulations to Danske Bank and Infosys on their new strategic partnership. Danske Bank has done a commendable job defining its roadmap and the key metrics for measuring its results; these same measures need to factor into how Infosys’ performance is assessed. While only the bank can control and drive cultural change, it must work closely with Infosys to drive cost efficiencies and applied innovation to impact cost-to-income ratios. This simple metric should have been dramatically reduced in all banks over the past decade, given the caliber and extent of digital investment. It has not, bearing out the fact that cost reduction alone is not enough.

To conclude, clients and suppliers need to jump to a new S-curve of value creation where the client pays for performance, not just effort (see Exhibit below). Performance should be measured on some attribute of business value, not just cost and efficiency; in this case, the cost-to-income ratio.

Most services relationships get stuck at Stage 1 and start witnessing diminishing returns because you just cannot keep squeezing the lemon for more juice. In a performance-driven relationship, the supplier and client share the risk and reward while providing services at the lowest cost possible starts to become a given.

Time will tell whether Danske and Infosys can truly make the leap into a successful Stage 2 relationship, but we laud the way this engagement has come to fruition, with the necessary C-suite attention and focus from both sides.

Posted in : Artificial Intelligence, Banking, BFSI, IT Outsourcing / IT Services, Sourcing Best Practises


Most benefits administrators are missing the opportunity to address a multi-generational workforce

HFS Research has gone where no one has been before as we take a serious look at benefits outsourcing and expose how many of today’s providers still operate in the dark ages.

For our groundbreaking HFS Horizons: Employee Benefits Administrators, 2023 report, we evaluated 25 benefits administrators and service providers (see Exhibit 1) and interacted with 125 enterprises that contract with them. It’s the most extensive and comprehensive study of this industry yet. US healthcare is shifting: employer medical underwriting has surpassed commercial insurers, the aspirations of a post-pandemic multi-generational workforce continue to evolve, and the proliferation of technology is accelerating—yet benefits administration appears stuck in a legacy paradigm.

We evaluated the 25 benefits administrators for their ability to address the cost (Horizon 1), experience (Horizon 2), and health outcomes (Horizon 3) for employees globally.

Exhibit 1: Only two of the 25 benefits administrators and service providers addressed the triple aim of care: reducing costs, enhancing experience, and impacting health outcomes

Note: Service providers in each Horizon are listed alphabetically.

The multi-generational global workforce has constantly evolving expectations

Five generations coexist in the current workforce; they have distinct world views, varied aspirations, and different pandemic recovery strategies. Yet, when it comes to benefits—whether health (wellness and healthcare), wealth (financial planning and retirement), or professional (career and training)—there is an extraordinary bias toward a one-size-fits-all model. Employers and benefits administrators are guilty of not sufficiently investing in understanding their employees’ evolving needs and not reimagining how they address their health, wealth, and professional needs beyond traditional solutions.

Same old, same old

Many benefits administrators are satisfied with delivering the same set of services through the same modalities, year after year. The intrinsic apathy is befuddling, given that generational needs continue to evolve and remain unmet. According to the US Centers for Disease Control and Prevention (CDC), the US population’s life expectancy has regressed to 1996 levels—in 2019 it was 79 and now it is 76.1—and 60% of the population has a chronic condition such as diabetes, obesity, or hypertension. The Employee Benefit Research Institute (EBRI) indicated that in 2023, Americans’ confidence in retiring comfortably has declined. Given these deteriorating statistics, benefits administrators and employers must do more to address the deterioration in their sphere of influence and reimagine a paradigm to keep improvements sustainable.

A new generation of providers is leveraging technology to make a difference

Still, there is hope. A new generation of providers is leveraging technology to make a difference. Technology enablement is driven by a slow paradigm shift that engages employees, converts feedback into solutions, and measures outcomes beyond the tactical. These benefits administrators utilize data smartly and effectively to improve benefits utilization rates, track efficacy, and improve access. Proliferating mobile apps are becoming table stakes, while many firms are making usability a foundational capability to address experience gaps. The extensive use of technology impacts costs, though the underlying cost of benefits is slowing material cost reduction.

The Bottom Line: The needs of today’s multi-generational global workforce require a new paradigm—a challenge benefits administrators must address immediately or be disrupted.

The HFS Horizons: Employee Benefits Administrators, 2023 report includes detailed profiles of each benefits administrator and service provider, outlining capabilities, strengths, provider facts, and development opportunities.

HFS subscribers can download the report here.

Posted in : Healthcare, Healthcare and Outsourcing, HFS Horizons, OneOffice


Ten tectonic reasons why the shift to ChatGPT-4 from ChatGPT-3.5 will change your world


Most people hell-bent on criticizing ChatGPT don’t realize the current version is merely a prerequisite for a much more powerful version of the technology: GPT-4.  Since its launch, ChatGPT has rocketed to 100m users in 60 days and already boasts 13m daily users – it is most probably the greatest AI invention ever.

Overnight, you have been gifted with software providing an incredible ability to generate human-like text, understand and respond to queries, perform simple tasks, and even hold a conversation. When you fully immerse yourself in GPT-4, you’ll quickly see how rapidly the errors of the previous version have been ironed out with some significant performance improvements:

Ten tectonic shifts from GPT-3.5 to GPT-4

1.  Scale, speed, and power – up to 10x more capacity for information synthesis and language patterns. GPT-3.5 had a maximum request length of roughly 3,000 words; GPT-4 comes in two variants, one handling roughly 6,000 words and the other 24,000.

2. Code-writing significantly improved. Its ability to generate code snippets or debug existing code can reduce workloads of several weeks down to mere hours (see the illustrative sketch after this list):

“Write code to train A with dataset B.”

“I’m getting this error. Fix it.”

3. Greater ability to respond to emotions expressed in the text. GPT-4 can recognize and respond sensitively to a user expressing sadness or frustration, making the interaction feel more personal and genuine.

4. Handle more complex natural language processing tasks – such as natural language understanding, automatic text generation, and dialogue systems.

5. Can accurately generate and interpret text in various dialects and languages – such as semantics in regional or cultural differences to meet the needs of global users.

6. GPT-4 can properly cite sources when generating text – critical to helping individuals, enterprises, and academia govern the risks of plagiarism and inaccuracy. GPT-4 also performs exceptionally well in various standardized tests, including the bar exam, LSAT, GRE, etc.

7. GPT-4 solves complex mathematical and scientific problems like astronomy, physics, chemistry, and biology.

8. Much more creative and collaborative. It generates stories, poems, essays and even jokes with improved coherence and creativity. Can edit and collaborate with users to generate creative and technical writing tasks, marketing copy, process design, and even song compositions while learning a user’s writing style.

9. GPT-4 has eyes. It can analyze images: users can ask ChatGPT to describe a photo, analyze a chart, or even explain a meme.

10. GPT-4 is the spark that ignites the AGI bomb. GPT-5 will take us even closer to the AGI (Artificial General Intelligence) milestone of software possessing video modality, deep sensory perception, creativity, and fine motor skills.  The way in which everything is experienced is in play.
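To make point 2 tangible, here is a minimal sketch of what “I’m getting this error. Fix it.” looks like when wired into code, using the OpenAI Python client as it existed in mid-2023; the API key placeholder, the broken snippet, and the helper function are illustrative assumptions, not a production pattern:

```python
import openai  # pip install openai (0.27.x-era client, as used in mid-2023)

openai.api_key = "YOUR_API_KEY"  # placeholder

BROKEN_SNIPPET = '''
def average(xs):
    return sum(xs) / len(xs)   # crashes with ZeroDivisionError on an empty list
'''

def ask_gpt4_to_fix(code: str, error: str) -> str:
    """Send broken code plus the error message and return GPT-4's suggested fix."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a careful code reviewer."},
            {"role": "user", "content": f"I'm getting this error: {error}\nFix it:\n{code}"},
        ],
        temperature=0,  # keep the suggested fix as deterministic as possible
    )
    return response["choices"][0]["message"]["content"]

print(ask_gpt4_to_fix(BROKEN_SNIPPET, "ZeroDivisionError: division by zero"))
```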

The Bottom-line:  The GPT-4 impact creates a whole new way you must think about business operations

GPT-4 is poised to have a dramatic impact on business cases, and the S-curve we once knew as linear now has a huge kink in it: where we could save 20% here or 30% there with smart automation, chatbots, and simply using better software and cheaper labor aligned to better processes is now in for a major shake-up – and this will happen quickly.

For example, one onshore call center operation has hooked up a GPT-4 bot to its Salesforce system and can already see how its staff could be reduced by 50% within months.  Many other cases are emerging almost daily as we all tinker with the disruptive potential this technology is going to have.

The difference with the old S-Curve is that simple tasks can be learned, honed, and replicated, and even the prompt engineers will be automated once the process is running smoothly.

At the moment, GPT-4 has a cap of 25 messages every three hours, but when this is opened up, the floodgates will open, and I can only leave you with three things to take away from this:

  1. We need to comprehend how AI works;
  2. We need to understand how AI will affect the real world;
  3. We need to learn how to make money with AI.

Access to AI has now been democratized, and the masses are still learning to ask the right questions.  You must all adapt quickly and use AI to your advantage. Your only limit is your ability to ask the right questions.

Posted in : Artificial Intelligence, Automation, ChatGPT, Customer Experience, Employee Experiences, GPT-4, IT Outsourcing / IT Services, The Generative Enterprise, Uncategorized


Are you providing services for the Generative Enterprise? HFS is researching who’s got what it takes


We know these are the early days of Gen-AI, but the speed of adoption is breathtaking, and we need to understand how well prepared services experts are to advise enterprises on how best to roadmap their generative journeys.  For example, since its launch, ChatGPT has rocketed to 100m users in 60 days and already boasts 13m daily users – it is most probably the greatest AI invention… ever.

Overnight, your firm has been gifted with software providing an incredible ability to generate human-like text, understand and respond to queries, perform simple tasks, and even hold a conversation… are you really up to the task of winning in the era of the Generative Enterprise?

  • HFS is launching the industry’s first competitive analysis of professional services firms and the value they are creating for enterprise clients through the adoption of and experimentation with generative AI tech
  • HFS’ Generative Enterprise ‘articulates the pursuit of AI technologies based on Large Language Models (LLMs) like ChatGPT and GPT-4 to reap huge business benefits to organizations in terms of continuously generating new ideas, redefining how work gets done and disrupting business models steeped in decades of antiquated process and technology’.
  • HFS will determine the Generative Enterprise Services Market Leaders, Enterprise Innovators, and Disruptors across leading and emerging services firms
  • HFS CEO Phil Fersht will be leading the research, supported by Executive Research Leader David Cushman and other key HFS research leaders Saurabh Gupta, Melissa O’Brien, Tom Reuner, and Niti Jhunjhunwala.
  • The study will kick off in July 2023 and be released in September/October 2023 with a hugely anticipated impact across the global HFS networks

If you work for a services firm providing early-stage generative AI services or you’re an enterprise leader seeking to share your experiences and vision with us, please drop us a note here.

Happy Generating folks!

Posted in : Artificial Intelligence, Business Process Outsourcing (BPO), ChatGPT, GPT-4, IT Outsourcing / IT Services, OneEcosystem, OneOffice, The Generative Enterprise, Uncategorized


ServiceNow can become the digital foundation of the Generative Enterprise™


Operations leaders face some seriously unprecedented challenges. They have to balance the macroeconomic Slowdown with the Big Hurry to innovate and keep up with pacesetters. Yet, it is not a question of doing one or the other—they have to address both challenges simultaneously. Thus, a digital foundation is essential for survival. 

Let’s hear what HFS Executive Research Leader Tom Reuner has to share about his exhaustive learnings from the 2023 Horizons report on ServiceNow services…

A disruptive ServiceNow ecosystem is operationalizing the journey toward the Generative Enterprise™

The pacesetters are taking the road to autonomous operations, but generative AI is detouring them – in fact, it’s providing the whole industry with a massive detour! The goal is autonomous, data-driven decision-making and exception processing. Machines aren’t taking over the world, but executives need an autonomous mindset. The focus is on making smart decisions based on the data their systems and teams create. This is where the vibrant ServiceNow ecosystem cuts in.

ServiceNow can become the digital foundation of the Generative Enterprise™. The vanguard of service providers in that ecosystem is in a pole position to operationalize that journey, as HFS’ seminal study on the ServiceNow provider landscape highlights.

ServiceNow’s rapidly expanding capabilities are driving operational change

The pace of change and maturation across the ServiceNow ecosystem is astounding. This is less about headline-grabbing announcements, such as releasing a large language model (LLM) for code generation ahead of ServiceNow’s big conference in Las Vegas. Rather, the ecosystem is pivoting from a focus on implementation to a focus on transformation. ServiceNow is no longer an IT-centric capability discussion but one about enabling transformation outcomes ranging from industry solutions to GBS to ESG and beyond.

Operations leaders benefit from ServiceNow’s rapidly expanding capabilities and the partner ecosystem’s innovative solutions and approaches, leading to a bifurcation in that ecosystem. Our report focuses on the vanguard of that ecosystem enabling broader transformation, while many other service providers (outside of our report) remain focused on implementation only.

The ServiceNow ecosystem is pivoting to transformation

The transformational outcomes go far beyond ServiceNow’s heritage in IT workflows. If anything, the broader market has not yet woken up to the fact that half of ServiceNow’s new revenues come from business workflows. Sometimes you scratch your head listening to providers talking about the transformation journey they are enabling because you wouldn’t have thought they were talking about ServiceNow as the underlying platform. For example, take TCS’ supply chain transformation for a leading manufacturer in APAC. It integrated existing ERPs into one system to streamline workflows and data collection and supplanted core ERP capabilities with ServiceNow functionalities. Suffice it to say those engagements are highly disruptive.

Emerging themes and capabilities take ServiceNow into new buying centers

With this pivot to transformation, we see themes emerging that many wouldn’t associate with ServiceNow. Enabling GBS journeys is a red-hot topic, yet only very mature organizations take their workflows cross-domain or even cross-function. At the same time, Accenture is pushing capabilities deep into BPO—beyond having its SynOps platform built on ServiceNow. As with ERP modernization and application management services (AMS), all these transformation journeys take ServiceNow into new buying centers. Its traditional non-IT buying centers are in customer and employee services.

Ecosystem engagement models are emerging

The other development that surprised us was the emergence of ecosystem engagements. Especially for emerging themes such as ERP modernization, AMS, and cloud operations, partners such as Celonis, Dynatrace, and AppDynamics are coming to the fore. Dynatrace and AppDynamics are broadening ServiceNow’s AIOps and observability capabilities, and providers like Atos offer automated remediation. At the same time, Celonis is re-entering the ServiceNow scene. In 2021, ServiceNow and Celonis announced a partnership, and we expected it to end in nuptials. It went quiet, but Celonis is re-emerging with a broad set of use cases.

Pure plays are scaling out

Yet, the ServiceNow ecosystem is not just about the big GSIs. Pure plays like Thirdera and NewRocket are leveraging M&A to scale out. Plat4mation is the poster child for industry solutions in manufacturing, while Cask is deeply entrenched in the public sector. GlideFast’s sweet spot is taking over projects that have run into challenges, referencing the deep technical knowledge of the platform. The leading pure plays are scaling up and have surpassed the revenues and capabilities of many GSIs. They drive scaled transformations and build out industry-led solutions. They are strong provider choices just outside Horizon 3.

Horizon 3 market leaders are demonstrating transformational outcomes

Last but by no means least, congratulations to the Horizon 3 market leaders. These leaders’ shared characteristics include blending a compelling vision of digital transformation with deep ServiceNow capabilities. The wheat gets separated from the chaff when providers demonstrate transformational outcomes enabled by ServiceNow rather than depicting ServiceNow roadmap thinking.

Accenture pushes the innovation envelope by covering eight industries with specific deep solutions. Perhaps the most telling aspect of its approach is that ServiceNow capabilities are no longer the centerpiece of the narrative. The narratives have shifted to transformation, and the transformational outcome has moved to center stage. Deloitte has been the launch partner for FSI (financial services and insurance) industry-led solutions and is scaling out GBS (global business services) engagements, while DXC, after a transition period, is getting its mojo back with differentiation in operationalizing cloud transformations.

EY is kicking the tires on all things advisory and risk while scaling out GBS. KPMG has a similar approach but also spearheads an ESG (environmental, social, and governance) solution in partnership with Celonis. Lastly, Infosys blends the service management process and domain consulting expertise using investments in the ESM Café platform as differentiation to enable a productized delivery approach. Exhibit 1 outlines the detailed rankings of our research.

Exhibit 1: The vanguard of the ServiceNow services ecosystem

The Bottom Line: The ServiceNow ecosystem is pivoting toward transformation, with the Generative Enterprise as the next frontier.

ServiceNow is no longer a capability discussion. Yet, there is a lack of clarity on the new IT operating model. There is agreement on the experience outcomes enabled by workflows designed in the cloud. The more organizations accelerate transformation initiatives, the more service providers need to provide guidance on designing a cloud target operating model. It is abundantly clear the next frontier is the hugely disruptive context of organizations having to deal with the impact of generative AI. HFS plans to lead the way in this seismic shift.

HFS subscribers can download the report here.

Posted in : Artificial Intelligence, Automation, HFS Horizons, service-management, service-provider-analysis, The Generative Enterprise


Why lazy transactional lawyers should be very scared of GPT-4


In the land of the blind, the one-eyed lawyer will not be king for much longer. In my view, ChatGPT is exposing lazy people compared to smart, diligent ones, and lawyers are no exception to the rule – everyone is being held to account by their ability to use LLMs properly and professionally.

And this goes for anyone in knowledge professions, such as technology, marketing, research, science, consulting, accounting, etc.  You are only as smart as what you know and how intelligently you interact with the Internet to enhance your knowledge.  But let’s focus on this ChatGPT disruption of lawyers because who doesn’t love to pick on lawyers?

I’ve been riveted by the recent cases of lawyers and ChatGPT, where lazy lawyers are getting smashed in the media for using bad ChatGPT information.  Let’s get to the point: with the current version of ChatGPT, a lot of bad information is recycled, and some people are not smart enough, or are too lazy, to check it properly. It’s the same with people researching for exams or corporate presentations – if you know what you’re looking for, ChatGPT can save you a bunch of time and make you far more productive – and even make you appear more knowledgeable.

The technological shift from ChatGPT-3.5 to ChatGPT-4 will have a seismic impact on the legal profession

Moreover, the impending upgrade from ChatGPT-3.5 to ChatGPT-4 brings a ten-fold increase in information synthesis power, a much greater ability to cite facts correctly, find nuances and mistakes in information, and keep refining its capabilities.  In fact, a team of legal experts at Stanford Law School has claimed that while ChatGPT-3.5 came in the bottom 10% of the Uniform Bar Exam, GPT-4 passed with flying colors, approaching the 90th percentile.

Lawyers: the lazy versus the skilled will be exposed 

In the case of these current legal blunders, lazy lawyers are being exposed because they are – let’s face it – too apathetic to keep current in their jobs and always looking for shortcuts to bill their clients insane amounts of money.

“Strategy” is needed in cases where the law is open to interpretation and the outcome is not cut and dried. Many lawyers can be immensely valuable – and I have been in awe of one lawyer who showed more skill, emotional intelligence, and insight than I have probably witnessed in my entire career. There are also many others who are clearly diligent and smart, who I would use again.

However, I have also worked with many (and observed many) who are simply a huge waste of money. Having lived through many contractual negotiations, there is rarely any “strategy” from some lawyers. They are highly-paid billable administrators following processes and delegating most of the work to juniors to rack up the billings.

It’s these lawyers who should be very scared of GPT-4, especially in areas like outsourcing, where most of these contracts are cut and dried, and there is very little room for “innovation” beyond keeping the green lights on.

The Bottom-line: In the land of the blind, the one-eyed lawyer will not be king for much longer

When you’re in a situation where lawyers and procurement are sparring over the useless minutiae of standard contracts, you are simply wasting hundreds of thousands (or millions) of dollars.  Seriously, what is the point of paying $1m+ to draw up a valueless, standard outsourcing contract when you can find a smart lawyer who can do it for a fraction of the price using sophisticated LLMs?

The smart clients are those who know how to manage lawyers, hold them to set budgets, and know where they are useful beyond being glorified – and very expensive – process followers. And those lawyers who know how to use ChatGPT to be more productive – and continually increase their knowledge – will quickly rise to the forefront.

For example, would you go to a dentist who hasn’t read a dental journal in 20 years and doesn’t use the latest software and equipment?  Of course you wouldn’t!  If my lawyer were super in tune with their practice area, I would want them to be 20%+ smarter and more productive because they know how to use ChatGPT properly.  I want more for less, and GPT-4 will deliver that to those who learn how to use it effectively.

Posted in : Artificial Intelligence, ChatGPT, GPT-4, Large Language Models (LLMs), Legal Services Outsourcing, Outsourcing Advisors, Sourcing Best Practises, The Generative Enterprise


Cognizant gets savvy with Ravi to resurrect its mojo at the intersection of industry and technology

Many people have viewed Cognizant as losing its mojo over the past few years, with staff attrition among the worst in the services industry last year, a demoralized Indian organization, and a general lack of raison d’être.  What had been the poster child for modern offshore-centric outsourcing for a decade and a half has struggled since activist investor Elliott Management squeezed the life out of the firm in 2017.

“Why Cognizant”. Can the poster child for spectacular offshore-centric services growth find a new raison d’être?

Fast-forward to 2023 and the firm has a charismatic new CEO at the helm, Ravi Kumar S, who is looking to reinvigorate the firm’s culture while also setting out a new course for growth in the era of The Generative Enterprise™.  A noticeable uptick in bookings this year already indicates that the Cognizant mojo is starting to reemerge.

Back in the good old days, the firm could do little wrong by challenging Accenture’s strategy – driving a hard digital bargain and offering a simplified approach that many clients wanted:  easy to partner with and able to deliver what they wanted at much more competitive rates.  In short, many clients wanted to work with Cognizant because they loved the energy and simplicity of the firm, which was in stark contrast to the consulting-led arrogance of the traditional IT services model.  Simply put, client-centricity was always table stakes for the firm during its rampant growth days.

Cognizant had achieved what most of the industry still fails at today: Everyone understood the “Why Cognizant”, versus just the “what” and the “how”.

In fact, Cognizant can genuinely lay claim to “inventing” digital with its 2012 “SMAC” stack philosophy, which was swiftly followed by Accenture’s 2013 re-branding of the SMAC stack as “digital”.  “They think like we do” was one Accenture leader’s declaration at an analyst briefing in 2016.

Sure, Cognizant, at $20bn, still has an array of outstanding capabilities, but without a clear message to the market, it has become difficult for enterprises to understand what makes the firm a desirable transformational partner that can deliver both cost and innovation impact.  Winning by embracing its heritage means reinforcing the three core strengths in its roots: a confluence of industry and technology, flexibility and client centricity, and entrepreneurial spirit.

The firm needs a new identity, renewed direction, and a reenergized culture to reclaim its former glory.  However, the precise ingredients that provided the magic formula in the past may not be the right ones in the medium to long term as the services industry faces the vast dichotomy of transforming clients at speed and at pre-inflationary prices.

Enter new CEO Ravi Kumar S, former Infosys rainmaker, ready to right the ship. The HFS team descended on Cognizant’s 2023 US Analyst and Advisor Summit – the first significant analyst event since Ravi took the helm – ready to hear the master plan.

Victimized by its past success, Cognizant became encumbered with low-value work while lacking a spark to attract new business.

Cognizant is the firm that made digital real for various industries over the past two decades. Its digital focus purveyed a powerful value proposition for clients and investors, yielding substantial dividends, revenues, and profits. But as digital became Horizon One table stakes, Cognizant became encumbered with supporting technologies, processes, and agreements associated with yesterday’s tech – not the…

Posted in : Artificial Intelligence, Automation, BFSI, Business Process Outsourcing (BPO), Healthcare, IT Outsourcing / IT Services, Manufacturing, OneOffice, Talent and Workforce, The Generative Enterprise, Utilities & Resources


AI-washing is taking over humanity…


While everyone a year ago thought that nuclear war could threaten humanity’s future yet again, 61% of Americans now say that AI threatens humanity, according to a new Ipsos/Reuters poll. 70% of Trump voters believe this, compared with 60% of Biden voters. Not quite sure why we shared that last stat, but it seems to convey how thoroughly this new fad of “AI-washing” is taking us over.

AI means both Everything and Nothing

On the back of the generative AI hype, “AI” has quickly become the new catch-all phrase in modern IT, despite being around for 50 years. People have never been so aware of fake news, internet scams, security breaches, etc., as the public trust in technology reaches a new low.

Massive public misidentifications are making the AI term both a scapegoat for unpopular job layoffs and a magic hype-wand for vendor marketing, as literally every firm touching technology launches its “GenAI” suite of offerings on a daily basis. The pressure is on executives, investors, public decision-makers, and influencers to skill up fast and learn how to approach the AI craze with cunning instead of credulity.

We must learn to question what is meant by “AI” and challenge its use wherever there is no clear agenda or justification behind it. While the same lack of meaning can be said of the term “Digital,” at least “Digital” tends to be used in a positive context to describe “modern technology,” whereas “AI” is currently being used to describe pretty much anything. AI’s use has become so vague it essentially means “modern computing” in many cases.

However, AI is bloody everywhere

Since the public release of ChatGPT in November, “AI” has been snowballing in usage and popularity. Take a simple Google Trends search, and you’ll see the meteoric rise of the term, with searches for “AI” quadrupling since November. In this timespan, most people everywhere have encountered it.

Whether you are a business leader looking for the next innovation to drive profit or cut costs, or a parent of a school child getting ready for exams, AI has been doing the rounds at dinner tables and coffee meetings as well as getting a high share of attention on mainstream TV news from journalists and politicians across the globe. Even many people’s grandparents ask about it as if it’s some sudden new thing.

AI becomes a fashionable excuse to sack people

Back in the days when jobs were being cut because of “outsourcing,” there was always political uproar, and evil corporates were vilified for destroying livelihoods to save a few dirty dollars. I’ve even had protesters demonstrating outside conferences with the “O” word plastered over them. Suddenly these same corporates (most of whom have already outsourced staff to the bone) are victims of the evil realities of technology, where they have no choice but to whack thousands more “because of AI”. Puh-lease… is AI now some dreaded disease inflicted on our corporations where we have no choice but to fire people to survive? Talk about AI-washing our way to the Disneyland of Delusion…

For example, in a recent article, the BBC explained how telecom giant BT was planning to cut 55,000 jobs this decade, with more than 10,000 of these coming “from using new tech including AI.” However, the largest bulk of the 55,000 layoffs is projected to stem from BT finishing the rollout of fiber technology, a massive long-term strategic project involving thousands of workers. In turn, the success of this project would further reduce maintenance needs due to fiber’s higher durability.

The story was thus, in essence, about technology efficiency gains, reduced waste, and the success of a strategic project – 15,000 layoffs would come from finalizing the project and 10,000 from reduced maintenance. What was the headline of this article? “BT to cut 55,000 jobs with up to a fifth replaced by AI”. While most BT cuts have nothing to do with AI, AI is still in the headline. A more accurate headline for the BBC article could have been: “BT to cut 25,000 jobs due to fiber technology” – it could even get a positive spin: “BT to reduce waste and cut cost due to low-maintenance fiber technology.”

We are yet to see any materialized mass layoffs directly related to AI

We are likely to see more of these supposedly AI-induced layoffs that are not entirely related to AI, and these will, in turn, most probably also increase the scaremongering among ardent AI reactionaries. However, the reality is that we are yet to see any materialized mass layoffs directly related to AI. Although there will surely be layoffs (like IBM envisioning 7,800 fewer workers in 5 years related to AI), there is no indication that these layoffs will not be offset by the massive collective investments being made in AI technology (OpenAI is already worth $30bn) or by other jobs. Goldman Sachs anticipates 300m full-time jobs are exposed to automation, and this message made the headline of a recent Forbes article in a similar vein to the BT news mentioned above, with AI again the culprit at center stage. But in Goldman Sachs’ actual report, the prediction is quickly followed up with: “Worker displacement from automation has historically been offset by the creation of new jobs.”  As so often before, could it be that we will see more of a restructuring of the workforce than a complete collapse? Very possibly so.

Two primary perspectives, then, are tangible and reasonable: AI will impact our jobs, and AI will spur the reinvention of and investment into other, new jobs. Our first POV on ChatGPT in December highlighted precisely this – that we will see impacts on our jobs and enhancements of our productivity but no actual job removal yet – it is simply not visible nor historically justified. The “misleading impression of greatness” that ChatGPT has stirred (quote by Sam Altman) has also created, in one sweeping move, a misleading impression of AI dystopia. Remember when Gartner said your next boss would be a bot during the RPA craze?

The Bottom Line: Let’s learn from this example and keep focused on the task at hand – improving and enhancing the way we work – and stick to concrete use cases instead of idealistic meta-narratives.

As an industry, we would do well to start spreading the simple word that not all algorithms are AI – and that the generative AI we are currently enthusiastic about is still very much an algorithm. We can be sure the spread of AI as a term and as a technology is not slowing down or losing any steam, but we cannot be sure that the term and the tech will remain focused on the same thing. The tangible and productive AI we have today is becoming unmoored from public discourse, and public discourse is power in modern democracies, markets, and minds. After all, we are anticipating a new economy, not no economy.

Posted in : Artificial Intelligence, ChatGPT, GPT-4, Talent and Workforce, The Generative Enterprise


Seven Golden Recommendations to Reinvent Ourselves for the Generative Enterprise


The Pandemic will forever go down as a seismic game changer in our lifetimes – and our careers.  The whole 2+ year experience took a lot out of us, cost so many people loved ones, and changed work/life perspectives for so many.  Now we face a new world where new rules are still being set (and those rules are likely to be no actual rules at all), and we have no choice but to reinvent ourselves if we want to remain relevant in the business ecosystem of the future.

Or we could choose to ignore the change and pray we aren’t assigned to the dinosaur mausoleum anytime soon… It’s critical to prepare ourselves for the Great LLM-ization as AI becomes the interface to the Internet – and to physical business.  LLMs will blow a hole in predictable high-cost operations like call center services and back-office business process services, whose entire business models will eventually become defunct in the wake of technological and behavioral change.

Nothing is quite like it was before, and many of us struggle to adapt to these new uncertain surroundings. 

Scratch that; most of us are struggling to re-adapt because there are no hard and fast business rules or norms these days.  People guard their time religiously, especially when it comes to leaving the house for meetings, events, or office visits.  Meanwhile, many corporates are wracked with politics and toxicity, as workers panicking about upcoming layoffs drive peculiar behaviors.  So many people are existing at home for weeks on end, praying not to get the sack in the next round of layoffs, because they know they lack energy and focus. This stress of uncertainty and unfamiliarity with the emerging work environment is having a major negative effect on our mental stamina.  A lot of folks can barely make it through a full day of meetings these days.

We are living through a time of realization and reevaluation 

While I am not going to advocate that people force themselves into an office (those days are pretty much over), I strongly advocate that everyone refocus on adapting to the emerging work environment in order to re-energize themselves. The current environment demands you meet regularly with colleagues, clients, suppliers, and ecosystem partners if you want to be visible and relevant in your market.

So bloody well do something about it! You need to find your mental stamina to hustle again, learn new ways of thinking, and prepare for the AI-dominant future.

Seven golden recommendations to reinvent ourselves and survive the onslaught of change

1. Accept the way business works has changed… and will keep changing. Accept that the way things are emerging is not necessarily a mirror of the past… how we interact, invest our time, communicate, influence, focus, relax, etc. Get used to change and embrace it.

2. Prioritize meeting in person with clients and colleagues more than ever. Don’t fade away in your cave… the sheer scale of change AI and automation are bringing demands we lock heads and learn together.  There’s nothing wrong with working from home, but nothing beats getting together with colleagues and others to come up with inspired ideas.

3. Adopt an autonomous mindset. Make a real effort to stop yourself and others wasting time on tasks, interactions, and processes that can be automated. Focus your time on making smart decisions based on the data your systems and teams create for you.  And developments in LLMs are adding a whole new dimension to the quality of data and insight at our disposal.

4. Change your narrative from ‘effort’ to ‘performance’. The only way to do more with less is to focus on measuring the outcomes we need and the smartest way to achieve them. Work with people who share that mentality.  We need to focus on speed to data, not some trudging, painful set of activities.

5. Invest time in understanding AI tools and capabilities, or get left well behind. Don’t be a dinosaur – get with the program as AI becomes our interface to the internet. AI is changing business as we know it… and at pace, both electronically and physically.  Large language models are quick and easy to learn and don’t require a Ph.D. in mathematics or computer science.  These tools offer low/no-code environments for developing new workflows and processes, threatening the old guard of programming, where technical staff loved building brick walls to prevent any meaningful business/IT collaboration. Now those walls are crashing down with the onset of tools that find patterns in large bodies of text to predict the next word, create sentences, and assemble paragraphs of coherent content.

6. Humans are ‘back in the loop’ as we have to prompt AI to get ahead of the LLM explosion.  Prompt engineers are the fastest-emerging class of digitally fluent business/tech designers.  We are already using a conversational interface to ask questions and generate text with an LLM in 2023, and we will be unable to avoid it by 2024. Learning how to do this effectively will become a standard skill that all of us are expected to have. You must understand the mental maps to direct what your team does, as LLMs dictate how we interface with the Internet and run our businesses.

The skillset needed to build effective conversational interfaces is not steeped in NLP or deep learning; instead, these LLM orchestration skills demand constant self-improvement in the following (a simple illustration follows the list):

  • Asking questions (design prompts).  ChatGPT, for example, never gives exactly the same response twice. Learn how to prompt your LLM more intelligently with both short and long prompts to compare quality and accuracy.  One of the key benefits of GPT-4 is the ability to absorb very long prompts (as large as thousands of words) at rapid speed.
  • How to iterate.  Try asking the same question in different ways, exploring multiple responses to the same prompt, and then comparing the results, detecting bias, and being aware of it.
  • Evaluating responses is critical, as much of what we have experienced so far is how ChatGPT gets it wrong.  Asking questions in different ways, discovering contradictions, and asking the model to self-assess are key aspects of GPT-4 that have improved significantly since the prior version.
  • Eradicating bias by constantly expanding our understanding of bias in LLMs. ChatGPT, for example, is biased based on the underlying approach used to build the LLM and the data used to train it.
  • How to generate new ideas (Generative Thinking).  The big challenge now confronting us as we approach the Great LLM-ization is to constantly seek new ideas beyond the constraints of our current LLM.  You should ask ChatGPT to summarize, synthesize and find the contradictions in the result it creates.  Invest time in learning how conceptual blending approaches are evolving.
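As a simple illustration of the iteration and self-assessment habits above, here is a hedged sketch using the mid-2023 OpenAI Python client; the prompt variants and the “compare and critique” step are assumptions for demonstration, not a prescribed workflow:

```python
import openai  # pip install openai (0.27.x-era client, as used in mid-2023)

openai.api_key = "YOUR_API_KEY"  # placeholder

# Ask the same question in different ways, then compare the answers.
QUESTION_VARIANTS = [
    "Summarize the business case for moving a bank's core workflows to the cloud.",
    "In three bullet points, why would a bank move core workflows to the cloud?",
    "Argue against moving a bank's core workflows to the cloud, then rebut yourself.",
]

def ask(prompt: str) -> str:
    """One call per prompt variant so the answers can be compared side by side."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

answers = [ask(p) for p in QUESTION_VARIANTS]

# Ask the model to self-assess: surface contradictions and likely biases across its own answers.
critique = ask(
    "Compare these three answers and list any contradictions or likely biases:\n\n"
    + "\n\n---\n\n".join(answers)
)
print(critique)
```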

7.  Understand the significance of the technical improvements of GPT-4… it’s the beginning of the Generative Enterprise. 

HFS’ Generative Enterprise articulates the pursuit of AI technologies based on Large Language Models (LLMs) and ChatGPT to reap huge business benefits to organizations in terms of continuously generating new ideas, redefining how work gets done, and disrupting business models steeped in decades of antiquated process and technology.

We are learning the new version of GPT is ten times more powerful, cites sources, understands dialects, and even has eyes…  just click here to learn more.

Posted in : Artificial Intelligence, Automation, Autonomous Enterprise, Buyers' Sourcing Best Practices, ChatGPT, Large Language Models (LLMs), OneEcosystem, OneOffice


IBM Watson missed the AI revolution, but Watsonx could become the heartbeat of the Generative Enterprise


A decade on from the trials and tribulations of IBM Watson, IBM unveiled its multi-model and multi-cloud Watsonx to drive AI-first enterprises – what we are calling “The Generative Enterprise” at HFS.

IBM is describing the platform as a “full technology stack” for training, tuning, and deploying AI models, including foundation and large language models, while ensuring tight data governance controls.  Watsonx.data focuses on the data scientist; Watsonx.ai on the application developer; and Watsonx.governance is then used to deploy the model using a data model factory to ensure that AI is used ethically and responsibly.

In our view, Watsonx is the first enterprise-grade offering to address the Generative Enterprise holistically.  Here’s our interpretation of Watsonx (with an illustrative sketch after the list):

  • Watsonx.data helps you create the data model. It focuses on the data scientist, leveraging Red Hat OpenShift to prepare, tokenize, train, and validate internal and external data.
  • Watsonx.ai helps you ask the relevant questions (design prompts). ChatGPT, for example, never gives exactly the same response twice. Learn how to prompt your LLM more intelligently with both short and long prompts to compare quality and accuracy. One of the key benefits of GPT-4 is the ability to absorb very long prompts (as large as thousands of words) at rapid speed.
  • Watsonx.ai also helps you iterate. Try asking the same question in different ways, exploring multiple responses to the same prompt, and then comparing the results, detecting bias, and being aware of it.
  • Watsonx.governance evaluates responses, which is critical, as much of what we have experienced so far is how ChatGPT gets it wrong. Asking questions in different ways, discovering contradictions, and asking the model to self-assess are key aspects of GPT-4 that have improved significantly since the prior version.
  • Watsonx.governance helps eradicate bias by constantly expanding our understanding of bias in LLMs. ChatGPT, for example, is biased based on the underlying approach used to build the LLM and the data used to train it.
  • Watsonx overall fosters the generation of new ideas (Generative Thinking). The big challenge now confronting us as we pursue becoming a true Generative Enterprise is to constantly seek new ideas beyond the constraints of our current LLM. You should ask ChatGPT to summarize, synthesize and find the contradictions in the result it creates. Invest time in learning how conceptual blending approaches are evolving.
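To show the pattern we are describing (and only the pattern), here is a hypothetical Python sketch of a governed LLM pipeline along the data / AI / governance split above; every function name here is our own illustrative assumption and is not IBM’s actual Watsonx API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the governed-LLM pattern described above --
# these names are illustrative only and are NOT IBM's Watsonx APIs.

@dataclass
class CuratedDataset:
    records: list  # curated documents the model is allowed to learn from

def prepare_data(raw_docs: list) -> CuratedDataset:
    """'watsonx.data' role: curate and validate the inputs before any tuning."""
    return CuratedDataset([d.strip() for d in raw_docs if d.strip()])

def tune_and_prompt(dataset: CuratedDataset, prompt: str) -> str:
    """'watsonx.ai' role: tune a foundation model and generate a draft answer (stubbed here)."""
    return f"[draft answer to '{prompt}' grounded in {len(dataset.records)} curated records]"

def govern(draft: str) -> str:
    """'watsonx.governance' role: check the draft for bias, missing citations, policy breaches."""
    needs_citations = "citation" not in draft  # trivially simple stand-in for a real policy check
    return draft + ("  [flag: add source citations before release]" if needs_citations else "")

dataset = prepare_data(["Loan policy v3 ...", "  ", "Complaint handling SOP ..."])
print(govern(tune_and_prompt(dataset, "Summarize our complaint-handling obligations")))
```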

The Bottom-line: IBM could have been at the center of the AI revolution but was left out as a bystander. Watsonx has the potential to put IBM front and center of the Generative Enterprise

Watsonx seems very well thought through for AI-powered enterprise use cases, especially for horizontal call centers, HR, and F&A. IBM seems to have learned from its original Watson launch by deploying Watsonx internally first and launching an apps development platform to demystify the technology. However, IBM’s narrative for Watsonx continues to be more technology-centric than business-centric, which it needs to address.

IBM still woos the CIO budget, but that’s only a third of total enterprise tech spend.  We believe IBM runs the risk of missing out on the broader CXO budgets by polarizing itself around the CIO.

Posted in : Analytics and Big Data, Artificial Intelligence, Automation, Autonomous Enterprise, Business Data Services, ChatGPT, Cloud Computing, The Generative Enterprise, Uncategorized
