How Green is Your Cloud?

For many organisations migrating to the cloud, the opportunity to run workloads from energy-efficient cloud data centres is a significant advantage. However, carbon emissions vary from one country to another and, if left unmonitored, will gradually increase as cloud use grows. This issue will become increasingly important as we move into the era of compute-intensive AI, and the burden that cloud places on natural resources will shift further into the spotlight.

The International Energy Agency (IEA) estimates that data centres are responsible for up to 1.5% of global electricity use and 1% of GHG emissions. Cloud providers have recognised this and are committed to change. Between 2025 and 2030, all hyperscalers – AWS, Azure, Google, and Oracle included – expect to power their global cloud operations entirely with renewable sources.

Chasing the Sun

Cloud providers are shifting their sights from simply matching electricity use with renewable power purchase agreements (PPA) to the more ambitious goal of operating 24/7 on carbon-free sources. A defining characteristic of renewables though is intermittency, with production levels fluctuating based on the availability of sunlight and wind. Leading cloud providers are using AI to dynamically distribute compute workloads throughout the day to regions with lower carbon intensity. Workloads that are processed with solar power during daylight can be shifted to nearby regions with abundant wind energy at night.
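
The day-night shifting described above can be sketched as a simple scheduling rule. The regions and carbon-intensity figures below are hypothetical; real carbon-aware schedulers draw on live grid-intensity data:

```python
# Hypothetical carbon-intensity figures (gCO2e/kWh) per region and period.
# A real scheduler would pull live data from a grid-intensity feed.
CARBON_INTENSITY = {
    "eu-solar": {"day": 120, "night": 380},   # solar-heavy grid
    "eu-wind": {"day": 250, "night": 140},    # wind-heavy grid
}

def greenest_region(hour: int) -> str:
    """Pick the region with the lowest carbon intensity for the given hour."""
    period = "day" if 6 <= hour < 18 else "night"
    return min(CARBON_INTENSITY, key=lambda r: CARBON_INTENSITY[r][period])

# Daytime work goes to the solar region; overnight batches follow the wind.
print(greenest_region(12))  # eu-solar
print(greenest_region(2))   # eu-wind
```

In practice, data residency, latency, and cost constraints would be weighed alongside carbon intensity.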

Addressing Water Scarcity

Many of the largest cloud data centres are situated in sunny locations to take advantage of solar power and proximity to population centres. Unfortunately, this often means that they are also in areas where water is scarce. While liquid-cooled facilities are energy efficient, local communities are concerned about the strain on water sources. Data centre operators are now committing to reduce consumption and restore water supplies. Simple measures, such as widening humidity (below 20% RH) and temperature (above 30°C) tolerances in server rooms, have helped companies like Meta cut wastage. Similarly, Google has increased its reliance on non-potable sources, such as grey water and sea water.

From Waste to Worth

Data centre operators have identified innovative ways to reuse the excess heat generated by their computing equipment. Some have used it to heat adjacent swimming pools, while others have warmed rooms that house vertical farms. Although these initiatives currently make little difference to the overall environmental footprint of cloud, they suggest a future where waste is significantly reduced.

Greening the Grid

The giant facilities that cloud providers use to house their computing infrastructure are also set to change. Building materials and construction account for an astonishing 11% of global carbon emissions. Using recycled materials in concrete and investing in greener methods of manufacturing steel are approaches the construction industry is taking to lessen its impact. Smaller data centres have been 3D printed to accelerate construction and use recyclable printing concrete. While this approach may not be suitable for hyperscale facilities, it holds potential for smaller edge locations.

Rethinking Hardware Management

Cloud providers rely on their scale to provide fast, resilient, and cost-effective computing. In many cases, simply replacing malfunctioning or obsolete equipment would achieve these goals better than performing maintenance. However, the relentless growth of e-waste is putting pressure on cloud providers to participate in the circular economy. Microsoft, for example, has launched three Circular Centres to repurpose cloud equipment. During the pilot of its Amsterdam centre, Microsoft achieved 83% reuse and 17% recycling of critical parts. The lifecycle of equipment in the cloud is largely hidden, but environmentally conscious users will start demanding greater transparency.

Recommendations

Organisations should be aware of their cloud-derived scope 3 emissions and consider broader environmental issues around water use and recycling. Here are the steps that can be taken immediately:

  1. Monitor GreenOps. Cloud providers are adding GreenOps tools, such as the AWS Customer Carbon Footprint Tool, to help organisations measure the environmental impact of their cloud operations. Understanding the relationship between cloud use and emissions is the first step towards sustainable cloud operations.
  2. Adopt Cloud FinOps for Quick ROI. Eliminating wasted cloud resources not only cuts costs but also reduces electricity-related emissions. Tools such as CloudVerse provide visibility into cloud spend, identify unused instances, and help to optimise cloud operations.
  3. Take a Holistic View. Cloud providers are being pushed by their biggest customers to improve transparency and reduce their environmental impact. Getting educated on the actions that cloud partners are taking to minimise emissions, water use, and waste to landfill is crucial. In most cases, cloud providers should be expected to reduce waste rather than offset it.
  4. Enable Remote Workforce. Cloud-enabled security and networking solutions, such as SASE, allow employees to work securely from remote locations and reduce their transportation emissions. With a SASE deployed in the cloud, routine management tasks can be performed by IT remotely rather than at the branch, further reducing transportation emissions.
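
The GreenOps measurement in the first recommendation boils down to a simple relationship: emissions are energy used multiplied by the carbon intensity of the regional grid. A minimal sketch, with illustrative (not published) intensity figures:

```python
# Illustrative grid-intensity figures (gCO2e/kWh); a real GreenOps tool,
# such as the AWS Customer Carbon Footprint Tool, derives emissions from
# actual metered usage rather than assumptions like these.
GRID_INTENSITY_G_PER_KWH = {"ap-southeast-1": 431, "eu-north-1": 13}

def estimate_emissions_kg(kwh_used: float, region: str) -> float:
    """Emissions (kg CO2e) = energy used (kWh) x regional grid intensity."""
    return kwh_used * GRID_INTENSITY_G_PER_KWH[region] / 1000

# The same workload can differ by an order of magnitude between regions.
print(round(estimate_emissions_kg(500, "ap-southeast-1"), 1))  # 215.5
print(round(estimate_emissions_kg(500, "eu-north-1"), 1))      # 6.5
```

Even a rough model like this makes the link between cloud use and emissions visible, which is the first step the recommendation describes.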
Cloud Hyperscaler Growth Will Continue into the Foreseeable Future

All growth must end eventually. But it is a brave person who will predict the end of growth for the public cloud hyperscalers. Hyperscaler cloud revenues have been growing at between 25% and 60% annually in recent years (off very different bases – and often counting different revenue streams). Even the current softening of spending across many economies is causing only a slight slowdown.

Cloud Revenue Patterns of Major Hyperscalers

Looking forward, we expect growth in public cloud infrastructure and platform spend to continue to slow in 2024, but to accelerate in 2025 and 2026 as businesses take advantage of new cloud services and capabilities. The sheer size of the market means that percentage growth will moderate – but we forecast 2026 to deliver the highest absolute revenue growth of any year since public cloud services were founded.

The factors driving this growth include: 

  • Acceleration of digital intensity. As countries come out of their economic slowdowns and economic activity increases, so too will digital activity. And greater volumes of digital activity will require an increase in the capacity of cloud environments on which the applications and processes are hosted. 
  • Increased use of AI services. Businesses and AI service providers will need access to GPUs – and eventually, specialised AI chipsets – which will see cloud bills increase significantly. The extra data storage to drive the algorithms – and the increase in CPU required to deliver customised or personalised experiences that these algorithms will direct will also drive increased cloud usage. 
  • Further movement of applications from on-premises to cloud. Many organisations – particularly those in the Asia Pacific region – still have the majority of their applications and tech systems sitting in data centre environments. Over the next few years, more of these applications will move to hyperscalers.  
  • Edge applications moving to the cloud. As the public cloud giants improve their edge computing capabilities – in partnership with hardware providers, telcos, and a broader expansion of their own networks – there will be greater opportunity to move edge applications to public cloud environments. 
  • Increasing number of ISVs hosting on these platforms. The move from on-premises to cloud will drive some growth in hyperscaler revenues and activities – but ISVs born in the cloud will also drive significant growth. SaaS and PaaS are typically growing faster than IaaS – but they are also drivers of growth in cloud infrastructure services.
  • Improving cloud marketplaces. Continuing on the topic of ISV partners, as the cloud hyperscalers make it easier and faster to find, buy, and integrate new services from their cloud marketplace, the adoption of cloud infrastructure services will continue to grow.  
  • New cloud services. No one has a crystal ball, and few people know what is being developed by Microsoft, AWS, Google, and the other cloud providers. New services will exist in the next few years that aren’t even being considered today. Perhaps Quantum Computing will start to see real business adoption? But these new services will help to drive growth – even if “legacy” cloud service adoption slows down or services are retired. 
Growth in Public Cloud Infrastructure and Platform Revenue

Hybrid Cloud Will Play an Important Role for Many Businesses 

Growth in hyperscalers doesn’t mean that the hybrid cloud will disappear. Many organisations will hit a natural “ceiling” for their public cloud services. Regulations, proximity, cost, volumes of data, and “gravity” will see some applications remain in data centres. However, businesses will want to manage, secure, transform, and modernise these applications at the same rate – and using the same tools – as their public cloud environments. Therefore, hybrid and private cloud will remain important elements of the overall cloud market. Their success will depend on their ability to integrate with and support public cloud environments.

The future of cloud is big – but like all infrastructure and platforms, cloud is not a goal in itself. It is what cloud enables – and will further enable – for businesses and customers that is exciting. As rates of digitisation and digital intensity increase, the opportunities for cloud infrastructure and platform providers will blossom. Sometimes they will drive the growth; other times they will be supporting actors. Either way, in 2026 – 20 years after the birth of AWS – the growth in cloud services will be bigger than ever.

Google’s AI-Powered Code Generator Takes on GitHub Copilot

Google recently extended its Generative AI, Bard, to include coding in more than 20 programming languages, including C++, Go, Java, Javascript, and Python. The search giant has been eager to respond to last year’s launch of ChatGPT but as the trusted incumbent, it has naturally been hesitant to move too quickly. The tendency for large language models (LLMs) to produce controversial and erroneous outputs has the potential to tarnish established brands. Google Bard was released in March in the US and the UK as an LLM but lacked the coding ability of OpenAI’s ChatGPT and Microsoft’s Bing Chat.

Bard’s new features include code generation, optimisation, debugging, and explanation. Using natural language processing (NLP), users can explain their requirements to the AI and ask it to generate code that can then be exported to an integrated development environment (IDE) or executed directly in the browser with Google Colab. Similarly, users can request Bard to debug already existing code, explain code snippets, or optimise code to improve performance.

Google continues to refer to Bard as an experiment and highlights that, as with generated text, code produced by the AI may not function as expected. Regardless, the new functionality will be useful for both beginner and experienced developers. Those learning to code can use Generative AI to debug and explain their mistakes or to write simple programs. More experienced developers can use the tool for lower-value work, such as commenting code or scaffolding, and to identify potential problems.
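
The caveat that generated code may not function as expected can be made concrete: whichever assistant produced a function, a handful of unit tests is cheap insurance. A minimal sketch (the prompt and the function are hypothetical):

```python
# Suppose an assistant generated this helper from the prompt
# "return the n largest values in a list" (hypothetical example).
def n_largest(values, n):
    return sorted(values, reverse=True)[:n]

# A few assertions, including edge cases, catch the most common
# failure modes of plausible-looking but wrong generated code.
assert n_largest([3, 1, 4, 1, 5], 2) == [5, 4]
assert n_largest([], 3) == []          # empty input
assert n_largest([7], 5) == [7]        # n larger than the list
print("all checks passed")
```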

GitHub Copilot X to Face Competition

While the ability for Bard, Bing, and ChatGPT to generate code is one of their most important use cases, developers are now demanding AI directly in their IDEs.

In March, Microsoft made one of its most significant announcements of the year when it demonstrated GitHub Copilot X, which embeds GPT-4 in the development environment. Earlier this year, Microsoft invested $10 billion into OpenAI to add to the $1 billion from 2019, cementing the partnership between the two AI heavyweights. Among other benefits, this agreement makes Azure the exclusive cloud provider to OpenAI and provides Microsoft with the opportunity to enhance its software with AI co-pilots.

Currently under technical preview, Copilot X will integrate into Visual Studio – Microsoft’s IDE – when it eventually launches. Presented as a sidebar or chat directly in the IDE, Copilot X will be able to generate, explain, and comment on code, debug it, write unit tests, and identify vulnerabilities. The “Hey, GitHub” functionality will allow users to chat by voice – suitable for mobile users or for more natural interaction on a desktop.

Not to be outdone by its cloud rivals, in April AWS announced the general availability of what it describes as a real-time AI coding companion. Amazon CodeWhisperer integrates with a range of IDEs – Visual Studio Code, IntelliJ IDEA, CLion, GoLand, WebStorm, Rider, PhpStorm, PyCharm, RubyMine, and DataGrip – or natively with AWS Cloud9 and the AWS Lambda console. While the preview supported Python, Java, JavaScript, TypeScript, and C#, the general release extends support to most languages. Amazon’s key differentiator is that CodeWhisperer is free for individual users, while GitHub Copilot is subscription-based, with exceptions only for teachers, students, and maintainers of open-source projects.

The Next Step: Generative AI in Security

The next battleground for Generative AI will be assisting overworked security analysts. Currently, some of the greatest challenges that Security Operations Centres (SOCs) face are being understaffed and overwhelmed with the number of alerts. Security vendors, such as IBM and Securonix, have already deployed automation to reduce alert noise and help analysts prioritise tasks to avoid responding to false threats.

Google recently introduced Sec-PaLM and Microsoft announced Security Copilot, bringing the power of Generative AI to the SOC. These tools will help analysts interact conversationally with their threat management systems and will explain alerts in natural language. How effective these tools will be remains to be seen, considering that hallucinations in security are far riskier than in an essay written with ChatGPT.

The Future of AI Code Generators

Although GitHub Copilot and Amazon CodeWhisperer had already launched with limited feature sets, it was the release of ChatGPT last year that ushered in a new era in AI code generation. There is now a race between the cloud hyperscalers to win over developers and to provide AI that supports other functions, such as security.

Despite fears that AI will replace humans, in its current state it is more likely to be used as a tool to augment developers. Although AI and automated testing reduce the burden on the already stretched workforce, humans will continue to be in demand to ensure code is secure and satisfies requirements. A likely scenario is that, with coding becoming simpler, rather than the number of developers shrinking, the volume and quality of code written will increase. AI will generate a new wave of citizen developers able to work on projects that would previously have been impossible to start. This may, in turn, increase demand for developers to build on these proofs-of-concept.

How the Generative AI landscape evolves over the next year will be interesting. In a recent interview, OpenAI’s founder, Sam Altman, explained that the non-profit model it initially pursued is not feasible, necessitating the launch of a capped-profit subsidiary. The company retains its values, however, focusing on advancing AI responsibly and transparently, with public consultation. The entry of Microsoft, Google, and AWS will undoubtedly change the market dynamics and may force OpenAI to reconsider its approach once again.

Ecosystm Leaders Roundtable: New Forces of Innovation and Purpose Impacting Financial Organisations’ Data Resiliency Needs

The Financial Services industry has waged their battles well over the past few years – they have embraced digital customer and employee experiences and successfully led with data-driven innovation of products and services.

But to continue to survive and thrive, data and innovation will have to be an integral part of their corporate psyche – more so as they face newer forces of innovation that are rapidly changing market conditions.

  • Open banking is changing the way customers engage with banks and financial solution providers and requires traditional organisations to introduce newer channels.
  • DeFi promises to bypass centralised and regulated financial systems to offer innovative and personalised financial solutions directly to the mass markets.
  • ESG requirements are driving financial institutions to transition to a low-carbon, clean-technology economy, spurring Sustainable Finance solutions and green investment assets.

There is no shortage of additional industry forces and buzzwords that will be added to this list, impacting Financial Services providers in an increasingly volatile economic environment. Organisations that will continue to win are those that have a firm eye on building data resiliency, in response to these newer forces of innovation.

Ecosystm research finds that Financial Services organisations in Singapore are leading with data and AI.

  • 51% consider catching up with competition and the ability to improve customer experience as key drivers of innovation and tech-led transformation
  • 47% have placed Data & AI strategy as an integral part of their tech modernisation initiatives
  • 86% will increase investments in either data solutions, AI/machine learning or automation technologies in 2023

Join us and your executive industry peers for this Executive Leaders discussion on how Financial Services players in Singapore are aiming for greater data resiliency to meet ever-evolving customer expectations while satisfying compliance requirements and staying ahead of the digital curve.

Executive Retail ThinkTank: Open for Omnichannel – Achieving Seamless Customer Experiences

Customers today expect a true omnichannel experience from retailers. They want a seamless service, across the physical stores, online shopping and when interacting with contact centres.

As the Retail industry continues to create innovative customer experiences and strengthen eCommerce capabilities, there are unique business challenges to handle, such as demand fluctuations, supply chain dependability, and the resilience needed to avoid out-of-stock situations.

Ecosystm research finds that in the Retail industry:

  • 80% of organizations had to start or re-calibrate their digital transformation efforts in the last year
  • 50% are looking to leverage more digital technology for process automation and customer experience
  • Pricing optimization, demand forecasting, and supply chain optimization are the key focus areas of their AI investments
  • Only 21% of organizations think that they offer a full omnichannel experience

As retailers work to create that omnichannel presence – across digital, contact centre and in-store – they are forced to use a range of different systems and analytical tools to achieve a single view of stock availability, product pricing and customer data. This adds to the complexity of operations and can create delays.

We invite you to join a gathering of experienced peers from the Southeast Asia and ANZ Retail sector, such as Scott Coppock, CIO, David Jones and Country Road Group, as well as senior subject matter experts from Amazon Web Services (AWS) and Infosys to discuss improvements such as:

  • Significantly improving customer experience by enhancing existing systems using headless eCommerce systems.
  • Providing customers with enhanced capabilities such as AI-based recommendation engines.
  • Reducing costs of operation through application modernization.
  • Improving sales forecasting and supply chain planning.

Ecosystm Snapshot: Kyndryl Taps AWS to Broaden their Cloud Platform Capabilities

Last week, Kyndryl became a Premier Global Alliance Partner for AWS. This follows other recent similar partnerships for Kyndryl with Google and Microsoft. This now gives Kyndryl premier or similar partner status at the big three hyperscalers.

The Partnership

This new partnership was essential for Kyndryl to reinforce its independent reputation and its global presence. And in many respects, it is a partnership that AWS needs as much as Kyndryl does. As one of the largest global managed services providers, Kyndryl manages a huge amount of infrastructure and thousands of applications. Today, most of these applications sit outside public cloud environments, but at some stage in the future, many will move to the public cloud. AWS has positioned itself to benefit from this transition – Kyndryl will be advising clients on which cloud environment best suits their needs, and in many cases will also run the application migration and manage the application once it resides in the cloud. To that end, the further investment in developing an accelerator for VMware Cloud on AWS will also help to differentiate Kyndryl on AWS. With a high proportion of Kyndryl customers running VMware, this capability will help them migrate these workloads to the cloud and run core business services on AWS.

The Future

Beyond the typical partnership activities, Kyndryl will build out its own internal infrastructure in the cloud, leveraging AWS as its preferred cloud provider. This experience will mean that Kyndryl “drinks its own champagne” – many other managed services providers have not yet taken the majority of their infrastructure to the cloud, so this experience will help to set Kyndryl apart from their competitors, along with providing deep learning and best practices.

By the end of 2022, Kyndryl expects to have trained more than 10,000 professionals on AWS. Assuming the company hits these targets, they will be one of AWS’s largest partners. However, experience trumps training, and their relatively recent entry into the broader cloud ecosystem space (after coming out from under IBM’s wing at the end of 2021) means they have some way to go to have the depth and breadth of experience that other Premier Alliance Partners have today.

Ecosystm Opinion

In my recent interactions with Kyndryl, what sets them apart is that they are completely customer-focused. They start with a client problem and find the best solution for that problem. Yes – some of the “best solutions” will be partner-specific (such as SAP on Azure or VMware on AWS), but they aren’t pushing every customer down a specific path. They are not just an AWS partner, where every solution to every problem starts and ends with AWS. The importance of this new partnership is that it expands Kyndryl’s capabilities – and hence expands the possibilities and opportunities for Kyndryl clients to benefit from the best solutions in the market, regardless of whether they are on-premises or in one of the big three hyperscalers.

IoT is Your Next Data Silo – What Are You Going to Do About It?

Internet of Things (IoT) solutions require data integration capabilities to help business leaders solve real problems. The problem, Ecosystm research finds, is that more than half of all organisations see integration as a key challenge – right behind security (Figure 1). So, chances are, you are facing similar challenges.

Challenges of IoT Development

This should not be taken as a criticism of IoT; just a wake-up call for all those seeking to implement what has long been test-lab technology in an enterprise environment. I love absolutely everything about IoT. It is an essential technology. Contemporary sensor technologies are at the core of everything. It’s just that there are a lot of organisations not doing it right.

Like many technologists, I have been hooked on IoT since I first sat in a breakout session at AWS re:Invent in Las Vegas in 2015 and learned about MQTT protocols applied to any little thing – and how I could re-order laundry detergent or beer with an AWS button, that clumsy precursor to Alexa.

Parts of that presentation have stayed with me to this day. Predict and act. What business doesn’t want to be able to do that better? I can still see the room. I still have those notes. And I’m still working to help others embrace the full potential of this must-have enterprise capability.

There is no doubt that IoT is the Cinderella of smart cities – even of digital twinning. Without it, there is no story. It is critical to contemporary organisations because of the real-time decision-making data it can provide on significant (Industry 4.0) infrastructure and service investments. That is worth repeating: it is critical to supporting large-scale capital investments – and anyone who has been in IT for any length of time knows that justifying new IT investments to capital holders is the most elusive of business demands.

But it is also a bottom-up technology that requires a top-down business case – a challenge also faced by around 40% of organisations in the Ecosystm study – and a number of other architectural components to realise its full cost-benefit or capital growth potential. Let’s not quibble, IoT is fundamental to both operational and strategic data insights, but it is not the full story.

If IoT is the belle of the smart cities ball, then integration is the glass slipper that ties the whole story together. After four years as head of technology for a capital city deeply committed to the Smart City vision, if there was one area of IoT investment I was constantly wishing I had more of, it was integration. We were drowning in data but starved of the skills and technology to deliver true strategic insights outside of single-function domains.

IoT Quote

This reality in no way diminishes the value of IoT. Nor is it either a binary or chicken-and-egg question of whether to invest in IoT or integration. In fact, the symbiotic market potential for both IoT and integration solutions in asset-intensive businesses is not only huge but necessary.

IoT solutions are fundamental contemporary technologies that provide the opportunity for many businesses to do well in areas they would otherwise continue to do very poorly. They provide a foundation for digital enablement and a critical gateway to analytics for real-time and predictive decision making.
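
Much of the integration work described here starts with small mismatches: different device families reporting the same measurement in different shapes. A minimal sketch of normalising two hypothetical payload formats into one canonical record (the device formats are invented for illustration):

```python
import json

# Two hypothetical device families reporting the same physical quantity
# in different shapes - the classic seed of an IoT data silo.
payload_a = json.loads('{"dev": "th-01", "t_c": 21.4, "ts": 1700000000}')
payload_b = json.loads('{"deviceId": "TH-02", "tempF": 70.5, "time": 1700000300}')

def normalise(msg: dict) -> dict:
    """Map vendor-specific fields onto one canonical record."""
    if "t_c" in msg:  # family A reports Celsius
        return {"device": msg["dev"].upper(), "temp_c": msg["t_c"], "ts": msg["ts"]}
    # family B reports Fahrenheit
    return {
        "device": msg["deviceId"],
        "temp_c": round((msg["tempF"] - 32) * 5 / 9, 1),
        "ts": msg["time"],
    }

records = [normalise(p) for p in (payload_a, payload_b)]
print(records[0]["temp_c"], records[1]["temp_c"])  # 21.4 21.4
```

Multiply this by hundreds of device types and single-function domains, and the case for dedicated integration skills and tooling becomes clear.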

When applied strategically and at scale, IoT provides a magical technology capability. But the bottom line is that even magic technology can never carry the day when left to do the work of other solutions. If you have already plunged into IoT, then chances are it has already become your next data silo. The question now is: what are you going to do about it?

Ecosystm VendorSphere: Oracle’s Emergence as a Key Webscaler

Oracle is clearly prioritising a rapid expansion across the globe. The company is in a race to catch up with the big 3 (AWS, Google, and Microsoft), and recognises that many of its customers are eager to migrate to the cloud – and that they have other options. Oracle’s strategy appears to be to rely on third-party co-location providers for most of its data centres, and to build a single availability zone per region, at least to start.

Oracle Cloud Rollout Ramps Up

Let us consider the following:

  • Oracle’s network spending level puts it in the range of other webscalers. Focusing only on the Network and IT portion of their CapEx, Oracle has now passed Alibaba. Oracle is also ahead of both IBM and Baidu, which are included in the “All others” category in Figure 1.
Annualised Network & IT CAPEX through 3Q21, Top Webscalers
  • The coverage of the Oracle Cloud Infrastructure (OCI) is impressive. It has 36 regions today (some dedicated for government use), with a plan to reach 44 by year-end 2022. That compares to 27 overall for AWS, 65 for Azure, 29 for GCP; regional competitors Tencent and Huawei have 27 regions each, and Alibaba 25 regions. The downside is that Oracle has only one availability zone in most of its regions, while the Big 3 usually have 2 or 3 per region. Oracle needs to build out its local resiliency rapidly over the next year or two or risk losing business to the big 3, especially to AWS; but the company knows this and is budgeting CapEx aggressively to address the problem.
  • Oracle’s initial reliance on leased facilities may be an interim step. The rapid growth of AWS, Azure, and GCP in the late 2010s was a surprise, and Oracle started to see serious risks of losing customers to these cloud platforms. Building out its own cloud based on new data centres would have taken years and cost it business. So, Oracle did the smart thing and leaped into the cloud as fast as possible with the resources and time available. The company has scaled its OCI operations at an impressive rate. It expects capital expenditures to double YoY for the fiscal year ending May 2022, as it increases “data centre capacities and geographic locations to meet current and expected customer demand” for OCI.
  • Finally, Oracle has invested heavily in designing the servers to be installed in its data centres (even if most of them are leased). Oracle was an early investor in Ampere Computing, which makes Arm-based processors, sidestepping the Intel ecosystem. In May 2021, Oracle rolled out its first Arm-based compute offering, OCI Ampere A1 Compute, based on the Ampere Altra processor. Oracle says this allows OCI customers to run “cloud-native and general-purpose workloads on Arm-based instances with significant price-performance benefits.” Microsoft and Tencent also deploy the Ampere Altra in some locations.

Reaching Global Scale

Once Oracle decided to launch into the cloud, its goal was both to grow revenues and to protect its legacy base from slipping away to the Big 3, which already had a growing global footprint. Oracle chose to quickly build cloud regions in its key markets, with the understanding that it would have to fill out individual regions as time passed. This is not that different from the big 3, in fact, but Oracle started its buildout much later. It also has fewer availability zones per region.

Oracle has not ignored this disparity. It recognises that reliability is key for its clients in trusting OCI. For example, the company emphasises that:

  • Each Oracle Cloud region contains at least three fault domains, which are “groupings of hardware that form logical data centers for high availability and resilience to hardware and network failures.” Fault domains allow a customer to distribute instances so “the instances are not on the same physical hardware within a single availability domain.”
  • OCI has a network of 70 “FastConnect” partners which offer dedicated connectivity to OCI regions and services (comparable to AWS Direct Connect).
  • OCI and Microsoft Azure have a partnership allowing “joint customers” to run workloads across the two clouds, providing low latency, cross-cloud interconnect between OCI and Azure in eight specific regions. Customers can migrate existing applications or develop cloud native applications using a mix of OCI and Azure.
  • Oracle allows customers to deploy OCI completely within their own data centers, with Dedicated Region and Exadata Cloud@Customer, deploy cloud services locally with public cloud-based management, or deploy cloud services remotely on the edge with Roving Edge Infrastructure.
  • Further, Oracle clearly tries to differentiate around its Arm-based Ampere processors. Reliability is not necessarily the focus here, though; the main emphasis is contrasting Ampere with the x86 ecosystem on overall price-performance, with highlights on power efficiency, scalability, and ease of development.
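
As an illustration of the fault-domain concept described above, instances can be spread round-robin across the three fault domains within an availability domain, so consecutive instances avoid sharing the same hardware grouping. This sketch is purely illustrative – the names are invented, and actual placement is requested through OCI’s own APIs:

```python
from itertools import cycle

def place_instances(instances, fault_domains=("FD-1", "FD-2", "FD-3")):
    """Round-robin instances over fault domains within one availability domain,
    so no two consecutive instances land on the same hardware grouping."""
    return dict(zip(instances, cycle(fault_domains)))

placement = place_instances(["web-1", "web-2", "web-3", "web-4"])
print(placement)
# {'web-1': 'FD-1', 'web-2': 'FD-2', 'web-3': 'FD-3', 'web-4': 'FD-1'}
```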

Ultimately the market will decide whether Oracle’s approach makes it truly competitive with the big 3. The company continues to announce some big wins, including with Deutsche Bank, FedEx, NEC, Toyota, and Zoom. The latter is probably the company’s biggest cloud win given Zoom’s rise to prominence amidst the pandemic. Not surprisingly, Oracle’s recent Singapore cloud region launch was hosted by Zoom.

Conclusion

Over the long run, the webscale market is becoming concentrated in the hands of a few players; some companies tracked as webscalers, such as HPE and SAP, will fall by the wayside as they cannot keep up with the infrastructure spending required of a top player. Oracle is aiming to remain in the race, however. CEO Larry Ellison addressed this in an earnings call, arguing the global cloud market is not just the “big 3” (AWS, Azure, and GCP), but a “big 4”, due in part to Oracle’s database strengths. Ellison also argued that OCI is “much better for security, for performance, for reliability” and cost: “we’re cheaper.” The market will ultimately decide these things, but Oracle is off to a strong start. Its asset-light approach to network buildout, and limited depth within regions, clearly have drawbacks. But the company has a deep roster of long-term customers across many regions, and it is moving fast to secure their business as they migrate operations to the cloud.
