If we can run machine learning in the field, at the Edge, we reduce the time lag and also add an extra trusted layer in unmanned production or automated utilities scenarios. This makes these environments more resilient against potential threats to public services.
What kind of examples of machine learning in the field can we see?
Health systems can improve hospital patient flow through machine learning (ML) at the Edge. ML offers predictive models to assist decision-makers with complex hospital patient flow information based on near real-time data.
For example, an academic medical centre created an ML pipeline that leveraged all its data – patient administration, EHR, clinical and claims data – to build models that could predict length of stay, emergency department (ED) arrivals, ED admissions, aggregate discharges, and total bed census. These predictive models proved effective: the medical centre reduced patient wait times and staff overtime and was able to demonstrate improved patient outcomes. And for a medical centre that uses sensors to monitor patients and gather requests for medicine or assistance, Edge processing means keeping private healthcare data in-house rather than sending it off to cloud servers.
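As a hedged illustration of the kind of predictive model described above, the sketch below fits a toy length-of-stay regression in pure Python. The single ‘acuity score’ feature and all figures are hypothetical; a real pipeline would draw on the administration, EHR and claims data the medical centre used.

```python
# Illustrative sketch: predicting hospital length of stay (days) from a
# single admission feature (a hypothetical patient acuity score) with
# ordinary least squares. All data below is made up for illustration.

def fit_ols(xs, ys):
    """Fit y = a + b*x by ordinary least squares; return (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical training data: acuity score -> observed length of stay
acuity = [1, 2, 3, 4, 5]
los_days = [2, 3, 5, 6, 8]

a, b = fit_ols(acuity, los_days)
predicted = a + b * 3.5          # expected stay for an acuity-3.5 patient
print(round(predicted, 2))       # → 5.55
```

Ordinary least squares is the simplest possible choice here; the actual models behind ED arrival and bed census predictions would typically be richer (gradient-boosted trees, time-series models) but follow the same fit-then-predict shape.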
A retail store could use numerous cameras for self-checkout and inventory management and to monitor foot traffic. Streaming all of that interaction detail off site could slow down a network; an on-site Edge server can handle it with lower latency and a lower total cost. This is useful for standalone grocery pop-up sites such as those in Sweden and Germany.
In Retail, the k-nearest neighbours algorithm is often used for abnormal activity analysis – it can also support the visual pattern recognition that forms part of retailers’ loss prevention tactics.
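To make that concrete, here is a minimal k-nearest-neighbours sketch in pure Python for flagging abnormal checkout activity. The feature vectors (items scanned, seconds at the till) and labels are hypothetical; a real loss-prevention system would classify far richer visual features.

```python
# A minimal k-nearest-neighbours sketch for flagging abnormal checkout
# activity. Features are hypothetical: (items scanned, seconds at till),
# labelled "normal" or "abnormal".
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (features, label) pairs. Returns the majority label
    among the k nearest neighbours by Euclidean distance."""
    neighbours = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

training = [
    ((12, 90), "normal"), ((8, 70), "normal"), ((15, 110), "normal"),
    ((2, 300), "abnormal"), ((1, 250), "abnormal"), ((3, 280), "abnormal"),
]

# Very few items but a long dwell time looks like the abnormal cluster.
print(knn_classify(training, (2, 260)))   # → abnormal
```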
Working with the data locally at the Edge brings reduced latency, reduced cloud usage and costs, independence from a network connection, more secure data, and increased data privacy.
Cloud and Edge computing that uses machine learning can together provide the best of both worlds: decentralised local storage, processing and reaction, and then uploading to the cloud, enabling additional insights, data backups (redundancy), and remote access.
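The pattern can be sketched as follows: the Edge node reacts to readings locally and queues only a compact summary for the cloud. `upload_to_cloud` is a hypothetical stand-in for a real cloud SDK call.

```python
# Sketch of the edge-plus-cloud pattern: readings are processed locally
# (low latency, data stays on site) and only a compact summary is queued
# for upload to the cloud, where further insight and backup happen.

def process_on_edge(readings, threshold=90.0):
    """React locally to each reading; return a summary for the cloud."""
    alerts = [r for r in readings if r > threshold]  # immediate local action
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "alerts": len(alerts),
    }

def upload_to_cloud(summary, queue):
    """Hypothetical stand-in for a cloud SDK call; batches for later sync."""
    queue.append(summary)

queue = []
upload_to_cloud(process_on_edge([71.5, 95.2, 88.0, 91.3]), queue)
print(queue[0]["alerts"])   # → 2
```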
IBM announced its intention to spin off its infrastructure services business as a separate public company, allowing Big Blue to focus on hybrid cloud and AI. The newly formed entity, temporarily named NewCo, will offer project and outsourcing services that currently fall under its GTS business unit. NewCo will have a staff of around 90,000 employees and is expected to earn revenue of about $19B. While GTS has experienced declining revenue for some time now, IBM believes that the split will unlock growth and put it on a path to recovery.
Once the Red Hat acquisition closed last year and the tag team of Jim Whitehurst and Arvind Krishna was announced, it became clear that IBM was gearing up to become a leaner, more agile leader in the hybrid cloud space. One of two possible courses seemed apparent – either wither away for years until IBM was small enough to become nimble, or take bold action. IBM has opted for the latter and is likely to be rewarded for it. The new IBM will have revenue of around $59B, well short of its peak at over $100B, but sacrificing turnover for margin and growth gives it a more positive long-term outlook.
Stripping back IBM to become smaller, faster growing, and more profitable will help solve many of its greatest challenges. Significant investment into growth segments will become more palatable without the financial burden of the declining infrastructure services unit. The much-needed cultural change and drive to think like a start-up will become more practical in the new IBM.
NewCo to Build New Cloud Partnerships
IBM’s infrastructure services unit has had some great success in larger, complex, hybrid cloud deals recently – but at the lower end of the market there have been many headwinds. Public cloud providers have eroded what was once a lucrative compute and storage services market. At the same time, application service providers like Accenture, TCS, and HCL have been pivoting towards infrastructure. Untethering infrastructure services makes a turnaround story more likely, giving NewCo greater flexibility and speed, which clients have been crying out for.
The greatest benefit to NewCo will be the ability to freely partner with other cloud providers, like AWS, Microsoft, and Google. Although IBM has made noises about being willing to embrace its competitors, this was not necessarily implemented on the ground nor was it reciprocated.
It is no secret that GTS and GBS have had a rocky relationship since day one. The split will reassure clients that each of them is agnostic and relieve any internal pressure to partner unless it is best for the client. While elements of this decision look like the unfolding of a long-term strategy that began under Ginni Rometty, it does, however, leave open the question of why GTS and GBS were more closely integrated over the last few years. This also means IBM is moving in the opposite direction to its competitors, who are shifting towards offerings that cover the full stack of services from infrastructure up to applications.
What Lies Ahead for IBM
One detail that is not immediately certain is the fate of IBM security services, which could be integrated with security software at IBM, spun out with the rest of infrastructure services, or even split into consulting and delivery. An important differentiator for IBM has been its ability to build in security at the beginning of transformation projects, making its final placement a difficult decision.
It might be tempting to predict that IBM would next spin off or sell its Systems unit and Support Services, although Mr. Krishna has ruled that out. Over the long term, both are financially underperforming units, but there is an advantage to building the core infrastructure that critical workloads run on.
Each new IBM CEO has had a make-or-break moment, and Mr. Krishna has decided that his will come early. For the company to thrive for another 100 years it needed to place a big bet, and it could not have come soon enough.
Why should a CEO get involved in and have visibility into an organisation’s Cloud investments? There are a few important reasons.
#1 Cloud is not a cost-saving measure – it will enable you to transform
Organisations have matured in their Cloud adoption and no longer evaluate the benefits of Cloud only in terms of shifting CapEx to OpEx. If we look at the benefits of Cloud adoption, reduction of IT costs is not even in the top 3 benefits that organisations are seeking from Cloud anymore. Operational efficiency and collaboration emerge as key benefits (Figure 1) – while some companies still move to the Cloud for the savings, they stay there for other benefits.
This requires organisations to think of Cloud as a technology empowering their infrastructure and services. Cloud acts as an enabler for ease of doing business, real-time data access for productivity increases, and process automation. This impacts the entire organisation. It also involves prioritising the needs of certain functions over others – a call a CIO should not have to make alone.
If we look at just Cloud storage as an example, organisations can no longer let individual functions and their associated shadow IT teams run their own Cloud storage (and collaboration) tools. This often turns out to be more expensive and leaves no consolidated view or management. While organisations forge ahead with the dream of real-time information sharing across functions, a CIO has to consider the entire organisation’s technological and business needs – and a CEO is the best person to guide the CIO in translating the organisation’s vision into IT priorities.
#2 In fact Cloud adoption may not cut costs at all!
Organisations are also re-evaluating the cost benefits of Cloud. Investing in a Cloud infrastructure with a short-term view on the investments involved has led to instances of Cloud solutions being brought back in-house because of rising costs. While security, data privacy and integration remain the key challenges of Cloud adoption (Figure 2), over a third of organisations find Cloud more expensive than traditional licensing or owning the hardware.
Organisations find that the cost considerations do not stop after the adoption or migration. As businesses use Cloud to scale, there are several aspects that require constant re-evaluation and often further investment – cybersecurity measures, continuous data protection (CDP), disaster recovery management, rightsizing capacity, software and database licences and day-to-day maintenance, to name a few. In addition, the cost of finding and recruiting a team of professionals to manage and maintain the Cloud environment also adds to the OpEx.
If the CIO is talking about a Cloud migration for cost benefits only, the CEO and the CFO need to step in to verify that all factors have been taken into consideration. Moreover, the CIO may not have full visibility of how and where the organisation is looking to scale up or down. It is the CEO’s responsibility to share that vision with the CIO to guide Cloud investments.
#3 Cloud will increasingly be part of all tech adoption considerations
In this disruptive world, CEOs should explore possibilities and understand the technical capabilities which can give organisations an edge over their competitors. It is then up to the CIOs to implement that vision with this larger context in mind. As organisations look to leverage emerging technologies, they will adopt Cloud to optimise their resources and workloads.
AI is changing the way organisations need to store, process and analyse data to derive useful insights and support decision-making. This is pushing the adoption of Cloud, even in the most conservative organisations. Cloud is no longer required only for infrastructure and back-up – it now actively improves business processes by enabling real-time data and systems access. Similarly, IoT devices will grow exponentially. Today, data already flows into the Cloud and data centres in real time from sensors and automated devices. However, as these devices become bi-directional, decisions will need to be made in real time as well. Edge Computing will be essential in this intelligent and automated world. Cloud platform vendors are building on their edge solutions, and tech buyers are increasingly getting interested in the Edge, which allows better decision-making through machine learning and AI.
In view of the recent global crisis, we will see a sharp uptake of Cloud solutions across tech areas. IaaS will remain the key area of focus in the near future, especially Desktop-as-a-Service. Organisations will also look to evaluate more SaaS solutions, in order to empower a mobile and remote workforce. This will allow the workforce of the future to stay connected and informed, and to make better decisions. More than ever, CEOs have to drive business growth with innovative products and services – not understanding the capabilities and challenges of Cloud adoption, and the advancements in the technology, can be a serious handicap for CEOs.
#4 Your IT Team may be more complacent about Cloud security than you think
Another domain that requires the CEO’s attention is cybersecurity. The Cloud is used for computing operations and to store data, including intellectual property, financial information, employee details and other sensitive data. Cybersecurity breaches have immense financial and reputational implications, and IT Teams cannot be solely responsible for preventing them. Cybersecurity has become a Board-level conversation, and many organisations are employing a Chief Information Security Officer (CISO) who reports directly to the CEO. Cybersecurity is an aspect of an organisation’s risk management program.
Evaluating the security features of the Cloud offerings, therefore, becomes an important aspect of an IT decision-maker’s job. While security remains a key concern when it comes to Cloud adoption, Cloud is often regarded as a more secure option than on-premise. Cloud providers have dedicated security focus, constantly upgrade their security capabilities in response to newer threats and evolve their partner ecosystem. There is also better traceability with the Cloud as every virtual activity can be tracked, monitored, and logged. Ecosystm research finds that more than 40% of IT decision-makers think the Public Cloud has enough security measures and does not need complementing (Figure 3).
However, the Cloud is as secure as an organisation makes it. The perception that there is no need to supplement Public Cloud security features can have disastrous outcomes. It is important to supplement the Cloud provider’s security with event-driven security measures within an organisation’s applications and cloud interface.
It is the job of the CEO – through the CISO – to evaluate how cyber ready the IT Team really is. Do they know enough about shared responsibility? Do they have full cognizance of the SLAs of their Cloud providers? Do they have sufficient internal cybersecurity skills? Do they understand that data breaches can have cost and reputational impacts? As cybersecurity breaches begin to have more financial implications than ever and can derail an organisation, a CEO should have visibility of the risks of the organisation’s Cloud adoption.
Cloud is no longer just a technological decision – it is a business decision and takes into account the organisation’s vision. A full visibility of the Cloud roadmap – including the pitfalls, the risks and the immense potential – will empower a CEO immensely.
For more insights from our Cloud Research, click below
Open Source has always played a big role in Cloud and as we move into 2020, its role will only grow bigger. Infrastructure and Platform Clouds may not have been dominated by Open Source in the past, but all major Cloud players such as AWS, Microsoft, and IBM have been focusing on Open Source recently.
In 2020, major cloud providers will lean further on Open Source to create development environments that can achieve more than they could build fully in-house.
Open Source will Drive the Adoption of Serverless Computing
Serverless computing is a cloud model where the Cloud provider runs and manages the server and the allocation of machine resources. Beyond just the individual servers, many vendors are now offering to totally replace the data centre with a virtual version that runs in the Cloud.
It has the potential to become a widely used solution for mid-range businesses. A dynamically scaled and priced data centre could offer them a much more flexible IT environment where they do not have to worry about capacity planning and scaling – even when compared to a more traditional Cloud environment.
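To make the model concrete, here is a minimal serverless-style function. The `event`/`context` signature follows the common AWS Lambda convention for Python, used purely as an illustration: the function holds only business logic, while provisioning, scaling and per-invocation billing are left to the provider.

```python
# Minimal sketch of the serverless model: the provider invokes a small
# handler on demand and manages all server allocation. The event shape
# and signature follow the common AWS Lambda convention for Python.
import json

def handler(event, context=None):
    """Pure business logic; the platform provisions and scales the servers."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation for testing; in production the provider calls handler().
response = handler({"name": "Ecosystm"})
print(response["statusCode"])   # → 200
```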
Cloud and IoT Will Drive Edge Computing
Edge computing has been widely touted as a necessary component of a viable 5G setup, as it offers a more cost-effective and lower-latency setup than more traditional infrastructure. Also, with IoT being a major part of the business case behind 5G, the number of connected devices and endpoints is set to explode in the coming years, potentially overloading an infrastructure based fully on data centres for processing the data.
Although some are touting Edge computing as the ultimate replacement of Cloud, we believe it will be complementary rather than a competing technology – at least in the foreseeable future and certainly as far as 2020 is concerned. Edge computing will allow Cloud providers to better cater to companies that need low latency, quick access to data and data processing. On the mobile side, it will allow them to push workloads to the handset, reducing the backend workload and potentially enhancing data privacy.
‘Cloud Creep’ Will Get Worse Before It Gets Better
What we have previously referred to as “shadow IT” is rapidly spreading – and worsening – as organisations move to the Cloud, and many are now suffering from what is referred to as “cloud creep”. There are several implications of cloud creep – the most significant being security issues and cost.
As the use of stand-alone Cloud services grows, the risk of sensitive data being used, stored and shared in non-compliant ways increases. As for cost, while LoB sourcing of Cloud services may ease the strain on the IT department’s budget because the money comes from a different “bucket”, it makes it very difficult to get a true picture of the organisation’s total IT spend, and it may mean that consolidation benefits from well-managed sourcing and usage cannot be achieved. A third and increasingly important factor is energy footprint and savings. A fall-out from cloud creep is increased “server sprawl”, whereby virtual machines (VMs) and applications remain under-utilised, leading to poor productivity and efficiency.
Alliances and Partnerships Will be Formed as the Battle for the Top Heats Up
The global Cloud market has been consolidating around 5 players: AWS, Microsoft, Alibaba, Google and IBM. Nothing really suggests that this trend will change in 2020.
But even for the current top 5 players, their ability to compete will increasingly come down to their ability to expand their service capabilities beyond their current offerings. Ecosystm expects these players to further enhance their focus on expanding their services, management and integration capabilities through global and in-country partnerships. One particular area for partnership might be Cloud migration – between Clouds and from Cloud back to on-premises.
Download Report: The Top 5 Cloud Trends For 2020
The full findings and implications of the report ‘Ecosystm Predicts: The Top 5 Cloud Trends For 2020’ are available for download from the Ecosystm website. Sign up for free to download the report and gain insight into the top 5 Cloud trends for 2020, implications for tech buyers, implications for tech vendors, insights, and more resources. Download link below.
Critical Communications World 2019 – TCCA’s largest event in global public safety communication – was held in Kuala Lumpur in June. Mission-critical communications are essential to maintaining safety and security across a range of situations, from daily operations to extreme events including disaster recovery. A UN report estimated that economic losses from natural disasters could reach USD 160 billion annually by 2030.
I attended the event as a guest of Motorola Solutions – one of the leaders in this field. Many people associate Motorola only with phones, not knowing that they have been the cornerstone of some of the largest critical communications deployments around the globe. For instance, Victoria Police completed its AUD 50M+ rollout of Motorola Solutions managed services, giving almost 10,000 police officers across Victoria access to mobile devices loaded with smart apps and data when and where they need them most.
Motorola’s ability to provide customers with a private network which is secure, robust and redundant in the event of disaster has also been one of the reasons for its success in the industry. In the event of natural disasters or terrorist attacks, situations can arise where networks will not be available to send and transport information. Having a secure and private network is critical. That explains why some of the largest police departments in Asia work with Motorola, including those of Singapore, Malaysia and Indonesia.
Motorola acquired Australian mobile application developer Gridstone in 2016 and Avigilon, an advanced video surveillance and analytics provider in 2018. These acquisitions demonstrate how Motorola is innovating in the areas of software, video analytics and AI.
Public Safety Moving to a Collaborative Platform with AI and Machine Learning
Andrew Sinclair, Global Software Chief for Motorola Solutions, sees AI enhancing future command and control centres and allowing greater analytics of emergency calls. Call histories and transcriptions, the incident management stack, community engagement data and post-incident reporting are all important elements for command and control centres. Using AI to sift through the information will empower the operator with the right data to make the right on-the-spot decisions.
The Avigilon acquisition enhances Motorola’s AI capabilities: less time is spent monitoring videos, giving first responders more time to do their jobs. The AI technology can make “sense” of the information by using natural language technology. For example, if asked to find a child in a red t-shirt, the cameras can detect the child and also create a fingerprint of the child. The solution enables faster incident detection by using an edge computing platform, gathering the information and routing it to the relevant agencies, making search operations faster and more streamlined. The application of AI in the video monitoring space is still in its early days and the potential ahead for this technology is enormous.
Another area that can better empower first responders is voice-activated devices. The popularity of Alexa and Echo in the consumer world will drive greater innovation in their application to public safety solutions. For example, police officers responding to an emergency may have very little time to look at screens or attend to applications that require touching or pressing a button, as time and attention are essential in such scenarios. The application of voice-activated devices will be critical to easing the job of the police officer on the ground. This will not only save administrative work on activities such as transcription, but also help in creating better accounts of the actual events for potential court proceedings.
While it is still early days for a full-fledged AR deployment in public safety, there are potential use cases. For example, firefighters standing outside a building could use AR to make sense of the surrounding area and send information back to the command and control centres.
The Growth of Cloud-driven Collaboration
Seng Heng Chuah, VP for Motorola APJ, talked about the importance of all public safety agencies being more open and collaborative. For instance, most ambulance, police and fire departments currently work in silos and have their own apps and legacy systems. To achieve the Smart City or Safe City concept, collaborating and sharing information on one common platform will be key. He talked about the “Home Team” concept that the Singapore Government has implemented. Allowing all agencies to collaborate and share information will mean the ability to make faster decisions during a catastrophe. Making “sense” of the IoT, voice and video data will be important areas of innovation. Normally when a disaster happens, operators at command and control centres – as well as onsite staff – face elevated stress levels, and accurate information can help alleviate that.
The move towards the public cloud is also becoming more relevant for agencies. In the past there was resistance, and it was always about having the data on their own premises. In recent years more public safety agencies are embracing the cloud. When you have vast amounts of data from video, IoT devices and other data sources, it becomes expensive for public safety agencies to store the data on-premises. Seng Heng talked about how public safety agencies are starting to “trust” the cloud more now. According to him, Microsoft has done a good job of working with local governments around the world, and their government clouds have many layers of certifications as well as a strong data centre footprint across countries. Collaboration between agencies – and, more importantly, agencies embracing the cloud – will drive greater efficiency in analysing, transcribing and storing data.
The Rise of Outcome-based, Services-led Opportunities
Steve Crutchfield, VP of Motorola Solutions for ANZ, talked about how Motorola is a services-led business in the ANZ market: 45% of Motorola’s business in ANZ is comprised of managed services. The ANZ region is unique, as its organisations are seen as early adopters and innovators in public safety implementations. Organisations approach Motorola for the outcomes. Police and ambulance services in the state of Victoria, for example, use their services on a consumption model. Customers across Mining, Transportation, and Emergency Services want an end-to-end solution across the network, voice, video and analytics.
The need for a private and secure network is significant in several industries. In mining, safety is the priority: as soon as the radio goes down it impacts productivity, and when production stops that can result in huge losses for the mines. Hence a reliable, secure private network for transporting voice and video communication is critical.
Crutchfield talked about how the partner ecosystem is evolving with Motorola working with partners such as Telstra and Orion but increasingly looking for specialised line of business partners and data aggregation partners. Motorola works with 55 channel partners in the region.
Motorola Solutions is an established player in providing an end-to-end solution in the critical communications segment. The company is innovating in the areas of software and services coupled with the application of AI. Dr Mahesh Saptharishi, CTO at Motorola Solutions talked about how AI will eventually evolve into “muscle memory”. That will mean that there is far greater “automatic’’ intelligence in helping the first responders make critical decisions when faced with a tough situation.
In the end, the efficacy of critical communications solutions will depend not just on the technology stack, but on the desire and ability to collaborate across agencies. As public safety agencies analyse large volumes of data from the network right up to the applications, they will have to embrace the cloud, which will help them achieve scale and security when storing information. From the discussions, it was clear that public safety agencies have started acknowledging the need to do so and we can expect that shift to happen soon.
Motorola will need to keep evolving their channel partner model and start partnering with new providers that can help in delivering some of the end-to-end capabilities across Mobility, AI, software, analytics and IoT. Many of their traditional partners may not be able to be that provider as the company evolves into driving end-to-end intelligent data services for their clients. The company is playing in a unique space with very few competitors that can offer the breadth and depth of critical communications solutions.
For example, Florida Virtual School is a full-time online school providing virtual K-12 education to students all over the world. It is a recognised eLearning school and provides custom solutions to meet students’ requirements. This model is being replicated globally especially in remote areas where an actual school premise may not be feasible or is too expensive.
Research & Experimentation
The remote handling of projects and experiments is enabling education institutions to overcome the challenge of carrying them out in a controlled and safe environment. ChemCollective, a project of the National Science Digital Library in the US, provides a flexible learning environment in which students can access online chemistry labs to apply formulas, perform experiments and learn in realistic and engaging ways, like working scientists.
Open Education Resources
Cloud is enabling the development of open source content for schools and colleges. The challenge with the existing books and lectures is that they get dated. Cloud is enabling a wealth of content through open repositories and legal protocols to allow a community to collaborate and update the information. Open educational resources (OERs) are developed and can be modified by the creators and administrators. The community can contribute to maps, slides, worksheets, podcasts, syllabi or even textbooks. The copyright is associated via legal tools such as Creative Commons licenses, so others can freely access, reuse, translate, and modify them.
As textbooks and course material can now be updated in real time and offered through a cloud-based subscription model, this opens up new streams of revenue for publishers. However, it also raises the concern that textbook prices are increasing while students have no option to purchase second-hand books or to sell books once they are done with them.
Massive Open Online Course (MOOC) platforms provide both content to students in areas of personal interest and additional sources of revenue to renowned global institutions. A quick look at Coursera’s website shows online courses from reputed institutions such as MIT and Johns Hopkins University. There are still providers, such as the Khan Academy, that do not actively monetise the material they provide, but increasingly institutions look at MOOCs to generate more revenue – by offering remote learning options to individuals, and by collaborating with local universities to make their courses available to overseas students, a previously untapped market.
Cloud computing is transforming classrooms and learning experiences in the ways educators, curriculum leads, and specialists recommend. The technology has a huge role to play in enabling transformation in Education – for national education systems, for educational institutions, and ultimately for the students.
How else do you think Cloud can transform the education industry? Let us know in your comments below.
A new high-speed CPU-to-device interconnect standard, the Compute Express Link (CXL) 1.0, was announced by Intel and a consortium of leading technology companies (Huawei and Cisco in the network infrastructure space, HPE and Dell EMC in the server hardware market, and Alibaba, Facebook, Google and Microsoft among cloud service providers). CXL joins a crowded field of standards already in the server link market, including CAPI, NVLink, Gen-Z and CCIX. CXL is positioned to improve the performance of the links between the CPU and FPGAs or GPUs, the most common accelerators involved in ML-like workloads.
Of course, some names were absent from the launch – Arm, AMD, Nvidia, IBM, Amazon and Baidu. Each of them is a member of one or more of the other standards bodies and is probably playing a waiting game.
Now let’s pause for a moment and look at the other announcement that happened at the same time. Nvidia and Mellanox announced that the two companies had reached a definitive agreement under which Nvidia will acquire Mellanox for $6.9 billion. Nvidia stated the rationale for the acquisition as follows: “The data and compute intensity of modern workloads in AI, scientific computing and data analytics is growing exponentially and has put enormous performance demands on hyperscale and enterprise datacenters. While computing demand is surging, CPU performance advances are slowing as Moore’s law has ended. This has led to the adoption of accelerated computing with Nvidia GPUs and Mellanox’s intelligent networking solutions.”
So to me it seems that, despite working on CXL for four years, Intel might have been outbid by Nvidia for Mellanox. Mellanox has been around for 20 years and was the major supplier of InfiniBand, a high-speed interconnect that is common in high-performance workloads and very well accepted by the HPC industry. (Note: Intel was also one of the founders of the InfiniBand Trade Association, IBTA, before it opted to refocus on the PCI bus.) With the growing need for fast links between accelerators and microprocessors, it would seem that Mellanox’s persistence has paid off and the market is now coming to it. One can’t help but think that as soon as Intel knew that Nvidia was getting Mellanox, it pushed forward with the CXL announcement – a rumour to which none of the parties has responded.
Advice for Tech Suppliers:
The two announcements are great for any vendor entering the compute-intensive world of AI, built on graphics and floating-point arithmetic. We know that more digital-oriented solutions are asking for analytics-based outcomes, so there will be growing demand for broader commoditised server platforms to support them. Tech suppliers should avoid committing to either CXL or InfiniBand for the moment, until we see how the CXL standard evolves and how Nvidia integrates Mellanox.
Advice for Tech Users:
These two announcements reflect innovation that is generally so far away from the end user that it can go unnoticed. However, think about how USB (Universal Serial Bus) has changed the way we connect devices to our laptops, servers and other mobile devices. The same will be true of these interconnects as more and more data is read, and outcomes generated, by the ‘accelerators’ behind the way we drive our cars, digitise our factories, run our hospitals, and search the Internet. Innovation in this space just got a shot in the arm from these two announcements.