Building a Data-Driven Foundation to Super Charge Your AI Journey

AI has become a business necessity today, catalysing innovation, efficiency, and growth by transforming extensive data into actionable insights, automating tasks, improving decision-making, boosting productivity, and enabling the creation of new products and services.

Generative AI stole the limelight in 2023 given its remarkable advancements and potential to automate various cognitive processes. Now, however, the real opportunity lies in leveraging this heightened attention to shine the AI lens on all business processes and capabilities. As organisations grasp the potential for productivity enhancements, accelerated operations, improved customer outcomes, and enhanced business performance, investment in AI capabilities is expected to surge.

In this eBook, Ecosystm VP Research Tim Sheedy and HPE APAC’s Vinod Bijlani and Aman Deep share their insights on why it is crucial to establish tailored AI capabilities within the organisation.

Click here to download the eBook “AI-Powered Enterprise: Building a Data Driven Foundation To Super Charge Your AI Journey”

Shifting Perspectives: Generative AI’s Impact on Tech Leaders

Over the past year, many organisations have explored Generative AI and LLMs, with some successfully identifying, piloting, and integrating suitable use cases. As business leaders push tech teams to implement additional use cases, the repercussions on their roles will become more pronounced. Embracing GenAI will require a mindset reorientation, and tech leaders will see substantial impact across various ‘traditional’ domains.

AIOps and GenAI Synergy: Shaping the Future of IT Operations

When discussing AIOps adoption, there are commonly two responses: “Show me what you’ve got” or “We already have a team of Data Scientists building models”. The former usually indicates executive sponsorship without a specific business case, resulting in a lukewarm response to many pre-built AIOps solutions due to their lack of a defined business problem. On the other hand, organisations with dedicated Data Scientist teams face a different challenge. While these teams can create impressive models, they often face pushback from the business because the solutions do not always address operational or business needs. The challenge arises from Data Scientists’ limited understanding of the data, which hinders the development of use cases that effectively align with business needs.

The most effective approach lies in adopting an AIOps Framework. Incorporating GenAI into AIOps frameworks can enhance their effectiveness, enabling improved automation, intelligent decision-making, and streamlined operational processes within IT operations.

This allows active business involvement in defining and validating use cases, while enabling Data Scientists to focus on model building. It bridges the gap between technical expertise and business requirements, ensuring AIOps initiatives are informed by the capabilities of GenAI, address specific operational challenges, and resonate with the organisation’s goals.

The Next Frontier of IT Infrastructure

Many companies adopting GenAI are openly evaluating public cloud-based solutions like ChatGPT or Microsoft Copilot against on-premises alternatives, grappling with the trade-offs between scalability and convenience versus control and data security.

Cloud-based GenAI offers easy access to computing resources without substantial upfront investments. However, companies face challenges in relinquishing control over training data, potentially leading to inaccurate results or “AI hallucinations,” and concerns about exposing confidential data. On-premises GenAI solutions provide greater control, customisation, and enhanced data security, ensuring data privacy, but require significant hardware investments due to unexpectedly high GPU demands during both the training and inferencing stages of AI models.

Hardware companies are focusing on innovating and enhancing their offerings to meet the increasing demands of GenAI. The evolution and availability of powerful and scalable GPU-centric hardware solutions are essential for organisations to effectively adopt on-premises deployments, enabling them to access the necessary computational resources to fully unleash the potential of GenAI. Collaboration between hardware development and AI innovation is crucial for maximising the benefits of GenAI and ensuring that the hardware infrastructure can adequately support the computational demands required for widespread adoption across diverse industries. Innovations in hardware architecture, such as neuromorphic computing and quantum computing, hold promise in addressing the complex computing requirements of advanced AI models.

The synchronisation between hardware innovation and GenAI demands will require technology leaders to re-skill themselves on what they have done for years – infrastructure management.

The Rise of Event-Driven Designs in IT Architecture

IT leaders traditionally relied on three-tier architectures – presentation for the user interface, application for logic and processing, and data for storage. Despite their structured approach, these architectures often lacked scalability and real-time responsiveness. The advent of microservices, containerisation, and serverless computing facilitated event-driven designs, enabling dynamic responses to real-time events and enhancing agility and scalability. Event-driven designs are a paradigm shift away from traditional approaches, decoupling components and using events as the central communication mechanism. User actions, system notifications, or data updates trigger actions across distributed services, adding flexibility to the system.
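
To make the decoupling idea concrete, here is a minimal sketch of an in-process event bus in Python – illustrative only, with hypothetical event names – showing how publishers and subscribers interact without knowing about each other:

```python
# Minimal in-process event bus illustrating event-driven decoupling:
# producers emit events; subscribers react without the producer knowing
# who is listening. Event names and payloads are hypothetical.
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], Any]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], Any]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Each handler runs independently; components stay decoupled.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
bus.subscribe("order.created", lambda e: print(f"Notify warehouse: {e['order_id']}"))
bus.subscribe("order.created", lambda e: print(f"Send confirmation to {e['email']}"))
bus.publish("order.created", {"order_id": "A123", "email": "jane@example.com"})
```

In a distributed system the same pattern runs over a message broker rather than in-process, but the contract is identical: components communicate only through events.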

However, adopting event-driven designs presents challenges, particularly in higher transaction-driven workloads where the speed of serverless function calls can significantly impact architectural design. While serverless computing offers scalability and flexibility, the latency introduced by initiating and executing serverless functions may pose challenges for systems that demand rapid, real-time responses. Increasing reliance on event-driven architectures underscores the need for advancements in hardware and compute power. Transitioning from legacy architectures can also be complex and may require a phased approach, with cultural shifts demanding adjustments and comprehensive training initiatives.  

The shift to event-driven designs challenges IT Architects, whose traditional roles involved designing, planning, and overseeing complex systems. With GenAI and automation enhancing design tasks, Architects will need to transition to more strategic and visionary roles. GenAI showcases capabilities in pattern recognition, predictive analytics, and automated decision-making, promoting a symbiotic relationship with human expertise. This evolution doesn’t replace Architects but signifies a shift toward collaboration with AI-driven insights.

IT Architects need to evolve their skill set, blending technical expertise with strategic thinking and collaboration. This changing role will drive innovation, creating resilient, scalable, and responsive systems to meet the dynamic demands of the digital age.

Whether your organisation is evaluating or implementing GenAI, the need to upskill your tech team remains imperative. The evolution of AI technologies has disrupted the tech industry, impacting people in tech. Now is the opportune moment to acquire new skills and adapt tech roles to leverage the potential of GenAI rather than being disrupted by it.

Accelerate AI Adoption: Guardrails for Effective Use

“AI Guardrails” are often used as a method not only to get AI programs on track, but also to accelerate AI investments. Projects and programs that fall within the guardrails should be easy to approve, govern, and manage – whereas those outside the guardrails require further review by a governance team or approval body. The concept of guardrails is familiar to many tech businesses; they are often applied in areas such as cybersecurity, digital initiatives, data analytics, governance, and management.

While guidance on implementing guardrails is common, organisations often leave the task of defining their specifics, including their components and functionalities, to their AI and data teams. To assist with this, Ecosystm has surveyed some leading AI users among our customers to get their insights on the guardrails that can provide added value.

Data Security, Governance, and Bias

  • Data Assurance. Has the organisation implemented robust data collection and processing procedures to ensure data accuracy, completeness, and relevance for the purpose of the AI model? This includes addressing issues like missing values, inconsistencies, and outliers (see the sketch after this list).
  • Bias Analysis. Does the organisation analyse training data for potential biases – demographic, cultural and so on – that could lead to unfair or discriminatory outputs?
  • Bias Mitigation. Is the organisation implementing techniques like debiasing algorithms and diverse data augmentation to mitigate bias in model training?
  • Data Security. Does the organisation use strong data security measures to protect sensitive information used in training and running AI models?
  • Privacy Compliance. Is the AI opportunity compliant with relevant data privacy regulations (country and industry-specific as well as international standards) when collecting, storing, and utilising data?
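
As a starting point for the Data Assurance guardrail, here is a minimal profiling sketch in Python with pandas – the column names, data, and thresholds are hypothetical – that flags missing values, duplicate rows, and simple IQR-based outliers:

```python
# Illustrative data-assurance checks with pandas: missing values,
# duplicates, and first-pass outlier flags. Columns are hypothetical.
import pandas as pd

def profile_training_data(df: pd.DataFrame, numeric_cols: list[str]) -> dict:
    report = {
        "rows": len(df),
        "missing_per_column": df.isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
    }
    outliers = {}
    for col in numeric_cols:
        q1, q3 = df[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        # IQR rule: a common first-pass flag, not a final judgement
        mask = (df[col] < q1 - 1.5 * iqr) | (df[col] > q3 + 1.5 * iqr)
        outliers[col] = int(mask.sum())
    report["outliers_per_column"] = outliers
    return report

df = pd.DataFrame({"age": [25, 31, 29, 120, None],
                   "income": [50e3, 52e3, 49e3, 51e3, 48e3]})
print(profile_training_data(df, numeric_cols=["age", "income"]))
```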

Model Development and Explainability

  • Explainable AI. Does the model use explainable AI (XAI) techniques to understand and explain how AI models reach their decisions, fostering trust and transparency? (A sketch follows this list.)
  • Fair Algorithms. Are algorithms and models designed with fairness in mind, considering factors like equal opportunity and non-discrimination?
  • Rigorous Testing. Does the organisation conduct thorough testing and validation of AI models before deployment, ensuring they perform as intended, are robust to unexpected inputs, and avoid generating harmful outputs?
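
One common XAI technique is attributing each prediction to its input features. A minimal sketch using the SHAP library (assumed installed), with synthetic regression data standing in for a real training set:

```python
# Illustrative XAI sketch: SHAP values attribute each prediction to the
# input features, making a tree model's decisions inspectable.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # explainer specialised for tree models
sv = explainer.shap_values(X[:20])      # (20, 5): per-feature contribution per prediction
importance = np.abs(sv).mean(axis=0)    # mean |contribution| as a simple global view
print({f"feature_{i}": round(float(v), 3) for i, v in enumerate(importance)})
```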

AI Deployment and Monitoring

  • Oversight Accountability. Has the organisation established clear roles and responsibilities for human oversight throughout the AI lifecycle, ensuring human control over critical decisions and mitigation of potential harm?
  • Continuous Monitoring. Are there mechanisms to continuously monitor AI systems for performance, bias drift, and unintended consequences, addressing any issues promptly? (A drift-check sketch follows this list.)
  • Robust Safety. Can the organisation ensure AI systems are robust and safe, able to handle errors or unexpected situations without causing harm? This includes thorough testing and validation of AI models under diverse conditions before deployment.
  • Transparency Disclosure. Is the organisation transparent with stakeholders about AI use, including its limitations, potential risks, and how decisions made by the system are reached?
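
Continuous monitoring can start simple. This sketch compares a live score distribution against a training-time baseline using a two-sample Kolmogorov-Smirnov test from scipy; the data, window sizes, and alert threshold are hypothetical:

```python
# Illustrative drift check: compare live model scores against a baseline
# distribution captured at validation time.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.60, 0.10, 5000)  # scores at validation time
live_scores = rng.normal(0.55, 0.12, 1000)      # scores from the latest window

stat, p_value = ks_2samp(baseline_scores, live_scores)
if p_value < 0.01:  # hypothetical alert threshold
    print(f"Possible drift (KS={stat:.3f}, p={p_value:.2e}) - trigger review")
else:
    print("No significant drift in this window")
```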

Other AI Considerations

  • Ethical Guidelines. Has the organisation developed and adhered to ethical principles for AI development and use, considering areas like privacy, fairness, accountability, and transparency?
  • Legal Compliance. Has the organisation created mechanisms to stay updated on and compliant with relevant legal and regulatory frameworks governing AI development and deployment?
  • Public Engagement. What mechanisms are in place to encourage open discussion and engage with the public regarding the use of AI, addressing concerns and building trust?
  • Social Responsibility. Has the organisation considered the environmental and social impact of AI systems, including energy consumption, ecological footprint, and potential societal consequences?

Implementing these guardrails requires a comprehensive approach that includes policy formulation, technical measures, and ongoing oversight. It might take a little longer to set up this capability, but in the mid to longer term, it will allow organisations to accelerate AI implementations and drive a culture of responsible AI use and deployment.

How Green is Your Cloud?

For many organisations migrating to cloud, the opportunity to run workloads from energy-efficient cloud data centres is a significant advantage. However, carbon emissions can vary from one country to another and if left unmonitored, will gradually increase over time as cloud use grows. This issue will become increasingly important as we move into the era of compute-intensive AI and the burden of cloud on natural resources will shift further into the spotlight.

The International Energy Agency (IEA) estimates that data centres are responsible for up to 1.5% of global electricity use and 1% of GHG emissions. Cloud providers have recognised this and are committed to change. Between 2025 and 2030, all hyperscalers – AWS, Azure, Google, and Oracle included – expect to power their global cloud operations entirely with renewable sources.

Chasing the Sun

Cloud providers are shifting their sights from simply matching electricity use with renewable power purchase agreements (PPAs) to the more ambitious goal of operating 24/7 on carbon-free sources. A defining characteristic of renewables, though, is intermittency, with production levels fluctuating based on the availability of sunlight and wind. Leading cloud providers are using AI to dynamically distribute compute workloads throughout the day to regions with lower carbon intensity. Workloads that are processed with solar power during daylight can be shifted to nearby regions with abundant wind energy at night.
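
In its simplest form, carbon-aware placement reduces to choosing the permitted region with the lowest current grid intensity. A simplified Python sketch – the region names and intensity values are hypothetical, and real schedulers also weigh latency, cost, and data-residency constraints:

```python
# Simplified carbon-aware placement: route a movable batch workload to the
# allowed region with the lowest current grid carbon intensity (gCO2/kWh).
def pick_region(carbon_intensity: dict[str, float], allowed: set[str]) -> str:
    candidates = {r: v for r, v in carbon_intensity.items() if r in allowed}
    return min(candidates, key=candidates.get)

intensity_now = {"ap-southeast-2": 520.0, "ap-south-1": 610.0, "us-west-2": 140.0}
print(pick_region(intensity_now, allowed={"ap-southeast-2", "us-west-2"}))  # us-west-2
```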

Addressing Water Scarcity

Many of the largest cloud data centres are situated in sunny locations to take advantage of solar power and proximity to population centres. Unfortunately, this often means that they are also in areas where water is scarce. While liquid-cooled facilities are energy efficient, local communities are concerned about the strain on water sources. Data centre operators are now committing to reduce consumption and restore water supplies. Simple measures, such as expanding humidity (below 20% RH) and temperature (above 30°C) tolerances in server rooms, have helped companies like Meta to cut wastage. Similarly, Google has increased its reliance on non-potable sources, such as grey water and sea water.

From Waste to Worth

Data centre operators have identified innovative ways to reuse the excess heat generated by their computing equipment. Some have used it to heat adjacent swimming pools, while others have warmed rooms that house vertical farms. Although these initiatives currently do little to reduce the overall environmental footprint of cloud, they suggest a future where waste is significantly reduced.

Greening the Grid

The giant facilities that cloud providers use to house their computing infrastructure are also set to change. Building materials and construction account for an astonishing 11% of global carbon emissions. Using recycled materials in concrete and investing in greener methods of manufacturing steel are approaches the construction industry is taking to lessen its impact. Smaller data centres have been 3D printed to accelerate construction and use recyclable printing concrete. While this approach may not be suitable for hyperscale facilities, it holds potential for smaller edge locations.

Rethinking Hardware Management

Cloud providers rely on their scale to provide fast, resilient, and cost-effective computing. In many cases, simply replacing malfunctioning or obsolete equipment would achieve these goals better than performing maintenance. However, the relentless growth of e-waste is putting pressure on cloud providers to participate in the circular economy. Microsoft, for example, has launched three Circular Centres to repurpose cloud equipment. During the pilot of their Amsterdam centre, it achieved 83% reuse and 17% recycling of critical parts. The lifecycle of equipment in the cloud is largely hidden but environmentally conscious users will start demanding greater transparency.

Recommendations

Organisations should be aware of their cloud-derived scope 3 emissions and consider broader environmental issues around water use and recycling. Here are the steps that can be taken immediately:

  1. Monitor GreenOps. Cloud providers are adding GreenOps tools, such as the AWS Customer Carbon Footprint Tool, to help organisations measure the environmental impact of their cloud operations. Understanding the relationship between cloud use and emissions is the first step towards sustainable cloud operations.
  2. Adopt Cloud FinOps for Quick ROI. Eliminating wasted cloud resources not only cuts costs but also reduces electricity-related emissions. Tools such as CloudVerse provide visibility into cloud spend, identify unused instances, and help to optimise cloud operations.
  3. Take a Holistic View. Cloud providers are being forced to improve transparency and reduce their environmental impact by their biggest customers. Getting educated on the actions that cloud partners are taking to minimise emissions, water use, and waste to landfill is crucial. In most cases, dedicated cloud providers should reduce waste rather than offset it.
  4. Enable Remote Workforce. Cloud-enabled security and networking solutions, such as SASE, allow employees to work securely from remote locations and reduce their transportation emissions. With a SASE deployed in the cloud, routine management tasks can be performed by IT remotely rather than at the branch, further reducing transportation emissions.
Beyond Reality: The Rise of Deepfakes

In the Ecosystm Predicts: Building an Agile & Resilient Organisation: Top 5 Trends in 2024​, Principal Advisor Darian Bird said, “The emergence of Generative AI combined with the maturing of deepfake technology will make it possible for malicious agents to create personalised voice and video attacks.” Darian highlighted that this democratisation of phishing, facilitated by professional-sounding prose in various languages and tones, poses a significant threat to potential victims who rely on misspellings or oddly worded appeals to detect fraud. As we see more of these attacks and social engineering attempts, it is important to improve defence mechanisms and increase awareness. 

Understanding Deepfake Technology 

The term Deepfake is a combination of the words ‘deep learning’ and ‘fake’. Deepfakes are AI-generated media, typically in the form of images, videos, or audio recordings. These synthetic content pieces are designed to appear genuine, often leading to the manipulation of faces and voices in a highly realistic manner. Deepfake technology has gained spotlight due to its potential for creating convincing yet fraudulent content that blurs the line of reality. 

Deepfake algorithms are powered by Generative Adversarial Networks (GANs) and continuously enhance synthetic content to closely resemble real data. Through iterative training on extensive datasets, these algorithms refine features such as facial expressions and voice inflections, ensuring a seamless emulation of authentic characteristics.  
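
To illustrate the adversarial mechanic, here is a toy PyTorch sketch – trained on 1-D synthetic data rather than faces or voices – in which a generator learns to mimic a target distribution while a discriminator learns to tell real samples from generated ones:

```python
# Toy sketch of the GAN adversarial loop: generator vs discriminator.
# Sizes and data are illustrative, not a production deepfake model.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator step: label real samples 1, generated samples 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into labelling fakes as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(f"generated mean ~ {G(torch.randn(1000, 8)).mean().item():.2f} (target 3.0)")
```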

Deepfakes Becoming Increasingly Convincing 

Hyper-realistic deepfakes, undetectable to the human eye and ear, have become a huge threat to the financial and technology sectors. Deepfake technology has become highly convincing, blurring the line between real and fake content. One of the early examples of a successful deepfake fraud was when a UK-based energy company lost USD 243k through a deepfake audio scam in 2019, where scammers mimicked the voice of their CEO to authorise an illegal fund transfer.  

Deepfakes have evolved from audio simulations to highly convincing video manipulations where faces and expressions are altered in real-time, making it hard to distinguish between real and fake content. In 2022, for instance, a deepfake video of Elon Musk was used in a crypto scam that resulted in a loss of about USD 2 million for US consumers. This year, a multinational company in Hong Kong lost over USD 25 million when an employee was tricked into sending money to fraudulent accounts after a deepfake video call by what appeared to be his colleagues. 

Regulatory Responses to Deepfakes 

Countries worldwide are responding to the challenges posed by deepfake technology through regulations and awareness campaigns. 

  • Singapore’s Online Criminal Harms Act, that will come into effect in 2024, will empower authorities to order individuals and Internet service providers to remove or block criminal content, including deepfakes used for malicious purposes.  
  • The UAE National Programme for Artificial Intelligence released a deepfake guide to educate the public about both harmful and beneficial applications of this technology. The guide categorises fake content into shallow and deep fakes, providing methods to detect deepfakes using AI-based tools, with a focus on promoting positive uses of advanced technologies. 
  • The proposed EU AI Act aims to regulate deepfakes by imposing transparency requirements on creators, mandating them to disclose when content has been artificially generated or manipulated. 
  • South Korea passed a law in 2020 banning the distribution of harmful deepfakes. Offenders could be sentenced to up to five years in prison or fined up to USD 43k. 
  • In the US, states like California and Virginia have passed laws against deepfake pornography, while federal bills like the DEEP FAKES Accountability Act aim to mandate disclosure and counter malicious use, highlighting the diverse global efforts to address the multifaceted challenges of deepfake regulation. 

Detecting and Protecting Against Deepfakes 

Detecting deepfakes becomes increasingly challenging as the technology advances. Several methods are needed – sometimes in conjunction – to detect a convincing deepfake. These include visual inspection that focuses on anomalies, metadata analysis to examine clues about authenticity, forensic analysis for pattern and audio examination, and machine learning that uses algorithms trained on real and fake video datasets to classify new videos.
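
The machine-learning approach can be sketched as scoring sampled video frames with a binary real/fake classifier and averaging the result. In this illustrative Python snippet, the model file deepfake_classifier.pt is a hypothetical fine-tuned network outputting a single logit, not an off-the-shelf artefact, and real systems combine this with metadata and forensic analysis:

```python
# Sketch of the ML detection approach: score sampled frames with a
# hypothetical binary real/fake classifier and average the result.
import cv2
import torch

def score_video(path: str, model: torch.nn.Module, every_n: int = 30) -> float:
    model.eval()
    cap = cv2.VideoCapture(path)
    probs, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:  # sample one frame per second at ~30 fps
            x = cv2.resize(frame, (224, 224))
            x = torch.from_numpy(x).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            with torch.no_grad():
                probs.append(torch.sigmoid(model(x)).item())  # P(frame is fake)
        idx += 1
    cap.release()
    return sum(probs) / len(probs) if probs else 0.0

model = torch.load("deepfake_classifier.pt")  # hypothetical fine-tuned model
print(f"fake probability: {score_video('clip.mp4', model):.2f}")
```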

However, identifying deepfakes requires sophisticated technology that many organisations may not have access to. This heightens the need for robust cybersecurity measures. Deepfakes have driven an increase in convincing and successful phishing – and spear-phishing – attacks, and cyber leaders need to double down on cyber practices.

Defences can no longer depend on spotting these attacks. It requires a multi-pronged approach which combines cyber technologies, incidence response, and user education.  

Preventing access to users. By employing anti-spoofing measures, organisations can safeguard their email addresses from exploitation by fraudulent actors. Simultaneously, minimising access to readily available information, particularly on websites and social media, reduces the chance of spear-phishing attempts. This includes educating employees about the implications of sharing personal information and establishing clear digital footprint policies. Implementing email filtering mechanisms, whether at the server or device level, helps intercept suspicious emails; and the filtering rules need to be constantly evaluated using techniques such as IP filtering and attachment analysis.
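
A minimal Python sketch of the kind of filtering rule described here – the blocklist and extension list are hypothetical (the IPs use documentation ranges), and production filters layer many more signals such as reputation scores and content analysis:

```python
# Illustrative email filtering rules: quarantine messages from blocklisted
# sender IPs or carrying risky attachment types.
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".iso"}
BLOCKED_IPS = {"203.0.113.7", "198.51.100.23"}  # hypothetical blocklist

def should_quarantine(sender_ip: str, attachments: list[str]) -> bool:
    if sender_ip in BLOCKED_IPS:
        return True
    return any(name.lower().endswith(tuple(RISKY_EXTENSIONS)) for name in attachments)

print(should_quarantine("203.0.113.7", []))                 # True: blocklisted IP
print(should_quarantine("192.0.2.1", ["invoice.pdf.exe"]))  # True: risky attachment
```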

Employee awareness and reporting. There are many ways that organisations can increase awareness in employees starting from regular training sessions to attack simulations. The usefulness of these sessions is often questioned as sometimes they are merely aimed at ticking off a compliance box. Security leaders should aim to make it easier for employees to recognise these attacks by familiarising them with standard processes and implementing verification measures for important email requests. This should be strengthened by a culture of reporting without any individual blame. 

Securing against malware. Malware is often distributed through these attacks, making it crucial to ensure devices are well-configured and equipped with effective endpoint defences to prevent malware installation, even if users inadvertently click on suspicious links. Specific defences may include disabling macros and limiting administrator privileges to prevent accidental malware installation. Strengthening authentication and authorisation processes is also important, with measures such as multi-factor authentication, password managers, and alternative authentication methods like biometrics or smart cards. Zero trust and least privilege policies help protect organisation data and assets.   

Detection and Response. A robust security logging system is crucial, whether through off-the-shelf monitoring tools, managed services, or dedicated monitoring teams. What is more important is that the monitoring capabilities are regularly updated. Additionally, a well-defined incident response plan can swiftly mitigate harm post-incident. This requires clear procedures for various incident types and designated personnel for executing them, such as initiating password resets or removing malware. Organisations should ensure that users are informed about reporting procedures, considering potential communication challenges in the event of device compromise.

Conclusion 

The rise of deepfakes has brought forward the need for a collaborative approach. Policymakers, technology companies, and the public must work together to address the challenges posed by deepfakes. This collaboration is crucial for making better detection technologies, establishing stronger laws, and raising awareness on media literacy. 

Understanding the Difference Between Predictive and Generative AI

In my last Ecosystm Insights, I spoke about the implications of the shift from Predictive AI to Generative AI on ROI considerations of AI deployments. However, from my discussions with colleagues and technology leaders it became clear that there is a need to define and distinguish between Predictive AI and Generative AI better.

Predictive AI analyses historical data to predict future outcomes, crucial for informed decision-making and strategic planning. Generative AI unlocks new avenues for innovation by creating novel data and content. Organisations need both – Predictive AI for enhancing operational efficiencies and forecasting capabilities and Generative AI to drive innovation; create new products, services, and experiences; and solve complex problems in unprecedented ways. 

This guide aims to demystify these categories, providing clarity on their differences, applications, and examples of the algorithms they use. 

Predictive AI: Forecasting the Future

Predictive AI is extensively used in fields such as finance, marketing, healthcare and more. The core idea is to identify patterns or trends in data that can inform future decisions. Predictive AI relies on statistical, machine learning, and deep learning models to forecast outcomes. 

Key Algorithms in Predictive AI 

  • Regression Analysis. Linear and logistic regression are foundational tools for predicting a continuous or categorical outcome based on one or more predictor variables. 
  • Decision Trees. These models use a tree-like graph of decisions and their possible consequences, including chance event outcomes, resource costs and utility. 
  • Random Forest (RF). An ensemble learning method that operates by constructing a multitude of decision trees at training time to improve predictive accuracy and control over-fitting (a minimal sketch follows this list). 
  • Gradient Boosting Machines (GBM). Another ensemble technique that builds models sequentially, each new model correcting errors made by the previous ones, used for both regression and classification tasks. 
  • Support Vector Machines (SVM). A supervised machine learning model that uses classification algorithms for two-group classification problems. 
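
To ground this, here is a minimal Random Forest example with scikit-learn; the synthetic dataset stands in for real historical records such as churn logs or sensor readings:

```python
# Minimal predictive-AI sketch: a Random Forest classifying outcomes
# from (synthetic) historical records, evaluated on a holdout set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print(f"holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```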

Generative AI: Creating New Data

Generative AI, on the other hand, focuses on generating new data that is similar but not identical to the data it has been trained on. This can include anything from images, text, and videos to synthetic data for training other AI models. GenAI is particularly known for its ability to innovate, create, and simulate in ways that predictive AI cannot. 

Key Algorithms in Generative AI 

  • Generative Adversarial Networks (GANs). Comprising two networks – a generator and a discriminator – GANs are trained to generate new data with the same statistics as the training set. 
  • Variational Autoencoders (VAEs). These are generative algorithms that use neural networks for encoding inputs into a latent space representation, then reconstructing the input data based on this representation. 
  • Transformer Models. Originally designed for natural language processing (NLP) tasks, transformers can be adapted for generative purposes, as seen in models like GPT (Generative Pre-trained Transformer), which can generate coherent and contextually relevant text based on a given prompt. 
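
As a concrete example of the transformer case, this sketch generates text from a prompt with a small pretrained model (GPT-2) via the Hugging Face transformers library, assumed installed; the prompt and settings are illustrative:

```python
# Minimal generative-AI sketch: prompt-driven text generation with a
# small pretrained transformer.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("The key difference between predictive and generative AI is",
                max_new_tokens=40, num_return_sequences=1)
print(out[0]["generated_text"])
```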

Comparing Predictive and Generative AI

The fundamental difference between the two lies in their primary objectives: Predictive AI aims to forecast future outcomes based on past data, while Generative AI aims to create new, original data that mimics the input data in some form. 

The differences become clearer when we look at these examples.  

Predictive AI Examples  

  • Supply Chain Management. Analyses historical supply chain data to forecast demand, manage inventory levels, reduce costs, and improve delivery times. 
  • Healthcare. Analysing patient records to predict disease outbreaks or the likelihood of a disease in individual patients. 
  • Predictive Maintenance. Analyses historical and real-time data to preemptively identify system failures or network issues, enhancing infrastructure reliability and operational efficiency. 
  • Finance. Using historical stock prices and indicators to predict future market trends. 

Generative AI Examples  

  • Content Creation. Generating realistic images or art from textual descriptions using GANs. 
  • Text Generation. Creating coherent and contextually relevant articles, stories, or conversational responses using transformer models like GPT-3. 
  • Chatbots and Virtual Assistants. Advanced GenAI models are enhancing chatbots and virtual assistants, making them more realistic. 
  • Automated Code Generation. Using natural language descriptions to generate programming code and scripts, significantly speeding up software development processes. 

Conclusion 

Organisations that exclusively focus on Generative AI may find themselves at the forefront of innovation, by leveraging its ability to create new content, simulate scenarios, and generate original data. However, solely relying on Generative AI without integrating Predictive AI’s capabilities may limit an organisation’s ability to make data-driven decisions and forecasts based on historical data. This could lead to missed opportunities to optimise operations, mitigate risks, and accurately plan for future trends and demands. Predictive AI’s strength lies in analysing past and present data to inform strategic decision-making, crucial for long-term sustainability and operational efficiency. 

It is essential for companies to adopt a dual-strategy approach in their AI efforts. Together, these AI paradigms can significantly amplify an organisation’s ability to adapt, innovate, and compete in rapidly changing markets. 

Prepare for an Explosion in IT Services Spend

2024 and 2025 are looking good for IT services providers – particularly in Asia Pacific. All types of providers – from IT consultants to managed services VARs and systems integrators – will benefit from a few converging events.

However, amidst increasing demand, service providers are also challenged by the cost-control measures imposed within client organisations – and this is heightened by the challenge of finding and retaining their best people as competition for skills intensifies. Providers that service mid-market clients might find it hard to compete and grow without significant process automation to compensate for higher employee costs.

Why Organisations are Opting for IT Services

Choosing the Right Cost Model for IT Services

Buyers of IT services must implement strict cost-control measures and consider various approaches to align costs with business and customer outcomes, including different cost models:

Fixed-Price Contracts. These contracts set a firm price for the entire project or specific deliverables. Ideal when project scope is clear, they offer budget certainty upfront but demand detailed specifications, potentially leading to higher initial quotes due to the provider assuming more risk.

Time and Materials (T&M) Contracts with Caps. Payment is based on actual time and materials used, with negotiated caps to prevent budget overruns. Combining flexibility with cost predictability, this model offers some control over total expenses.

Performance-Based Pricing. Fees are tied to service provider performance, incentivising achievement of specific KPIs or milestones. This aligns provider interests with client goals, potentially resulting in cost savings and improved service quality.

Retainer Agreements with Scope Limits. Recurring fees are paid for ongoing services, with defined limits on work scope or hours within a given period. This arrangement ensures resource availability while containing expenses, particularly suitable for ongoing support services.

Other Strategies for Cost Efficiency and Effective Management

Technology leaders should also consider implementing some of the following strategies:

Phased Payments. Structuring payments in phases, tied to the completion of project milestones, helps manage cash flow and provides a financial incentive for the service provider to meet deadlines and deliverables. It also allows for regular financial reviews and adjustments if the project scope changes.

Cost Transparency and Itemisation. Detailed billing that itemises the costs of labour, materials, and other expenses provides transparency to verify charges, track spending against the budget, and identify areas for potential savings.

Volume Discounts and Negotiated Rates. Negotiating volume discounts or preferential rates for long-term or large-scale engagements encourages providers to offer reduced rates in exchange for a commitment to a certain volume of work or an extended contract duration.

Utilisation of Shared Services or Cloud Solutions. Opting for shared or cloud-based solutions, where feasible, offers economies of scale and reduces the need for expensive, dedicated infrastructure and resources.

Regular Review and Adjustment. Conducting regular reviews of the services and expenses with the provider to ensure alignment with the budget and objectives prepares organisations to adjust the scope, renegotiate terms, or implement cost-saving measures as needed.

Exit Strategy. Planning an exit strategy that includes provisions for contract termination and transition services protects an organisation in case the partnership needs to be dissolved.

Conclusion

Many businesses swing between insourcing and outsourcing technology capabilities – with the recent trend moving towards insourcing development and outsourcing infrastructure to the public cloud. But 2024 will see demand for all types of IT services across nearly every geography and industry. Tech services providers can bring significant value to your business – but improved management, monitoring, and governance will ensure that this value is delivered at a fair cost.

Ecosystm VendorSphere: Microsoft’s AI Vision – Initiatives & Impact

As tech providers such as Microsoft enhance their capabilities and products, they will impact business processes and technology skills, and influence other tech providers to reshape their product and service offerings. Microsoft recently organised briefing sessions in Sydney and Singapore to present their future roadmap, with a focus on their AI capabilities.

Ecosystm Advisors Achim Granzen, Peter Carr, and Tim Sheedy provide insights on Microsoft’s recent announcements and messaging.

Click here to download Ecosystm VendorSphere: Microsoft’s AI Vision – Initiatives & Impact

Ecosystm Question: What are your thoughts on Microsoft Copilot?

Tim Sheedy. The future of GenAI will not be about single LLMs getting bigger and better – it will be about the use of multiple large and small language models working together to solve specific challenges. It is wasteful to use a large and complex LLM to solve a problem that is simpler. Getting these models to work together will be key to solving industry and use case specific business and customer challenges in the future. Microsoft is already doing this with Microsoft 365 Copilot.​

Achim Granzen. Microsoft’s Copilot – a shrink-wrapped GenAI tool based on OpenAI – has become a mainstream product. Microsoft has made it available to their enterprise clients in multiple ways: for personal productivity in Microsoft 365, for enterprise applications in Dynamics 365, for developers in GitHub and Copilot Studio, and to partners to integrate Copilot into their application suites (e.g. Amdocs’ Customer Engagement Platform).​

Ecosystm Question: How, in your opinion, is the Microsoft Copilot a game changer?

Microsoft’s Customer Copyright Commitment, initially launched as Copilot Copyright Commitment, is the true game changer. 

Achim Granzen. It safeguards Copilot users from potential copyright infringement lawsuits related to data used for algorithm training or output results. In November 2023, Microsoft expanded its scope to cover commercial usage of their OpenAI interface as well. ​

This move not only protects commercial clients using Microsoft’s GenAI products but also extends to any GenAI solutions built by their clients. This initiative significantly reduces a key risk associated with GenAI adoption, outlined in the product terms and conditions.​

However, compliance with a set of Required Mitigations and Codes of Conduct is necessary for clients to benefit from this commitment, aligning with responsible AI guidelines and best practices. ​

Ecosystm Question: Where will organisations need most help on their AI journeys?

Peter Carr. Unfortunately, there is no playbook for AI. ​

  • The path to integrating AI into business strategies and operations lacks a one-size-fits-all guide. Organisations will have to navigate uncharted territories for the time being. This means experimenting with AI applications and learning from successes and failures. This exploratory approach is crucial for leveraging AI’s potential while adapting to unique organisational challenges and opportunities. So, companies that are better at agile innovation will do better in the short term. ​
  • The effectiveness of AI is deeply tied to the availability and quality of connected data. AI systems require extensive datasets to learn and make informed decisions. Ensuring data is accessible, clean, and integrated is fundamental for AI to accurately analyse trends, predict outcomes, and drive intelligent automation across various applications. ​

Ecosystm Question: What advice ​would you give organisations adopting AI?

Tim Sheedy. ​It is all about opportunities and responsibility.​

  • There is a strong need for responsible AI – at a global level, at a country level, at an industry level and at an organisational level. Microsoft (and other AI leaders) are helping to create responsible AI systems that are fair, reliable, safe, private, secure, and inclusive. There is still a long way to go, but these capabilities do not completely indemnify users of AI. They still have a responsibility to set guardrails in their own businesses about the use and opportunities for AI.​
  • AI and hybrid work are often discussed as different trends in the market, with different solution sets. But in reality, they are deeply linked. AI can help enhance and improve hybrid work in businesses – and is a great opportunity to demonstrate the value of AI and tools such as Copilot. ​

​Ecosystm Question: What should Microsoft focus on? 

Tim Sheedy. Microsoft faces a challenge in educating the market about adopting AI, especially Copilot. They need to educate business, IT, and AI users on embracing AI effectively. Additionally, they must educate existing partners and find new AI partners to drive change in their client base. Success in the race for knowledge workers requires not only being first but also helping users maximise solutions. Today, customers have limited visibility of Copilot’s capabilities. Improving customer upskilling and enhancing tools that prompt users to leverage capabilities will contribute to Microsoft’s (or their competitors’) success in dominating the AI tool market.

Peter Carr. Grassroots businesses form the economic foundation of the Asia Pacific economies. Typically, these businesses do not engage with global SIs (GSIs), which drive Microsoft’s new service offerings. This leads to an adoption gap in the sector that could benefit most from operational efficiencies. To bridge this gap, Microsoft must empower non-GSI partners and managed service providers (MSPs) at the local and regional levels; they won’t achieve their goal of democratising AI unless they do. Microsoft has the potential to advance AI technology while ensuring fair and widespread adoption.
