Shifting Perspectives: Generative AI’s Impact on Tech Leaders


Over the past year, many organisations have explored Generative AI and LLMs, with some successfully identifying, piloting, and integrating suitable use cases. As business leaders push tech teams to implement additional use cases, the repercussions on their roles will become more pronounced. Embracing GenAI will require a mindset reorientation, and tech leaders will see substantial impact across various ‘traditional’ domains.

AIOps and GenAI Synergy: Shaping the Future of IT Operations

When discussing AIOps adoption, there are commonly two responses: “Show me what you’ve got” or “We already have a team of Data Scientists building models”. The former usually indicates executive sponsorship without a specific business case, resulting in a lukewarm response to many pre-built AIOps solutions because they are not anchored to a defined business problem. Organisations with dedicated Data Scientist teams face a different challenge: while these teams can create impressive models, they often face pushback from the business because the solutions do not address operational or business needs. The gap usually lies in the Data Scientists’ limited visibility into the operational context behind the data, which hinders the development of use cases that genuinely align with business needs.

The most effective approach lies in adopting an AIOps Framework. Incorporating GenAI into AIOps frameworks can enhance their effectiveness, enabling improved automation, intelligent decision-making, and streamlined operational processes within IT operations.

This allows active business involvement in defining and validating use cases, while enabling Data Scientists to focus on model building. It bridges the gap between technical expertise and business requirements, ensuring that AIOps initiatives are informed by the capabilities of GenAI, address specific operational challenges, and resonate with the organisation’s goals.
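To make this concrete, below is a minimal, illustrative Python sketch of how a GenAI step might slot into an AIOps pipeline: a simple statistical check flags anomalous telemetry, and the anomaly context is packaged into a natural-language prompt for an LLM to summarise for the operations team. The summarise_with_llm stub is a hypothetical stand-in for whichever model endpoint an organisation uses.

    import statistics

    def detect_anomalies(samples, threshold=2.0):
        """Flag values more than `threshold` standard deviations from the mean."""
        mean = statistics.mean(samples)
        stdev = statistics.stdev(samples)
        return [x for x in samples if stdev and abs(x - mean) / stdev > threshold]

    def build_incident_prompt(metric_name, anomalies):
        """Turn raw anomaly data into a natural-language request for a GenAI model."""
        return (
            f"Metric '{metric_name}' recorded anomalous values {anomalies}. "
            "Summarise the likely operational impact and suggest next steps "
            "for the on-call engineer."
        )

    def summarise_with_llm(prompt):
        # Hypothetical stand-in: wire this to your organisation's GenAI service.
        raise NotImplementedError

    cpu_load = [41, 39, 44, 40, 43, 97, 42, 38]  # sample telemetry
    spikes = detect_anomalies(cpu_load)
    if spikes:
        print(build_incident_prompt("cpu_load_percent", spikes))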

The Next Frontier of IT Infrastructure

Many companies adopting GenAI are openly evaluating public cloud-based solutions like ChatGPT or Microsoft Copilot against on-premises alternatives, grappling with the trade-offs between scalability and convenience versus control and data security.

Cloud-based GenAI offers easy access to computing resources without substantial upfront investments. However, companies face challenges in relinquishing control over training data, potentially leading to inaccurate results or “AI hallucinations,” and concerns about exposing confidential data. On-premises GenAI solutions provide greater control, customisation, and enhanced data security, ensuring data privacy, but require significant hardware investments due to unexpectedly high GPU demands during both the training and inferencing stages of AI models.

Hardware companies are focusing on innovating and enhancing their offerings to meet the increasing demands of GenAI. The evolution and availability of powerful and scalable GPU-centric hardware solutions are essential for organisations to effectively adopt on-premises deployments, enabling them to access the necessary computational resources to fully unleash the potential of GenAI. Collaboration between hardware development and AI innovation is crucial for maximising the benefits of GenAI and ensuring that the hardware infrastructure can adequately support the computational demands required for widespread adoption across diverse industries. Innovations in hardware architecture, such as neuromorphic computing and quantum computing, hold promise in addressing the complex computing requirements of advanced AI models.

The synchronisation between hardware innovation and GenAI demands will require technology leaders to re-skill themselves on what they have done for years – infrastructure management.

The Rise of Event-Driven Designs in IT Architecture

IT leaders traditionally relied on three-tier architectures – presentation for the user interface, application for logic and processing, and data for storage. Despite their structured approach, these architectures often lacked scalability and real-time responsiveness. The advent of microservices, containerisation, and serverless computing facilitated event-driven designs, enabling dynamic responses to real-time events and enhancing agility and scalability. Event-driven designs are a paradigm shift away from traditional approaches, decoupling components and using events as the central communication mechanism. User actions, system notifications, or data updates trigger actions across distributed services, adding flexibility to the system.
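As a minimal illustration of the pattern (the names here are ours, not from any specific framework), the Python sketch below shows a tiny in-process event bus: components publish and subscribe to named events rather than calling each other directly, which is exactly the decoupling described above.

    from collections import defaultdict

    class EventBus:
        """Tiny in-process event bus: components communicate via events, not direct calls."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, event_type, handler):
            self._subscribers[event_type].append(handler)

        def publish(self, event_type, payload):
            for handler in self._subscribers[event_type]:
                handler(payload)

    bus = EventBus()

    # Independent services react to the same event without knowing about each other.
    bus.subscribe("order.created", lambda o: print(f"Billing: invoice for {o['id']}"))
    bus.subscribe("order.created", lambda o: print(f"Warehouse: pick items for {o['id']}"))

    bus.publish("order.created", {"id": "A-1001", "total": 99.50})

In a production system the in-process bus would typically be replaced by a message broker or an event streaming platform, but the decoupling principle is the same.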

However, adopting event-driven designs presents challenges, particularly for high-volume, transaction-driven workloads, where the speed of serverless function calls can significantly impact architectural design. While serverless computing offers scalability and flexibility, the latency introduced by initiating and executing serverless functions may pose challenges for systems that demand rapid, real-time responses. The increasing reliance on event-driven architectures underscores the need for advancements in hardware and compute power. Transitioning from legacy architectures can also be complex and may require a phased approach, with cultural shifts demanding adjustments and comprehensive training initiatives.

The shift to event-driven designs challenges IT Architects, whose traditional roles involved designing, planning, and overseeing complex systems. With GenAI and automation enhancing design tasks, Architects will need to transition to more strategic and visionary roles. GenAI showcases capabilities in pattern recognition, predictive analytics, and automated decision-making, promoting a symbiotic relationship with human expertise. This evolution doesn’t replace Architects but signifies a shift toward collaboration with AI-driven insights.

IT Architects need to evolve their skill set, blending technical expertise with strategic thinking and collaboration. This changing role will drive innovation, creating resilient, scalable, and responsive systems to meet the dynamic demands of the digital age.

Whether your organisation is evaluating or implementing GenAI, the need to upskill your tech team remains imperative. The evolution of AI technologies has disrupted the tech industry and the people who work in it. Now is the opportune moment to acquire new skills and adapt tech roles to leverage the potential of GenAI rather than being disrupted by it.

AI Legislations Gain Traction: What Does it Mean for AI Risk Management?


It’s been barely one year since we entered the Generative AI Age. On November 30, 2022, OpenAI launched ChatGPT, with no fanfare or promotion. Since then, Generative AI has become arguably the most talked-about tech topic, both in terms of opportunities it may bring and risks that it may carry.

The landslide success of ChatGPT and other Generative AI applications with consumers and businesses has put a renewed and strengthened focus on the potential risks associated with the technology – and how best to regulate and manage these. Government bodies and agencies have issued voluntary guidelines for the use of AI for a number of years now (Singapore’s Model Artificial Intelligence Governance Framework, for example, was launched in 2019).

There is no active legislation on the development and use of AI yet. Crucially, however, a number of initiatives are currently making their way through legislative processes globally.

EU’s Landmark AI Act: A Step Towards Global AI Regulation

The European Union’s “Artificial Intelligence Act” is a leading example. The European Commission (EC) started examining AI legislation in 2020 with a focus on

  • Protecting consumers
  • Safeguarding fundamental rights, and
  • Avoiding unlawful discrimination or bias

The EC published an initial legislative proposal in 2021, and the European Parliament adopted a revised version as its official position on AI in June 2023, moving the legislative process to its final phase.

This proposed EU AI Act takes a risk management approach to regulating AI. Organisations looking to employ AI must take note: an internal risk management approach to deploying AI would essentially be mandated by the Act. It is likely that other legislative initiatives will follow a similar approach, making the AI Act a potential role model for global legislations (following the trail blazed by the General Data Protection Regulation). The “G7 Hiroshima AI Process”, established at the G7 summit in Japan in May 2023, is a key example of international discussion and collaboration on the topic (with a focus on Generative AI).

Risk Classification and Regulations in the EU AI Act

At the heart of the AI Act is a system to assess the risk level of AI technology, classify the technology (or its use case), and prescribe appropriate regulations to each risk class.

[Figure: Risk levels of the proposed EU AI Act]

For each of these four risk levels, the AI Act proposes a set of rules and regulations. Evidently, the regulatory focus is on High-Risk AI systems.

[Figure: The four risk levels of the AI Act and the rules proposed for each]
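As an illustration of the Act’s classification logic (a sketch only: the four categories come from the proposal, while the obligations are paraphrased rather than quoted), the risk-to-regulation mapping could be modelled as a simple lookup:

    from enum import Enum

    class RiskLevel(Enum):
        UNACCEPTABLE = "unacceptable"  # e.g. social scoring - prohibited outright
        HIGH = "high"                  # e.g. hiring, credit scoring - heavily regulated
        LIMITED = "limited"            # e.g. chatbots - transparency obligations
        MINIMAL = "minimal"            # e.g. spam filters - largely unregulated

    OBLIGATIONS = {
        RiskLevel.UNACCEPTABLE: "Prohibited",
        RiskLevel.HIGH: "Risk management system, conformity assessment, human oversight",
        RiskLevel.LIMITED: "Users must be informed they are interacting with AI",
        RiskLevel.MINIMAL: "No additional obligations",
    }

    def required_controls(level: RiskLevel) -> str:
        return OBLIGATIONS[level]

    print(required_controls(RiskLevel.HIGH))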

Contrasting Approaches: EU AI Act vs. UK’s Pro-Innovation Regulatory Approach

The AI Act has received its share of criticism, and somewhat different approaches are being considered, notably in the UK. One line of criticism concerns the lack of clarity and the vagueness of key concepts, particularly around person-related data and systems. Another concerns the strong focus on protecting rights and individuals, and highlights the potential negative economic impact for EU organisations looking to leverage AI and for EU tech companies developing AI systems.

A white paper by the UK government published in March 2023, perhaps tellingly titled “A pro-innovation approach to AI regulation”, emphasises a “pragmatic, proportionate regulatory approach … to provide a clear, pro-innovation regulatory environment”. The paper describes an approach that aims to balance the protection of individuals with economic advancement as the UK works to become an “AI superpower”.

Further aspects of the EU AI Act are currently being critically discussed. For example, the current text exempts from regulation all open-source AI components that are not part of a medium- or higher-risk system, but it lacks clear definitions and gives little consideration to how such components might proliferate.

Adopting AI Risk Management in Organisations: The Singapore Approach

Regardless of how exactly AI regulations will turn out around the world, organisations must start today to adopt AI risk management practices. There is an added complexity: while the EU AI Act does clearly identify high-risk AI systems and example use cases, the realisation of regulatory practices must be tackled with an industry-focused approach.

The approach taken by the Monetary Authority of Singapore (MAS) is a prime example of an industry-focused approach to AI risk management. The Veritas Consortium, led by MAS, is a partnership of public, private, and technology sector organisations aiming to guide the financial services sector on the responsible use of AI. As there is no AI legislation in Singapore to date, the consortium currently builds on Singapore’s aforementioned Model Artificial Intelligence Governance Framework. Additional initiatives are already underway to focus specifically on Generative AI for financial services, and to build a globally aligned framework.

To Comply with Upcoming AI Regulations, Risk Management is the Path Forward

As AI regulation initiatives move from voluntary recommendations to legislation globally, a risk management approach is at the core of all of them. Adding risk management capabilities for AI is the path forward for organisations looking to deploy AI-enhanced solutions and applications. As that task can be daunting, an industry consortium approach can help navigate the challenges and align on implementation and realisation strategies for AI risk management across the industry. Until AI legislation is in place, such industry consortia can chart the way for their industry – organisations should seek to participate now to gain a head start with AI.

AI Will be the “Next Big Thing” in End-User Computing


I have spent many years analysing the mobile and end-user computing markets – going all the way back to 1995, when I was part of a Desktop PC research team, then running the European wireless and mobile comms practice, through my time at 3 Mobile in Australia, and many years after, helping clients with their end-user computing strategies. I have watched the birth of mobile data services (GPRS, WAP, and so on, through 3G, 4G, and 5G), the move from simple phones to powerful foldable devices, and the shift from desktop computers to a complex array of mobile computing devices that meet the many and varied employee needs. I am always looking for the “next big thing” – and there have been some significant milestones: Palm devices, Blackberries, the iPhone, Android, foldables, wearables, and smaller, thinner, faster, more powerful laptops.

But over the past few years, innovation in this space has tailed off. Outside of the foldable space (which is already four years old), the major benefits of new devices are faster processors, brighter screens, and better cameras. I review a lot of great computers too (like many of the recent Surface devices) – and while they are continuously improving, not much has got my clients or me “excited” over the past few years (outside of some of the very cool accessibility initiatives). 

The Force of AI 

But this is all about to change. Devices are going to get smarter based on their data ecosystem, the cloud, and AI-specific local processing power. To be honest, this has been happening for some time – but most of the “magic” has been invisible to us. It happened when cameras took multiple shots and selected the best one; it happened when pixels were sharpened and images got brighter, better, and more attractive; it happened when digital assistants were called upon to answer questions and provide context.  

Microsoft, among others, is about to make AI smarts more front and centre of the experience – Windows Copilot will add a smart assistant that can not only advise but execute on that advice. It will help employees improve their focus and productivity, summarise documents and long chat threads, select music, distribute content to the right audience, and find connections. Added to Microsoft 365 Copilot, it will help knowledge workers spend less time searching and reading – and more time doing and improving.

The greater integration of public and personal data with “intent insights” will also play out on our mobile devices. We are likely to see the emergence of the much-promised “integrated app” – one that can take on many of the tasks we currently undertake across multiple applications, mobile websites, and sometimes even multiple devices. This will initially be through the use of public LLMs like Bard and ChatGPT, but as more custom, private models emerge, they will serve very specific functions.

Focused AI Chips will Drive New Device Wars 

In parallel to these developments, we expect the emergence of very specific AI processors that are paired to very specific AI capabilities. As local processing power becomes a necessity for some AI algorithms, the broad CPUs – and even the AI-focused ones (like Google’s Tensor Processor) – will need to be complemented by specific chips that serve specific AI functions. These chips will perform the processing more efficiently – preserving the battery and improving the user experience.  

While this will be a longer-term trend, it is likely to significantly change the game for what can be achieved locally on a device – enabling capabilities that are not in the realm of imagination today. They will also spur a new wave of device competition and innovation – with a greater desire to be on the “latest and greatest” devices than we see today! 

So, while the levels of device innovation have flattened, AI-driven software and chipset innovation will see current and future devices enable new levels of employee productivity and consumer capability. The focus in 2023 and beyond needs to be less on the hardware announcements and more on the platforms and tools. End-user computing strategies need to be refreshed with a new perspective around intent and intelligence. The persona-based strategies of the past have to be changed in a world where form factors and processing power are less relevant than outcomes and insights. 

Navigating the Financial Frontier: Point Zero Forum 2023


After the resounding success of the inaugural event last year, Ecosystm is once again partnering with Elevandi and the Swiss State Secretariat for International Finance (SIF) as a knowledge partner for the Point Zero Forum 2023. In this Ecosystm Insights, our guest author Jaskaran Bhalla, Content Lead, Elevandi, talks about the Point Zero Forum 2023 and how it is set to explore digital assets, sustainability, and AI in an ever-evolving Financial Services landscape.

The Point Zero Forum is returning for its second edition from 26 to 28 June 2023 in Zurich, Switzerland. The inaugural Forum, held in June 2022, attracted over 1,000 leaders and featured more than 200 esteemed speakers from Europe, Asia Pacific, the USA, and MENA. The Forum is a collaboration between the Swiss State Secretariat for International Finance (SIF) and Elevandi and is organised in cooperation with the BIS Innovation Hub, the Monetary Authority of Singapore (MAS), and the Swiss National Bank.

As we gear up for this year’s Point Zero Forum, let’s take a moment to reflect on some of the pivotal developments that have shaped the Financial Services industry since the previous Forum and also moulded the three key themes that will take centre stage this year: Sustainability, Artificial Intelligence (AI), and Digital Assets.

COP27, the rise of blended finance, and the groundbreaking Net-Zero Data Public Utility

In November 2022, the Government of the Arab Republic of Egypt hosted the 27th session of the Conference of the Parties of the UNFCCC (COP27), with a view to accelerating the transition to a low-carbon future. In the build-up to COP27, Ravi Menon, the Managing Director of the MAS, spoke at the inaugural Transition Finance towards Net-Zero conference and shared that the world is currently not on a trajectory to achieve net-zero emissions by 2050; according to the UN Emissions Gap Report 2021, based on the current policies in place, the world is 55% short of the emissions reduction target for 2030. He also elaborated on the significant role that blended finance can play in tackling climate change, a theme that resonated widely with global leaders at COP27. To enable easy and transparent reporting on climate commitments, the Climate Data Steering Committee (CDSC) outlined at COP27 the next steps in its recommended plans for the Net-Zero Data Public Utility (NZDPU). The NZDPU aims to aid the transition to a net-zero economy by addressing data gaps, inconsistencies, and barriers to information that slow climate action.

The Point Zero Forum 2023 will deep-dive into the data, technologies, and capital and risk management solutions that can accelerate the fair transition towards a low-carbon future.

Panel Discussion Highlight: The opening panel discussion, “Data for Net-Zero: Views from the Climate Data Steering Committee,” scheduled for 26 June, will feature members of the CDSC, which include the Financial Conduct Authority, the MAS, Glasgow Financial Alliance for Net Zero (GFANZ), and the Swiss State Secretariat for International Finance. The panel will discuss the role of new technologies and collaborative platforms in promoting greater accessibility of transition data and innovative business models.

The launch of ChatGPT by OpenAI and its record for the fastest 100M monthly active users

The launch of ChatGPT by OpenAI on 30 November 2022 led to widespread adoption by users globally – eventually setting the record for the fastest-growing consumer application, hitting 100M monthly active users by February 2023. While users rushed to share the enormous efficiency gains achieved with ChatGPT, it also quickly became a disruptive tool for spreading fake news.

The Point Zero Forum 2023 will deep-dive into Generative AI’s potential for enhancing efficiency, improving risk management, and providing better customer experience in the Financial Services industry, while highlighting the need for ensuring fair, ethical, accountable, and transparent use of these technologies.

Panel Discussion Highlight: The session “Breaking New Ground with Generative AI: Project MindForge”, scheduled for 27 June, will feature global leaders from NVIDIA, the MAS, Citigroup and Bloomberg. The panel will discuss the opportunities of Generative AI for the Financial Services sector.

MiCA regulation gets adopted by the EU lawmakers and sets a precedent for digital asset regulations

More than 2.5 years after it was first proposed, the EU Markets in Crypto-Assets (MiCA) regulation was approved by the EU Parliament in April 2023. While there is still work to be done to implement MiCA and measure its success, and to answer open questions around regulation for out-of-scope assets (like DeFi and NFTs), the digital assets industry is keenly observing whether MiCA could serve as a template for global crypto regulation. In May 2023, the International Organization of Securities Commissions (IOSCO), the global standard setter for securities markets, joined the global discussion on digital asset regulation by issuing detailed recommendations, for consultation, on how jurisdictions across the globe should regulate crypto assets.

The Point Zero Forum 2023 will do a stocktake on key global regulatory frameworks, market infrastructure, and use cases for the widespread adoption of digital assets, asset tokenisation, and distributed ledger technology.

Panel Discussion Highlight: The sessions “State of Global Digital Asset Regulation: Navigating Opportunities in an Evolving Landscape” and “Interoperability and Regulatory Compliance: Building the Future of Digital Asset Infrastructure”, scheduled on 26 and 27 June respectively, will feature global leaders from both public sector (such as the MAS, Bank of Italy, Bank of Thailand, U.S. Commodity Futures Trading Commission, EU Parliament) and private sector organisations (such as JP Morgan, Sygnum, SBI Digital Assets, Chainalysis, GBBC, SIX Digital Exchange). The discussions will centre around digital asset regulations and key considerations in the rapidly evolving world of digital assets.

Point Zero Forum - Registration

Register here at https://www.pointzeroforum.com/registration. Receive 10% off the Industry Pass by entering the code ‘JB10’ at checkout. (Policymakers, regulators, think tanks, and academics receive complimentary access. Founders of tech companies incorporated for less than three years can apply for a discounted Founder’s Pass.)

Your Organisation Needs an AI Ethics Policy TODAY!


It is not hyperbole to state that AI is on the cusp of having significant implications for society, business, economies, governments, individuals, cultures, politics, the arts, manufacturing, customer experience… I think you get the idea! We cannot overstate the impact that AI will have on society. In times gone by, businesses tested ideas, new products, or services with small customer segments before they went live. But with AI, we are all part of the experiment on its impacts on society – its benefits, use cases, weaknesses, and threats.

What seemed preposterous just six months ago is not only possible but EASY! Do you want a virtual version of yourself, a friend, your CEO, or your deceased family member? Sure – just feed the data. Will succession planning be more about recording all conversations and interactions with an executive so their avatar can make the decisions when they leave? Why not? How about you turn the thousands of hours of recorded customer conversations with your contact centre team into a virtual contact centre team? Your head of product can present in multiple countries in multiple languages, tailored to the customer segments, industries, geographies, or business needs at the same moment.  

AI has the potential to create digital clones of your employees, it can spread fake news as easily as real news, it can be used for deception as easily as for benefit. Is your organisation prepared for the social, personal, cultural, and emotional impacts of AI? Do you know how AI will evolve in your organisation?  

When we focus on the future of AI, we often interview AI leaders, business leaders, futurists, and analysts. I haven’t seen enough focus on psychologists, sociologists, historians, academics, counsellors, or even regulators! The Internet and social media changed the world more than we ever imagined – at this stage, it looks like those two were just a rehearsal for the real show: Artificial Intelligence.

Lack of Government or Industry Regulation Means You Need to Self-Regulate 

These rapid developments – and the notable silence from governments, lawmakers, and regulators – make the requirement for an AI Ethics Policy for your organisation urgent! Even if you have one, it probably needs updating, as the scenarios that AI can operate within are growing and changing literally every day.  

  • For example, your customer service team might want to create a virtual customer service agent from a real person. What is the policy on this? How will it impact the person? 
  • Your marketing team might be using ChatGPT or Bard for content creation. Do you have a policy specifically for the creation and use of content using assets your business does not own?  
  • What data is acceptable to be ingested by a public Large Language Model (LLM)? And are you governing data at creation and publishing to ensure these policies are met? (See the sketch after this list.)
  • With the impending public launch of Microsoft’s Copilot AI service, what data can be ingested by Copilot? How are you governing the distribution of the insights that come out of that capability?
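As a minimal sketch of the kind of guardrail these questions point to (the classification labels and rules here are invented for illustration, not a standard), a pre-ingestion check might look like this:

    # Illustrative data-governance check before content is sent to a public LLM.
    ALLOWED_FOR_PUBLIC_LLM = {"public", "marketing-approved"}

    def may_ingest(document: dict) -> bool:
        """Return True only if the document's classification permits external LLM use."""
        classification = document.get("classification", "unclassified")
        return classification in ALLOWED_FOR_PUBLIC_LLM

    press_release = {"title": "Product launch", "classification": "public"}
    contract = {"title": "Customer contract", "classification": "confidential"}

    assert may_ingest(press_release)
    assert not may_ingest(contract)  # confidential data stays out of public models

The point is less the code than the prerequisite: such a check only works if data is classified (“tagged”) at creation, which is exactly the gap most organisations have today.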

If policies are not put in place, data tagged, and staff trained before using a tool such as Copilot, your business will likely break some privacy or employment laws – on the very first day!

What do the LLMs Say About AI Ethics Policies? 

So where do you go when looking for an AI Ethics policy? ChatGPT and Bard of course! I asked the two for a modern AI Ethics policy. 

You can read what they generated in the graphic below.

[Graphic: AI Ethics Policy drafts generated by ChatGPT and Bard]

I personally prefer the ChatGPT (GPT-4) version as it is more prescriptive. At the same time, I would argue that MOST of the AI tools your business has access to today don’t meet all of these principles. And while they are tools, and ethics should dictate the way the tools are used, with AI you cannot always separate the process and outcome from the tool.

For example, a tool that is inherently designed to learn an employee’s character, style, or mannerisms cannot be unbiased if it is based on a biased opinion (and humans have biases!).  

LLMs take data, content, and insights created by others, and give it to their customers to reuse. Are you happy with your website being used as a tool to train a startup on the opportunities in the markets and customers you serve?  

By making content public, you acknowledge the risk of others using it. But at least they visited your website or app to consume it. Not anymore… 

A Policy is Useless if it Sits on a Shelf 

Your AI ethics policy needs to be more than a published document. It should be the beginning of a conversation across the entire organisation about the use of AI. Your employees need to be trained in the policy. It needs to be part of the culture of the business – particularly as low and no-code capabilities push these AI tools, practices, and capabilities into the hands of many of your employees.  

Nearly every business leader I interview mentions that their organisation is an “intelligent, data-led business”. What is the role of AI in driving this intelligent business? If being data-driven and analytical is in the DNA of your organisation, soon AI will also be at the heart of your business. You might think you can delay your investments to get it right – but your competitors may be ahead of you.

So, as you jump head-first into the AI pool, start to create, improve and/or socialise your AI Ethics Policy. It should guide your investments, protect your brand, empower your employees, and keep your business resilient and compliant with legacy and new legislation and regulations. 

Google’s AI-Powered Code Generator Takes on GitHub Copilot


Google recently extended its Generative AI, Bard, to include coding in more than 20 programming languages, including C++, Go, Java, JavaScript, and Python. The search giant has been eager to respond to last year’s launch of ChatGPT, but as the trusted incumbent, it has naturally been hesitant to move too quickly. The tendency of large language models (LLMs) to produce controversial and erroneous outputs has the potential to tarnish established brands. Google Bard was released in March in the US and the UK as an LLM but lacked the coding ability of OpenAI’s ChatGPT and Microsoft’s Bing Chat.

Bard’s new features include code generation, optimisation, debugging, and explanation. Using natural language, users can describe their requirements to the AI and ask it to generate code that can then be exported to an integrated development environment (IDE) or executed directly in the browser with Google Colab. Similarly, users can ask Bard to debug existing code, explain code snippets, or optimise code to improve performance.
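As a flavour of the kind of request and output involved (illustrative only; actual Bard output will vary), a user might describe a requirement in plain English and receive something like the following, ready to export to an IDE or run in Colab:

    # Prompt (natural language): "Write a Python function that returns the n-th
    # Fibonacci number efficiently, and explain the approach."
    #
    # Illustrative generated output:
    def fibonacci(n: int) -> int:
        """Iterative Fibonacci: O(n) time and O(1) space, avoiding naive recursion."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fibonacci(10))  # 55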

Google continues to refer to Bard as an experiment and highlights that as is the case with generated text, code produced by the AI may not function as expected. Regardless, the new functionality will be useful for both beginner and experienced developers. Those learning to code can use Generative AI to debug and explain their mistakes or write simple programs. More experienced developers can use the tool to perform lower-value work, such as commenting on code, or scaffolding to identify potential problems.

GitHub Copilot X to Face Competition

While the ability for Bard, Bing, and ChatGPT to generate code is one of their most important use cases, developers are now demanding AI directly in their IDEs.

In March, Microsoft made one of its most significant announcements of the year when it demonstrated GitHub Copilot X, which embeds GPT-4 in the development environment. Earlier this year, Microsoft invested $10 billion into OpenAI to add to the $1 billion from 2019, cementing the partnership between the two AI heavyweights. Among other benefits, this agreement makes Azure the exclusive cloud provider to OpenAI and provides Microsoft with the opportunity to enhance its software with AI co-pilots.

Currently under technical preview, Copilot X will, when it eventually launches, integrate into Visual Studio, Microsoft’s IDE. Presented as a sidebar or chat directly in the IDE, Copilot X will be able to generate, explain, and comment on code, debug, write unit tests, and identify vulnerabilities. The “Hey, GitHub” functionality will allow users to chat using voice, suitable for mobile users or more natural interaction on a desktop.

Not to be outdone by its cloud rivals, in April AWS announced the general availability of what it describes as a real-time AI coding companion. Amazon CodeWhisperer integrates with a range of IDEs, namely Visual Studio Code, IntelliJ IDEA, CLion, GoLand, WebStorm, Rider, PhpStorm, PyCharm, RubyMine, and DataGrip, or natively in AWS Cloud9 and the AWS Lambda console. While the preview worked for Python, Java, JavaScript, TypeScript, and C#, the general release extends support to most languages. Amazon’s key differentiation is that CodeWhisperer is free for individual users, while GitHub Copilot is currently subscription-based, with exceptions only for teachers, students, and maintainers of open-source projects.

The Next Step: Generative AI in Security

The next battleground for Generative AI will be assisting overworked security analysts. Currently, some of the greatest challenges that Security Operations Centres (SOCs) face are being understaffed and overwhelmed with the number of alerts. Security vendors, such as IBM and Securonix, have already deployed automation to reduce alert noise and help analysts prioritise tasks to avoid responding to false threats.

Google recently introduced Sec-PaLM and Microsoft announced Security Copilot, bringing the power of Generative AI to the SOC. These tools will help analysts interact conversationally with their threat management systems and will explain alerts in natural language. How effective these tools will be remains to be seen, considering that hallucinations in security are far riskier than in writing an essay with ChatGPT.
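A minimal sketch of what “explaining alerts in natural language” could look like in practice (the alert schema and prompt below are our own illustration, not the API of Sec-PaLM or Security Copilot):

    import json

    alert = {
        "rule": "impossible_travel",
        "user": "j.smith",
        "locations": ["Singapore", "Frankfurt"],
        "interval_minutes": 14,
        "severity": "high",
    }

    def explain_alert_prompt(alert: dict) -> str:
        """Package a raw SIEM alert into a plain-language request for a GenAI model."""
        return (
            "Explain the following security alert to a tier-1 analyst in two "
            "sentences, and state whether immediate escalation is warranted:\n"
            + json.dumps(alert, indent=2)
        )

    print(explain_alert_prompt(alert))  # send this to the model endpoint of choice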

The Future of AI Code Generators

Although GitHub Copilot and Amazon CodeWhisperer had already launched with limited feature sets, it was the release of ChatGPT last year that ushered in a new era in AI code generation. There is now a race between the cloud hyperscalers to win over developers and to provide AI that supports other functions, such as security.

Despite fears that AI will replace humans, in its current state it is more likely to be used as a tool to augment developers. Although AI and automated testing reduce the burden on an already stretched workforce, humans will continue to be in demand to ensure code is secure and satisfies requirements. A likely scenario is that with coding becoming simpler, rather than the number of developers shrinking, the volume and quality of code written will increase. AI will generate a new wave of citizen developers able to work on projects that would previously have been impossible to start. This may, in turn, increase demand for developers to build on these proofs-of-concept.

How the Generative AI landscape evolves over the next year will be interesting. In a recent interview, OpenAI’s founder, Sam Altman, explained that the non-profit model it initially pursued is not feasible, necessitating the launch of a capped-profit subsidiary. The company retains its values, however, focusing on advancing AI responsibly and transparently with public consultation. The entry of Microsoft, Google, and AWS will undoubtedly change the market dynamics and may force OpenAI to at least reconsider its approach once again.

Moving into the AI Era – Microsoft Increases Investment in OpenAI


Microsoft’s intention to invest a further USD 10B in OpenAI – the owner of ChatGPT and DALL-E 2 – confirms what we said in Ecosystm Predicts: Cloud will be replaced by AI as the right transformation goal. Microsoft has already invested an estimated USD 3B in the company since 2019. Let’s take a look at what this means for the tech industry.

Implications for OpenAI & Microsoft

OpenAI’s tools – such as ChatGPT and the image engine DALL-E 2 – require significant processing power to operate, particularly as they move beyond beta programs and offer services at scale. In a single week in December, the company moved past 1 million users for ChatGPT alone. The company must be burning through cash at a significant rate, which means it needs substantial funding to keep the lights on, particularly as the capability of the product continues to improve and the amount of data, images, and content it trawls continues to expand. ChatGPT is being talked about as one of the most revolutionary tech capabilities of the decade – but it will all be for nothing if the company doesn’t have the resources to continue to operate!

This is huge for Microsoft! Much has already been discussed about the opportunity for Microsoft to compete with Google more effectively for search-related advertising dollars. But every product and service that Microsoft develops can be enriched and improved by ChatGPT:

  • A spreadsheet tool that automatically categorises data and extracts insights
  • A word processing tool that creates content automatically
  • A CRM that creates custom offers for every individual customer based on their current circumstances
  • A collaboration tool that gets answers to questions before they are even asked and acts on the insights and analytics that it needs to drive the right customer and business outcomes
  • A presentation tool that creates slides with compelling storylines based on the needs of specific audiences
  • LinkedIn providing the insights users need to achieve their outcomes
  • A cloud-based AI engine that can be embedded into any process or application through a simple API call (this already exists! See the sketch below.)
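To ground that last point, here is a minimal sketch using the classic openai Python SDK as it existed at the time of writing (the model choice and prompt are illustrative; supply your own API key):

    import openai

    openai.api_key = "YOUR_API_KEY"  # replace with your own key

    # One API call embeds the hosted AI engine into any process or application.
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative model choice
        prompt="Summarise this customer note in one sentence: ...",
        max_tokens=60,
    )
    print(response.choices[0].text.strip())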

How Microsoft chooses to monetise these opportunities is up to the company – but the investment certainly puts Microsoft in the box seat to monetise the AI services through their own products while also taking a cut from other ways that OpenAI monetises their services.

Impact on Microsoft’s competitors

Microsoft’s investment in OpenAI will accelerate the rate of AI development and adoption. As we move into the AI era, everything will change. New business opportunities will emerge, and traditional ones will disappear. Markets will be created and destroyed. Microsoft’s investment is an attempt for the company to end up on the right side of this equation. But the other existing (and yet to be created) AI businesses won’t just give up. The Microsoft investment will create a greater urgency for Google, Apple, and others to accelerate their AI capabilities and investments. And we will see investments in OpenAI’s competitors, such as Stability AI (which raised USD 101M in October 2022).

What will change for enterprises?

Too many businesses have put “the cloud” at the centre of their transformation strategies – as if being in the cloud were an achievement in itself. While cloud-based applications and processes are easier to transform (and sometimes cheaper to deploy and run), many businesses have simply modernised their legacy end-to-end business processes on a better platform. True transformation happens when businesses realise that their processes only existed because of the lack of human or technology capacity to treat every customer and employee as an individual, to determine their specific needs, and to deliver a custom solution for them. Not to mention the huge cost of creating unique processes for every customer! But AI does this.

AI engines have the ability to make businesses completely rethink their entire application stack. They have the ability to deliver unique outcomes for every customer. Businesses need to have AI as their transformation goal: when they put intelligence at the centre of every transformation, they will make different decisions and drive better customer and business outcomes. But once again, delivering this will take significant processing power and access to huge amounts of content and data.

The Burning Question: Who owns the outcome of AI?

In the end, ChatGPT only knows what it knows – and the content it learns from is likely to have been created by someone (ideally, as we don’t want AI to learn from bad AI!). What we don’t really understand yet are the unintended consequences of commercialising AI. Will content creators be less willing to share their content? Will we see the emergence of many more walled content gardens? Will blockchain and even NFTs emerge as ways of protecting and proving origin? Will legislation protect content creators or AI engines? If everyone is using AI to create content, will all content start to look similar (as AI will then be learning from content created by AI)? And perhaps the biggest question of all: where does the human stop and the machine start?

These questions will need answers and they are not going to be answered in advance. Whatever the answers might be, we are definitely at the beginning of the next big shift in human-technology relations. Microsoft wants to accelerate this shift. As a technology analyst, 2023 just got a lot more interesting!
