Shifting Perspectives: Generative AI’s Impact on Tech Leaders


Over the past year, many organisations have explored Generative AI and LLMs, with some successfully identifying, piloting, and integrating suitable use cases. As business leaders push tech teams to implement additional use cases, the repercussions on tech leaders’ roles will become more pronounced. Embracing GenAI will require a mindset reorientation, and tech leaders will see substantial impact across various ‘traditional’ domains.

AIOps and GenAI Synergy: Shaping the Future of IT Operations

When discussing AIOps adoption, there are commonly two responses: “Show me what you’ve got” or “We already have a team of Data Scientists building models”. The former usually indicates executive sponsorship without a specific business case, resulting in a lukewarm response to many pre-built AIOps solutions because they lack a defined business problem. Organisations with dedicated Data Scientist teams face a different challenge: while these teams can create impressive models, they often face pushback from the business because the solutions fail to address operational or business needs. The root cause is the Data Scientists’ limited understanding of the business context behind the data, which hinders the development of use cases that genuinely align with business needs.

The most effective approach lies in adopting an AIOps Framework. Incorporating GenAI into AIOps frameworks can enhance their effectiveness, enabling improved automation, intelligent decision-making, and streamlined operational processes within IT operations.

This allows active business involvement in defining and validating use cases, while enabling Data Scientists to focus on model building. It bridges the gap between technical expertise and business requirements, ensuring AIOps initiatives are informed by the capabilities of GenAI, address specific operational challenges, and resonate with the organisation’s goals.

The Next Frontier of IT Infrastructure

Many companies adopting GenAI are openly evaluating public cloud-based solutions like ChatGPT or Microsoft Copilot against on-premises alternatives, grappling with the trade-offs between scalability and convenience versus control and data security.

Cloud-based GenAI offers easy access to computing resources without substantial upfront investments. However, companies face challenges in relinquishing control over training data, potentially leading to inaccurate results or “AI hallucinations,” and concerns about exposing confidential data. On-premises GenAI solutions provide greater control, customisation, and enhanced data security, ensuring data privacy, but require significant hardware investments due to unexpectedly high GPU demands during both the training and inferencing stages of AI models.

Hardware companies are focusing on innovating and enhancing their offerings to meet the increasing demands of GenAI. The evolution and availability of powerful, scalable, GPU-centric hardware is essential for organisations to adopt on-premises deployments effectively, giving them the computational resources to fully unleash the potential of GenAI. Close collaboration between hardware development and AI innovation is crucial to ensure the infrastructure can support the computational demands of widespread adoption across diverse industries. Innovations in hardware architecture, such as neuromorphic computing and quantum computing, hold promise for addressing the complex computing requirements of advanced AI models.

The synchronisation between hardware innovation and GenAI demands will require technology leaders to re-skill in a discipline they have practised for years – infrastructure management.

The Rise of Event-Driven Designs in IT Architecture

IT leaders traditionally relied on three-tier architectures – presentation for user interface, application for logic and processing, and data for storage. Despite their structured approach, these architectures often lacked scalability and real-time responsiveness. The advent of microservices, containerisation, and serverless computing facilitated event-driven designs, enabling dynamic responses to real-time events and enhancing agility and scalability. Event-driven designs are a paradigm shift away from traditional approaches, decoupling components and using events as the central communication mechanism. User actions, system notifications, or data updates trigger actions across distributed services, adding flexibility to the system.
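
As a minimal illustration of that decoupling (all names here are hypothetical, not drawn from any specific product), the sketch below wires two independent services to the same event through a tiny in-process event bus in Python:

```python
# A tiny in-process event bus: publishers emit events without knowing
# which services consume them. All names are illustrative.
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # The publisher is fully decoupled from its consumers.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()

# Two independent services react to the same event without knowing of each other.
bus.subscribe("order.created", lambda e: print(f"Billing: invoice order {e['id']}"))
bus.subscribe("order.created", lambda e: print(f"Shipping: schedule order {e['id']}"))

bus.publish("order.created", {"id": 42})
```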

However, adopting event-driven designs presents challenges, particularly in higher transaction-driven workloads where the speed of serverless function calls can significantly impact architectural design. While serverless computing offers scalability and flexibility, the latency introduced by initiating and executing serverless functions may pose challenges for systems that demand rapid, real-time responses. Increasing reliance on event-driven architectures underscores the need for advancements in hardware and compute power. Transitioning from legacy architectures can also be complex and may require a phased approach, with cultural shifts demanding adjustments and comprehensive training initiatives.  

The shift to event-driven designs challenges IT Architects, whose traditional roles involved designing, planning, and overseeing complex systems. With GenAI and automation enhancing design tasks, Architects will need to transition to more strategic and visionary roles. GenAI showcases capabilities in pattern recognition, predictive analytics, and automated decision-making, promoting a symbiotic relationship with human expertise. This evolution doesn’t replace Architects but signifies a shift toward collaboration with AI-driven insights.

IT Architects need to evolve their skill set, blending technical expertise with strategic thinking and collaboration. This changing role will drive innovation, creating resilient, scalable, and responsive systems to meet the dynamic demands of the digital age.

Whether your organisation is evaluating or implementing GenAI, the need to upskill your tech team remains imperative. The evolution of AI technologies has disrupted the tech industry and the people in it. Now is the opportune moment to acquire new skills and adapt tech roles to leverage the potential of GenAI rather than being disrupted by it.

Accelerate AI Adoption: Guardrails for Effective Use


“AI Guardrails” are often used not only to keep AI programs on track, but also to accelerate AI investments. Projects and programs that fall within the guardrails should be easy to approve, govern, and manage, whereas those outside them require further review by a governance team or approval body. The concept of guardrails is familiar to many tech businesses and is often applied in areas such as cybersecurity, digital initiatives, data analytics, governance, and management.

While guidance on implementing guardrails is common, organisations often leave the task of defining their specifics, including their components and functionalities, to their AI and data teams. To assist with this, Ecosystm has surveyed some leading AI users among our customers to get their insights on the guardrails that can provide added value.

Data Security, Governance, and Bias

  • Data Assurance. Has the organisation implemented robust data collection and processing procedures to ensure data accuracy, completeness, and relevance for the purpose of the AI model? This includes addressing issues like missing values, inconsistencies, and outliers.
  • Bias Analysis. Does the organisation analyse training data for potential biases – demographic, cultural, and so on – that could lead to unfair or discriminatory outputs? (A minimal sketch of such an analysis follows this list.)
  • Bias Mitigation. Is the organisation implementing techniques like debiasing algorithms and diverse data augmentation to mitigate bias in model training?
  • Data Security. Does the organisation use strong data security measures to protect sensitive information used in training and running AI models?
  • Privacy Compliance. Is the AI opportunity compliant with relevant data privacy regulations (country and industry-specific as well as international standards) when collecting, storing, and utilising data?
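
As a minimal sketch of what the bias analysis above might look like in practice – column names, data, and the threshold are all illustrative assumptions, not taken from the survey – group-wise outcome rates can be compared like this:

```python
# Sketch: flag demographic groups whose positive-outcome rate deviates
# markedly from the overall rate. Column names and data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   1],
})

overall_rate = df["approved"].mean()
group_rates = df.groupby("gender")["approved"].mean()

for group, rate in group_rates.items():
    if abs(rate - overall_rate) > 0.2:  # illustrative threshold
        print(f"Potential bias: '{group}' rate {rate:.2f} vs overall {overall_rate:.2f}")
```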

Model Development and Explainability

  • Explainable AI. Does the model use explainable AI (XAI) techniques to understand and explain how AI models reach their decisions, fostering trust and transparency? (See the sketch after this list.)
  • Fair Algorithms. Are algorithms and models designed with fairness in mind, considering factors like equal opportunity and non-discrimination?
  • Rigorous Testing. Does the organisation conduct thorough testing and validation of AI models before deployment, ensuring they perform as intended, are robust to unexpected inputs, and avoid generating harmful outputs?
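
One widely used, model-agnostic XAI technique is permutation importance – measuring how much a model’s performance drops when each feature is shuffled. A minimal sketch with scikit-learn on synthetic data, purely as an illustration:

```python
# Sketch: explain which features drive a model's predictions using
# permutation importance (a model-agnostic XAI technique).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```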

AI Deployment and Monitoring

  • Oversight Accountability. Has the organisation established clear roles and responsibilities for human oversight throughout the AI lifecycle, ensuring human control over critical decisions and mitigation of potential harm?
  • Continuous Monitoring. Are there mechanisms to continuously monitor AI systems for performance, bias drift, and unintended consequences, addressing any issues promptly? (A drift-metric sketch follows this list.)
  • Robust Safety. Can the organisation ensure AI systems are robust and safe, able to handle errors or unexpected situations without causing harm? This includes thorough testing and validation of AI models under diverse conditions before deployment.
  • Transparency Disclosure. Is the organisation transparent with stakeholders about AI use, including its limitations, potential risks, and how decisions made by the system are reached?
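
Continuous monitoring for input drift can start very simply, by comparing the live input distribution against the training distribution. The sketch below computes the Population Stability Index (PSI), one common drift metric; the data and the 0.2 rule of thumb are illustrative assumptions:

```python
# Sketch: Population Stability Index (PSI) to detect input drift between
# a training (expected) sample and live (actual) traffic.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero / log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 10_000)
live = rng.normal(0.3, 1, 10_000)  # simulated shifted traffic

print(f"PSI = {psi(train, live):.3f}")  # rule of thumb: > 0.2 often treated as significant drift
```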

Other AI Considerations

  • Ethical Guidelines. Has the organisation developed and adhered to ethical principles for AI development and use, considering areas like privacy, fairness, accountability, and transparency?
  • Legal Compliance. Has the organisation created mechanisms to stay updated on and compliant with relevant legal and regulatory frameworks governing AI development and deployment?
  • Public Engagement. What mechanisms are in place to encourage open discussion and engage with the public regarding the use of AI, addressing concerns and building trust?
  • Social Responsibility. Has the organisation considered the environmental and social impact of AI systems, including energy consumption, ecological footprint, and potential societal consequences?

Implementing these guardrails requires a comprehensive approach that includes policy formulation, technical measures, and ongoing oversight. It might take a little longer to set up this capability, but in the mid to longer term, it will allow organisations to accelerate AI implementations and drive a culture of responsible AI use and deployment.

Preparing Your Organisation Against Cyber Attacks

Last week, the Australian Government announced that it has been monitoring persistent and increasing volumes of cyber-attacks by a foreign state-based actor on both government and private sector organisations. The Australian Cyber Security Centre (ACSC) reported that most of the attacks use existing open-source tools and packages, which the ACSC has dubbed “copy-paste compromises”. The attackers are also using other methods – spear phishing, malicious file attachments, and websites that harvest passwords – to exploit systems.
Cybercrime has been escalating in other parts of the world as well. The World Health Organisation (WHO) witnessed a dramatic increase in cyber-attacks, with scammers impersonating official WHO email addresses to target the public. The National Cyber Security Centre (NCSC) in the UK alerted the country’s educational institutions and scientific facilities to increased cyber-attacks attempting to steal coronavirus-related research. Earlier this month, the Singapore Computer Emergency Response Team (SingCERT) issued an advisory on potential phishing campaigns targeting six countries, including Singapore, that exploit government support initiatives for businesses and individuals in the wake of the COVID-19 crisis.
Such announcements are a timely reminder to government agencies and private organisations to implement the right cybersecurity measures against the backdrop of an increased attack surface. These cyber-attacks can have business impacts such as theft of business data and destruction of or damage to financial data, creating extended business interruptions. The ramifications can be far-reaching, including financial and reputational loss, compliance breaches, and potentially even legal action.

A Rise in Spear-Phishing

In Australia, we’re seeing attackers targeting internet-facing infrastructure through vulnerabilities in Citrix, Windows IIS web server, Microsoft SharePoint, and Telerik UI.
Where these attacks fail, the actors move to spear phishing. Spear phishing is most commonly an email or SMS scam targeted at a specific individual or organisation, but it can be delivered via any electronic communication medium. In these spear-phishing emails, the attacker attaches files or includes links to a variety of destinations, including:

  • Credential harvesting sites. These genuine-looking but fake websites prompt targets to enter a username and password. Once an unsuspecting target provides the credentials, they are stored in the attackers’ database and used to launch credential-based attacks against the organisation’s IT infrastructure and applications.
  • Malicious files. These email attachments look legitimate, but once downloaded they execute malware on the target device. Common file types are .doc, .docx, .xls, .xlsx, .ppt, .pptx, .jpg, .jpeg, .gif, .mpg, .mp4, .wav
  • OAuth Token Theft. OAuth is commonly used on the internet to authenticate a user to a wide variety of other platforms – for example, a website that asks users to sign in with their Facebook or Google accounts in order to use its own services. The technique exploits the OAuth tokens that one platform generates and shares with another; a faulty OAuth implementation leaves such integrations vulnerable to cyber-attack.
  • Link Shimming. This technique abuses email tracking services to launch an attack. The attackers send fake emails with valid-looking links and images inside, wired through an email tracking service. Once the user receives the email, the service tracks when it was opened, location data, the device used, links clicked, and IP addresses. The links, once clicked, can in turn lead to malicious software being stealthily downloaded onto the target system and/or lure the user into credential harvesting. (A small defensive sketch for inspecting such links follows this list.)
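
As a small defensive illustration – not part of the ACSC advisory – a shortened or tracking link can be inspected without following it, by reading only the redirect header it returns:

```python
# Sketch: reveal where a shortened/tracking link points without following
# the redirect chain. The URL below is a placeholder.
import requests

def resolve_once(url: str):
    # HEAD request with redirects disabled: we read the Location header
    # instead of landing on (and executing) whatever the link leads to.
    resp = requests.head(url, allow_redirects=False, timeout=5)
    return resp.headers.get("Location")

print(resolve_once("https://example.com/short-link"))
```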

How do you safeguard against Cyber-Attacks?

The most common vectors for such cyber-attacks are lack of user awareness and/or exploitable internet-facing systems and applications. Unpatched or out-of-support internet-facing systems, application or system misconfiguration, inadequate or poorly maintained device security controls, and weak threat detection and response programs compound the threat to your organisation.
Governments across the world are issuing advisories and guidelines to spread cybersecurity awareness and prevent threats and attacks. The Australian Signals Directorate’s ACSC ‘Essential Eight’ comprises effective mitigations for a large majority of present-day attacks, and further guidelines were published earlier this year specifically with the COVID-19 crisis in mind. The Cyber Security Agency of Singapore (CSA) promotes the ‘Go Safe Online’ campaign, which provides regular guidance and best practices on cybersecurity measures.
Ecosystm’s ongoing “Digital Priorities in the New Normal” study evaluates the impact of the COVID-19 pandemic on organisations, and how digital priorities are being initiated or aligned to adapt to the New Normal that has emerged. 41% of organisations in Asia Pacific re-evaluated cybersecurity risks and measures in the wake of the pandemic. Identity & Access Management (IDAM), Data Security, and Threat Analytics & Intelligence saw increased investments in many organisations in the region (Figure 1).

[Figure 1: Investments in Cybersecurity]
However, technology implementation has to be backed by a rigorous process that constantly evaluates the organisation’s risk positions. The following preventive measures will help you address the risks to your organisation:

  • Conduct regular user awareness training on common cyber threats
  • Conduct regular phishing tests to check user awareness level
  • Patch the internet-facing products as recommended by their vendors
  • Establish baseline security standards for applications and systems
  • Apply multi-factor authentication to access critical applications and systems – especially internet-facing and SaaS products widely used in the organisation like O365
  • Follow regular vulnerability scanning and remediation regimes
  • Conduct regular penetration testing on internet-facing applications and systems
  • Apply security settings on endpoints and internet gateways that disallow download and execution of files from unfamiliar sources
  • Maintain an active threat detection and response program that provides for intrusion detection, integrity checks, user and system behaviour monitoring, and tools to maintain visibility of potential attacks and incidents – e.g. Security Information & Event Management (SIEM) tools
  • Consider managed services such as Managed Threat Detection and Response delivered via a Security Operations Centre (SOC)
  • Maintain a robust incident management program that is reviewed and tested at least annually
  • Maintain a comprehensive backup regime – especially for critical data – including offsite/offline backups, and regular testing of backups for data integrity (a verification sketch follows this list)
  • Restrict and monitor the usage of administrative credentials
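
As a minimal sketch of the backup-integrity testing mentioned above – paths and the manifest format are hypothetical assumptions – a stored checksum manifest can be verified like this:

```python
# Sketch: verify backup integrity against a stored SHA-256 manifest.
# Paths and manifest layout are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backups(backup_dir: Path, manifest_file: Path) -> bool:
    # Manifest format assumed: {"file.db": "<hex digest>", ...}
    manifest = json.loads(manifest_file.read_text())
    ok = True
    for name, expected in manifest.items():
        if sha256_of(backup_dir / name) != expected:
            print(f"INTEGRITY FAILURE: {name}")
            ok = False
    return ok

# verify_backups(Path("/backups/2020-06-28"), Path("/backups/manifest.json"))
```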



Get more insights on the adoption of key Cybersecurity solutions and investments through our “Market Insights and Vendor Selection” research module which is live and ongoing on the Ecosystm platform.


Ecosystm Snapshot: Confidential Computing Consortium formed for data security

A group of some of the world’s largest technology companies – including Alibaba, ARM, Baidu, Google Cloud, IBM, Intel, Red Hat, Swisscom, and Tencent – have come together to form the Confidential Computing Consortium, an association established by the Linux Foundation to improve the security of data while in use, hosted at ConfidentialComputing.io.

The Confidential Computing Consortium aims to define and accelerate the adoption of Confidential Computing by bringing together a community of open-source technologies and experts.

New Cross-Industry Effort

There are many government agencies, consortiums, and software and hardware vendors working on data security, so a key question here is: who needs it?

Commenting on the CCC, Claus Mortensen, Principal Advisor at Ecosystm, said: “Whether this is really ‘needed’ is a matter of perspective, though. Many would argue that a project driven by technology giants is contrary to the grassroots approach of earlier open-source projects. However, Confidential Computing is a complex area that involves both hardware and software. Also, the stakes have become high for businesses and consumers alike, as data privacy has become the focal point. Arguably, the big tech companies need this open-source initiative more than anyone at the moment.”

How would the CCC benefit enterprises and business users?

With the increasing adoption of cloud environments, on-premises servers, edge computing, IoT, and other technologies, it is crucial for enterprises to secure their data wherever it resides. While there are established approaches to encrypting data at rest (storage) and in transit (network), it is far harder to protect data across its entire life cycle – in particular while it is being processed – and that is the gap Confidential Computing aims to close.
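
To make the ‘data in use’ gap concrete, here is a minimal sketch (using the open-source Python cryptography library purely as an illustration): data can stay encrypted at rest and in transit, but conventional computation forces it back into plaintext in ordinary memory – precisely the exposure Confidential Computing addresses with hardware-based Trusted Execution Environments.

```python
# Sketch: data can be encrypted at rest and in transit, but conventional
# computation requires plaintext in memory - the gap Confidential
# Computing (hardware TEEs) aims to close.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"salary=120000")  # protected at rest / in transit

# To compute on it, we must decrypt: the plaintext now sits in ordinary
# RAM, visible to a sufficiently privileged attacker on the same host.
plaintext = f.decrypt(ciphertext)
print(plaintext)
```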

Mortensen said, “When it comes to data security, the actual computing part of the data life-cycle is arguably the weakest link, and with the recent increased focus on privacy among the public, this ‘weakness’ could become a stumbling block in the further development and uptake of cloud computing.” He added that doing it as part of an open-source initiative makes sense, “not only because the open-source approach is a good development and collaboration environment – but, crucially, it also gives the initiative an air of openness and trustworthiness.”

To drive the initiative, members plan to make a series of open-source project contributions to the Confidential Computing Consortium:

  • Intel will contribute its Software Guard Extensions (SGX) SDK to the project. SGX provides hardware-based, memory-level data protection: data and operations are encrypted in memory and isolated from the rest of the system’s resources and software components.
  • Microsoft will share the Open Enclave SDK to develop broader industry collaboration and ensure a truly open development approach. The open-source framework allows developers to build Trusted Execution Environment (TEE) applications.
  • Red Hat will provide Enarx – an open-source project aimed at reducing the number of layers (application, kernel, container engine, bootloader) in a running workload environment.

Mortensen said, “like other open-source initiatives, this would allow businesses to contribute and further develop Confidential Computing. This, in turn, can ensure further uptake and the development of new use cases for the technology.”

How Data Security Efforts will shape up

Confidential Computing is part of a bigger approach to privacy and data, and we may see related developments around AI, distributed computing, and Big Data analysis.

“Initiatives that can be seen as part of the same bucket include Google’s ‘Federated Learning’, where AI learning on data is distributed privately. This allows Google to apply AI to data on users’ devices without Google actually seeing the data. The data remains on the user’s device, and all that is sent back to Google is the input or learning that the data has provided,” said Mortensen.

Consequently, Confidential Computing looks set to ease data security concerns, and the consortium expects its work to give users greater control over, and transparency into, their data.

Let us know your opinion on the Confidential Computing Consortium in the comments.

Conversations: Cloud, Innovation and Security

In June 2019, Ecosystm conducted Roundtables in Sydney and Melbourne, supported by our partner Delphix. IT and business executives from data-intensive industries in Australia came together to discuss how to overcome data-driven constraints and enable innovation by effectively managing and distributing data in a secure environment within their organisations. These are the salient points that emerged from the sessions.

In the course of my facilitation, I realised that the key focus of the discussions was on how organisations handle their biggest asset – data – and manage all the moving parts, whether in large and diverse settings or in very traditional enterprises transitioning into data-driven “New Age” businesses.

The Impossible Tech Triangle

Data has never been as prolific or strategic as it is today. Multiple sources of data generation and technologically savvy customers have driven a data explosion. Simultaneously, demand for personalised services and data insights has pushed most industries to use that data for increased intelligence. The biggest risk organisations face – the one CSOs and business leaders lose sleep over – is significant disruption due to compromise. Moreover, the complexity of technology platforms means that no organisation is 100% certain that it has got it right and is managing the risk effectively.

Does this give rise to a situation similar to that in Marketing and Advertising, where the triangle of price, quality, and speed appears unattainable to many organisations?

[Figure: The Impossible Tech Triangle]

Key takeaways from the sessions

#1 Drivers & Challenges of Cloud adoption

The fact that discussions around agility and innovation can happen with the intensity they do today is because organisations have embraced Cloud infrastructure, application development platforms, and SaaS solutions. Every organisation’s Cloud journey is unique, driven by its discrete set of requirements. Organisations choosing Cloud may not have the resources to build in-house systems – or may migrate for reasons such as cost, productivity, cross-border collaboration, or compliance.

When embarking on a Cloud journey, it is important to have a clear roadmap that involves instilling a Cloud-First culture and training the IT organisation in the right skills for the environment. Concerns around costs, security, and data ownership are still synonymous with Cloud, so organisations should evaluate each workload from a cost angle before jumping on Cloud. It is important to recognise when a Cloud option will not work out financially and to have the right cost controls in place, because organisations that do a straight like-for-like resource swap onto Cloud are likely to end up paying more.

Data ownership and data residency can also be challenging, especially from a compliance standpoint. For some, the biggest challenge is to know the status of their data residency. The challenges are not just around legacy systems but also in terms of defining a data strategy that can deliver the desired outcomes and managing risk effectively without ruining the opportunities and rewards that data utilisation can bring. Cloud transformation projects bring in data from multiple and disparate sources. A clear data strategy should manage the data through its entire lifecycle and consider aspects such as how the data is captured, stored, shared, and governed.

#2 Perception on Public Cloud Security

While security remains a key concern when it comes to Cloud adoption, Cloud is often regarded as a more secure option than on-premises infrastructure. Cloud providers have a dedicated security focus, constantly upgrade their security capabilities in response to newer threats, and evolve their partner ecosystems. There is also better traceability with Cloud, as every virtual activity can be tracked, monitored, and logged.

However, the Cloud is only as secure as an organisation makes it. The Cloud infrastructure may be secure, but the responsibility for securing applications lies with the organisation. The perception that there is no need to supplement public Cloud security features can have disastrous outcomes. It is important to supplement the Cloud provider’s security with event-driven security measures within an organisation’s applications and cloud infrastructure. As developers increasingly leverage APIs, this focus on security – alongside functionality and agility – should be emphasised. Organisations should remember that security is a shared responsibility between the Cloud provider and the organisation.

#3 Viewing Security as a Business Risk – not IT Risk

The Executive Management and the Board may be involved in an organisation’s Security strategy and GRC policies, but a consistent challenge Security teams face is convincing the Board and Senior Management of the need for ongoing focus and investment in cybersecurity measures. Often, these investments are isolated from the organisation’s KPIs and are harder to quantify, yet Security breaches have real financial and reputational impacts. Mature organisations are beginning to view Security as a business risk and not a matter of IT risk alone. One reason Senior Management and Boards do not grasp the full impact of data breaches is that CISOs do not translate the implications into business terms. It is their responsibility to secure senior management buy-in, so that Security becomes part of the Strategy and the associated costs get written into the cost of doing business.

Training sessions that educate the stakeholders on the basics of the risks associated with using knowledge systems can help. Simulation of actual cybersecurity events and scenario testing can bring home the operational issues around recovery, assessment and containment and it is important to involve senior stakeholders in these exercises. However, eventually the role of the CSO will evolve. It will become a business role and traverse Security across the entire organisation – physical as well as cybersecurity. This is when organisations will truly recognise investment in Security as a business requirement.

#4 Moving away from Compliance-driven Security Practices

Several organisations treat Security as part of their compliance exercise, with compliance built into their organisational risk management programmes. Often, security practices are portrayed as a product differentiator and used as a marketing tool. An organisation’s Security strategy should be more robust than that, and should not be focused solely on ticking the right compliance boxes.

A focus on compliance often means that Security teams continually create policies and call out non-compliance rather than proactively contribute to a secure environment. Applications teams do not always have the right skills to manage Security, but the Security team’s role should not be to tell them what they are doing wrong and write copious policies, procedures, and standards, expecting others to execute on the recommendations. The focus should be on automated, policy-driven remediation that does not restrict the Applications team per se, but acts on unsafe practices when they are detected. The Security team’s role is to work on the implementation and set up Security practices that help the Applications team do what they do best.

#5 Formulating the Right Incident Response Policy

In the Ecosystm Cybersecurity study, 73% of global organisations think that a data breach is inevitable – organisations largely believe that “it is not about if, but when”. About 50% of global organisations have a cyber insurance policy or are evaluating one, and this trend will only rise. Policy-driven incident response measures are an absolute requirement in every enterprise today; yet, to a large extent, even incident response policies are compliance-driven. 65% of organisations appear to be satisfied with their current breach-handling processes, but it is important to keep evolving the process in the face of new threats.

Organisations should also be aware of the need for people management during an incident. Policies might be clear and adhered to, but it is substantially harder to train the stakeholders involved on how they will handle the breach emotionally. It extends to how an organisation manages their welfare both during an incident and long after the incident response has been closed off.

Over the two sessions, we explored how to achieve the ‘unattainable triangle’ of Cloud, Agility and Security.  What I found interesting – yet unsurprising – is that discussions were heavily focussed on the role of Security. On the one hand, there is the challenge of the current threat landscape. On the other hand, Security teams are required to deliver a Cloud and an agile development strategy in tandem. This disconnect ultimately highlights the need for Security and data management to be embedded and managed from the very start, and not as an afterthought.

Here are some of the moments from the sessions –

[Photo gallery: Ecosystm Roundtables]

Ecosystm Snapshot: Singapore refreshes Personal Data Protection Act framework

At the 7th Personal Data Protection Seminar, Minister for Communications and Information S. Iswaran announced a framework to bolster Singapore’s digital economy and drive the country’s vision of becoming a regional data protection hub.

The Personal Data Protection Commission (PDPC), which oversees the country’s Personal Data Protection Act (PDPA), has developed a new framework to better support organisations in the hiring and training requirements of Data Protection Officers.

Why the Need?

The PDPA has been in force for some time; the new framework is being introduced to sharpen the focus of both government and organisations on data privacy. According to Ecosystm’s expert on GDPR and Data Privacy, Claus Mortensen, “The initiative reflects the difference between ‘theory’ and ‘practice’ when it comes to data security. The PDPC is not making changes to the present regulatory framework, but they are putting together a program and guidelines for how companies can apply and abide by the present regulatory framework.”

Ensuring PDPA framework compliance

The PDPC plans to collaborate with the National Trades Union Congress (NTUC), the Employment and Employability Institute (e2i), and NTUC LearningHub to create a year-long pilot training program that will train at least 500 data protection officers in its first intake. These Data Protection Officers will help manage the complex and highly sensitive task of governing data flows.

To secure cross-border data flows, the Infocomm Media Development Authority (IMDA) has been appointed as Singapore’s Accountability Agent (AA) for the Asia-Pacific Economic Cooperation (APEC) Cross Border Privacy Rules (CBPR) and Privacy Recognition for Processors (PRP) Systems certifications. IMDA will allow Singaporean organisations to be certified under the APEC CBPR and PRP Systems for accountable data transfers.

“The PDPA requires a data protection officer to be appointed in every organisation. This framework is focused on educating and certifying these officers. However, it mostly makes it easier for slightly larger companies who can afford to send employees on longer training programs or who are able to hire people who have taken the certificates. Smaller organisations – such as start-ups – would benefit more from detailed guidelines and from on-premises guidance. Establishing a framework for such services could be the next area of focus for the PDPC,” said Mortensen.

Legislation for data handling and exchange practices

Private data has become both a growing target for hackers and an international commodity. Attackers constantly mine cyberspace for leaks or financial information they can exploit to their advantage.

“Managing sensitive data is notoriously complicated – especially for ‘legacy’ companies that still have or rely on non-digitised data. Even when all data is digital, employees may have copies on their PCs, they may have partial backups on removable media, some data may need to be shared with sub-contractors, moved around between cloud providers, etc. This can make it very difficult to map out PDPA-relevant data in the organisation. Even when the data has been mapped, it can be difficult to ensure that all business and data processes are compliant. This is where on-premises guidance can make a difference,” said Mortensen. “While the government clearly aims the new framework initiatives at helping SMEs, it will help further protect consumers’ sensitive data.”

Importance of Cyber Security

A business harnessing digital technology cannot afford to gamble with sensitive data amid rising cyber-attacks. The government is taking the initiative by forming guidelines and regulations to prevent cyber-attacks, but it is the responsibility of businesses to have a cybersecurity strategy in place to prevent a breach. If a business falls victim to hacking, it is perceived as a failure of the company.

The government passed the PDPA and enforces compliance to ensure that businesses understand the importance of cybersecurity. Every business, organisation, and academic institution must therefore ensure compliance with the data protection act and adopt security best practices to safeguard its customers’ data.

The GBP 183 million fine that accentuates the false economy of IT cutbacks

It’s been a tough couple of years for British Airways (BA). 2017 saw an IT failure that resulted in 726 flights being cancelled and 75,000 passengers being stranded on a busy holiday weekend. The fallout from this incident was around GBP 80 million in compensation paid to passengers and an almost immediate 4% drop in the share price.

Then, in 2018, the British carrier fell victim to a security breach in which around 500,000 people had their personal data, including their credit card details, stolen by hackers.

It is for this breach that British Airways has now been told it will be fined GBP 183 million. The fine was announced by the British Information Commissioner’s Office (ICO), and if it stands, it will be the biggest fine ever imposed for a data breach in the UK or elsewhere in Europe – dwarfing all fines issued up until now.

The biggest fines issued by the ICO in the UK were to Facebook and Equifax – both of whom were fined GBP 500,000.

Facebook’s fine was for the notorious Cambridge Analytica data scandal, where the information of 87 million Facebook users was shared with the political consultancy through a quiz app that collected data from participants as well as their friends without their consent.

Equifax Ltd.’s fine was for something more similar to the British Airways case: in May 2017, hackers stole personal data – including names, dates of birth, addresses, passwords, driving licences, and financial details – of 15 million UK customers. In its ruling, the ICO said that Equifax had failed to take appropriate steps to protect this sensitive data despite warnings from the US government.

But these fines were all from before the General Data Protection Regulation (GDPR) came into effect. Now, under the new rules, fines can be as high as EUR 10,000,000 or 2% of total global annual turnover for the previous year (whichever is higher) for lesser data breach incidents. For significant data breaches and non-compliance, the fines can be double that: EUR 20,000,000 or 4% of total global annual turnover (whichever is higher).

British Airways’ GBP 183 million fine is the equivalent of 1.5% of its turnover in 2017. Had the ICO gone for the maximum limit, the fine could have been as much as GBP 489 million.
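
The arithmetic behind those figures can be checked from the article’s own numbers:

```python
# Back out BA's implied 2017 turnover from the article's figures and
# check the stated maximum fine. Figures in GBP, rounded.
fine = 183_000_000          # ICO's announced fine
fine_share = 0.015          # 1.5% of 2017 turnover, per the article

turnover = fine / fine_share    # ~GBP 12.2 billion implied turnover
max_fine = 0.04 * turnover      # GDPR ceiling: 4% of global turnover

print(f"Implied turnover: GBP {turnover / 1e9:.1f}bn")
print(f"Maximum possible fine: GBP {max_fine / 1e6:.0f}m")  # ~GBP 488m, matching the ~489m cited
```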

A lot can still happen before the fine is finally issued, and BA is likely to dispute the decision in court (Willie Walsh, the CEO of BA’s parent company, IAG, has said they will). But even if the fine ends up being significantly lower, there are obvious lessons to be learned from this case:

  1. “People’s personal data is just that—personal.” These were the words spoken by Elizabeth Denham, the ICO Information Commissioner, in response to media enquiries on the fine. In other words: companies will need to take data privacy extremely seriously from now on or expect very hefty fines.

  2. Attitude matters. British Airways chairman and chief executive, Alex Cruz, said in a statement that BA was “…surprised and disappointed in this initial finding from the ICO. British Airways responded quickly to a criminal act to steal customers’ data. We have found no evidence of fraud/fraudulent activity on accounts linked to the theft. We apologize to our customers for any inconvenience this event caused.”

Although we can’t know this for certain, the response may reflect what could be described as an “attitude problem” in how BA has been dealing with the ICO: a whiff of arrogance, blaming the breach on criminal hackers and failing to accept any blame or real responsibility for the incident.

We know from other GDPR cases in other countries that any failure to cooperate with the authorities may result in larger fines. Full transparency, full cooperation and accepting responsibility are the way to go. If it’s your data, then the buck stops with you.

  3. The risks associated with IT cutbacks just went through the roof. The operating losses following the financial crisis of 2008 made the carrier slash its IT budgets (as well as other “expenditures”). Airlines, in general, are notorious for under-spending on IT, but when that is combined with further cutbacks on IT expenditure, disaster may ensue. BA’s recent IT-related woes may or may not be a direct result of under-spending on IT, but in the court of public opinion, the connection has been made.

In any case, with the new fine regime under the GDPR, the risks associated with under-spending on IT – and on IT security in particular – have now gotten substantially bigger.

More than ever, the notion that IT is an expenditure that can be cut back on is a false economy.
