AI Global | 01-07-2025

Mandula Moments: Risks and opportunities in an AI-driven world (Part 2)

Continuation from Part 1.

Risks and Threats in a Data-Saturated World

The other side of the data boom is an array of growing risks. As data volumes and AI capabilities soar, concerns about privacy, security, and the ethical use of data are mounting. If not properly managed, the data deluge could lead to serious harm for individuals, organizations, and society. Key risks and challenges include:

Data Privacy and Surveillance: With record amounts of personal data being collected – from online activities, smartphones, IoT sensors, and more – individuals face an erosion of privacy. Much of this data is gathered without users’ explicit consent or clear understanding. People routinely “trade” their personal data for the convenience of free apps and services, usually without monetary benefit or full awareness of how their information will be used. This raises the question: who owns personal data, and who has the right to control its use? Currently, corporations that collect and store user data often assume de facto ownership and can profit from it (for example, by selling detailed profiles to advertisers), while individuals have little say. This imbalance has spurred policy responses like the EU’s General Data Protection Regulation (GDPR), which asserts that individuals own the personal data about themselves and grants them rights of consent, access, and deletion. Even so, the temptation for companies and governments to exploit data for surveillance is high. There is a real risk of a “Big Brother” scenario in which ubiquitous data collection enables constant monitoring of citizens’ behaviors. Striking the balance between leveraging data for innovation and protecting fundamental privacy rights is a core challenge of our era.

Cybersecurity and Data Breaches: The more data that is collected and stored, the more attractive a target it becomes for cybercriminals. Recent years have seen an epidemic of data breaches, hacks, and ransomware attacks affecting businesses, governments, and millions of individuals. In 2023, the number of reported data compromises hit an all-time high – over 3,200 breaches in the U.S. alone, impacting 353 million individuals (with some people counted more than once when hit by several breaches). Globally, billions of personal records are exposed each year through hacking incidents. Cybercrime has become a lucrative industry, costing the world an estimated $600 billion annually (nearly 1% of global GDP) once stolen funds, fraud, ransomware payouts, and business losses from disruptions are factored in. Sophisticated hackers, including state-sponsored groups, are constantly probing for vulnerabilities to steal sensitive information – whether financial records, health data, or intellectual property. The risk of data theft is thus omnipresent. The question “Who will steal it?” has an unfortunate answer: numerous actors are attempting to steal valuable data at any given moment, from lone hackers and organized cybercrime gangs to insider threats and hostile nation-states. High-profile attacks exploiting software supply chains, such as the 2023 breach, have compromised hundreds of companies at once. These incidents erode public trust and can cause severe financial and reputational damage to organizations. They also illustrate why data security must be a top priority in the age of big data. Robust cybersecurity measures, encryption, and incident response plans are essential to fend off data thieves, as the sketch below illustrates. Additionally, as more critical infrastructure goes digital, cyberattacks pose not only a privacy risk but a national security threat (consider attacks on power grids, hospitals, or financial systems). The challenge for business and government leaders is to protect an ever-expanding attack surface as data proliferates.
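To make the encryption point concrete, here is a minimal sketch in Python using the open-source cryptography package’s Fernet recipe (symmetric, authenticated encryption). The sample record and the inline key handling are illustrative assumptions only, not a production design; a real system would fetch keys from a key-management service and layer on access controls and monitoring.

```python
# Minimal sketch: protecting a sensitive record "at rest" with authenticated
# symmetric encryption (Fernet, from the Python "cryptography" package).
from cryptography.fernet import Fernet, InvalidToken

# Illustrative only: real systems load keys from a key-management service,
# never generate and hold them inline like this.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"name": "Jane Doe", "account": "12345678"}'  # hypothetical PII

token = fernet.encrypt(record)    # ciphertext is safe to store or back up
restored = fernet.decrypt(token)  # requires the key; tampering raises InvalidToken
assert restored == record

# A corrupted or altered ciphertext is detected rather than silently accepted.
tampered = token[:10] + bytes([token[10] ^ 1]) + token[11:]
try:
    fernet.decrypt(tampered)
except InvalidToken:
    print("tampering detected")
```

The design point is that authenticated encryption buys both confidentiality and integrity, but only as long as the keys themselves are protected, which is why key management belongs in the same conversation as breach response.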
Bias and Discrimination: The rise of big data and AI carries the risk of amplifying social biases and inequities if not carefully managed. AI algorithms learn from historical data – and if those data reflect societal biases or skewed samples, the algorithms can reproduce or even reinforce unfair discrimination. Examples have already been documented in domains such as lending, hiring, and criminal justice, where AI systems exhibited bias against certain racial or demographic groups due to biased training data. As researchers note, algorithms can only be as fair as the data fed into them, and biased datasets will yield biased results. This is particularly concerning in high-stakes applications like healthcare and finance, where inequities in the data (e.g., underrepresentation of minority populations in clinical trials or credit histories) can lead to unequal outcomes. Without deliberate efforts to audit and correct bias, big data might perpetuate or even worsen existing disparities. Those harmed by such biases are often already-marginalized groups, reinforcing a cycle of disadvantage. Moreover, the lack of transparency in AI (“black box” algorithms) can make these issues difficult to detect and address. Ensuring ethical AI and fair data practices is thus a critical risk area as data use grows. This may involve techniques to debias datasets, stricter regulation of algorithmic decisions, and the inclusion of diverse perspectives in AI development. If done right, AI can be a force for greater fairness (for instance, by highlighting and correcting human biases); if done poorly, it could entrench inequality at scale.

Information Security and Misinformation: An adjacent risk in the era of massive data is the spread of misinformation and manipulation. With so much data (and “fake data”) circulating, it has become easier for malicious actors to generate and spread false information. AI can create realistic fake images, videos (deepfakes), and news at scale, which can be used to mislead the public or sway opinions. Social media platforms, powered by algorithms processing vast amounts of user data, can inadvertently amplify false or sensational content, undermining informed public discourse. Experts have warned that the commodification of personal data and online experience enables targeted propaganda and disinformation campaigns that can destabilize societies. While misinformation is not solely a data-volume problem, the sheer scale and speed of data flows make it harder to distinguish truth from falsehood. This poses a risk of social harm, from public health misinformation to election interference. Combating it will require new forms of content verification, digital literacy efforts, and possibly tighter oversight of how data-driven platforms operate.

To be continued in Part 3.

Tags: AI, cybersecurity, data privacy, fraud, Mark Mandula