It is not just the security and integrity of these new systems that is concerning, the presenters argued in their closing keynote at the HIMSS Cybersecurity Forum in Las Vegas, but also the validity of the algorithms behind many new AI applications.
AI Integrity Concerns
Algorithms are trained using massive lakes of medical data, but the integrity of that data is a growing concern. Was the data poisoned by fake or misleading academic and medical research? Is it based upon false premises and unproven assumptions?
A double-edged sword of AI is emerging, in which large language models (LLMs) and generative pre-trained transformers (GPTs), known collectively as generative AI, are being used to produce research that is then absorbed into the training data used to develop medical AI applications. Indeed, nearly ten percent of cancer papers have been flagged as potentially fake, according to a recent report by The Scientist. If new clinical cancer treatments are based upon LLM-generated research and false premises, what does that do to cancer success rates and patient outcomes?
In fact, there is a growing awareness of, and an emerging body of knowledge about, this concern. The South China Morning Post highlighted growing AI-powered academic fraud, accusing Chinese paper mills of mass-producing fake academic research. Even China's supreme court has acknowledged the problem and called for a crackdown on paper mills.
While some of these fake research papers are being identified, it is worth noting that papers from Chinese institutions account for more than half of all retractions across 10 major academic publishers. China is not alone, however: AI-generated research is growing at an alarming rate as academics cut corners to keep up appearances and companies generate misleading or fake data to attract investment.
Growing Cybersecurity Concerns
Much of cybersecurity regulation and compliance has historically, and rather myopically, focused on the protection of confidentiality through legacy rules, many of which were written a generation ago when privacy meant something very different than it does today. More recently, cybersecurity has had to focus on systems and data availability, thanks to a deluge of ransomware attacks. The CIA triad of confidentiality, integrity, and availability forms the foundation of cybersecurity in both regulatory compliance and risk management. Data poisoning, adversarial machine learning, and a host of other AI attack methods and vectors are now forcing cybersecurity professionals to focus on the remaining leg of that triad: integrity. It is perhaps ironic that the arrival of so advanced a technology as AI could be accompanied by unintentional, or perhaps intentional, attacks against the very training data used to develop it.
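To make the integrity threat concrete, a minimal sketch of label-flipping data poisoning is shown below. This is a toy illustration, not a real medical model or a known attack on any specific system: the dataset, the 700/300 class split, and the majority-class "model" are all invented for demonstration, standing in for a real learner whose decisions shift when an attacker corrupts a slice of its training labels.

```python
# Toy illustration of label-flipping data poisoning (hypothetical data).
# 1000 training samples labeled benign (0) or malignant (1).
clean_labels = [0] * 700 + [1] * 300

def targeted_poison(labels, flips):
    """Simulate an attacker who relabels `flips` benign samples as malignant."""
    poisoned = labels[:]
    flipped = 0
    for i in range(len(poisoned)):
        if poisoned[i] == 0 and flipped < flips:
            poisoned[i] = 1
            flipped += 1
    return poisoned

def majority_class(labels):
    """A trivial stand-in for a trained model: predict the majority label."""
    return max(set(labels), key=labels.count)

print(majority_class(clean_labels))                       # 0: benign majority
print(majority_class(targeted_poison(clean_labels, 250)))  # 1: poisoned majority
```

Flipping just 25 percent of the labels is enough to invert what this toy "model" learns, without touching the model itself, which is why integrity of training data matters as much as confidentiality or availability.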
As AI deployments grow in size and complexity, so too will the number and variety of cyberattacks directed at these platforms. New offensive AI and defensive AI technologies will soon be squaring off against each other: offensive tools infiltrating networks undetected, searching for vulnerabilities in code to exploit and then chaining those vulnerabilities together, while defensive tools are trained to identify AI-based attacks and neutralize threats automatically, without human intervention. Tomorrow is, as they say, 'just another day', and it is likely we will be living one day at a time from here on in.