Should we be worried

About state-sponsored attacks against hospitals?

Security and the Board Need to Speak the Same Language

How security leaders speak to their C-Suite and Board can make all the difference

The Rising Threat of Offensive AI

Can we trust what we see, hear and are told?

Who'd want to be a CISO?

Challenging job, but increasingly well paid

Medical Tourism - Growing in Popularity

Safe, fun, and much, MUCH more cost-effective

The Changing Face of the Security Leader

The role is changing, but what does the future hold?

Cyber Risk Insurance Won't Save Your Reputation

Be careful what you purchase and for what reason

2023 Predictions

As 2022 draws to a close, what can we learn from a year marked by Russia's invasion of Ukraine, by crippling cyber and kinetic attacks against critical infrastructure not just in Ukraine but across the world, and by a continued rise in cyber attacks and ransomware globally? A year in which Russia, China and Iran all became victims of cyber attacks, perhaps reaping the seeds each of them has sown in the past. And a year which saw the costs of cyber-crime move well above their $6 trillion 2021 level, even though the year is not yet over and the full costs not yet counted.

Can and should we extrapolate the trends identified over the past year and claim that they will continue on their upward path, or is the cyber threat landscape more complicated than we generally assume it to be?

With both Russia and China, the two greatest perpetrators of cyber-crime, increasingly isolated from the rest of the world, and with growing domestic dissent in China, Iran and Russia, are geopolitical moves against autocrats likely to change the world's three most egregious offensive cyber actors?


2022 - A Year in Reflection

In 2022 we saw a massive collapse and re-alignment of organized crime groups following the Russian invasion of Ukraine in February. Prior to the war, these groups, consisting of perpetrators located right across the Commonwealth of Independent States (CIS), were united predominantly by their use of the Russian language. During the invasion, Ukrainian and other non-Russian members pulled out of many of these Russian-led groups, and some even turned on their former gangs, exposing their innermost secrets and the identities of their leaders. This breakup caused a dip in attacks in March and April, and the groups were further hampered by many global ISPs withdrawing from business in Russia, dramatically reducing the Internet bandwidth available for them to use.

Since the onset of war, many of the leaders of these crime gangs, who operate under the eye of the Russian Mafia, which in turn operates with impunity under the oligarchs and ultimately the Kremlin, have quit the profession, scared that Russia will collapse along with Putin’s protective umbrella. Many are worried that they might be identified, caught, and prosecuted. Most have taken their millions in ill-gotten gains and run, going deep underground. This has left a power vacuum in Russian organized crime syndicates, where young, fearless, and ruthless new leaders have taken over, leading to reckless attacks including the targeting of healthcare providers. A ‘live today, die tomorrow’, ‘get rich quick’ mentality now prevails, as many of those involved are scared of being conscripted into the Russian Army and sent off to die in Ukraine. Some of these cybercriminals have even acted upon their disdain for the Putin dictatorship by launching cyberattacks against the Kremlin itself, a very risky proposition indeed.

At the same time, the affiliates of many of these ransomware-as-a-service (RaaS) groups have gone rogue, distancing themselves from Russia and from RaaS providers. With re-alignment complete, the gloves have been taken off and affiliates are hunting freely by themselves and are prepared to take much higher risks than previously allowed. Again, this includes the targeting of healthcare and other national critical infrastructure industries.

Unsurprisingly this has piqued the attention of the FBI, Homeland Security, and other law enforcement groups. It's also one of the main reasons behind the recent FBI warning about one of these groups in particular, Daixin. This group is widely credited with the September/October ransomware attack against CommonSpirit Health, the second-largest US healthcare provider. The attack impacted hundreds of provider facilities across most US states, denying timely care to millions of US citizens.

If we thought that the threat landscape was bad in 2021, 2022 has turned into the wild west, with rogue gunslingers on every corner and dead bodies mounting up on every street! For an easy target like healthcare, prospects don’t look good. With its collection of out-of-date weapons, no money to buy new tools, and very small, ill-equipped teams, it stands almost no chance of defending against an increasingly out-of-control and rabid gang of adversaries.

But the Russian and other CIS gangs aren’t the only threats that healthcare needs to be concerned about. Increased offensive activity against providers has been seen coming from both China and Iran, with Iran recently appearing to side with Putinist forces. Facing threats of further sanctions from Europe and the USA, and rising internal revolt against the theocratic dictatorship that runs the country, Iranian forces are on the offensive. So too is China: now that Xi has unchecked power over the CCP and the country for life, it is likely that China’s massive PLA cyber army will launch new offensives against western critical infrastructure providers, as China increasingly uses cyber weaponry against its perceived enemies.

Any healthcare CEOs that still have their heads buried in the sand, thinking that a cyberattack is unlikely to impact their hospitals, had better find a deep cave in which to hide, because the noise of collapse in 2023 will be omnipresent.

"We are seeing 2 to 3 ransomware attacks against US healthcare providers each and every day at the moment,” claimed Richard Staynings, Cylera's Chief Security Strategist, in a recent interview. “That is not about to go down any time soon, so long as hospital boards and CEOs keep paying the ransoms. Instead of paying the criminals holding them to extortion, they need to invest properly in security and IT, which is totally underfunded. This is especially so if you analyze the risks or compare the healthcare industry with other industries such as financial services. It’s somewhat analogous to crime victims paying protection money to the mafia, while refusing to properly fund the police or the FBI," he added.

Putting lipstick on a pig


"I wish that I had a more positive prediction for 2023, but that would be putting lipstick on the pig," claimed Staynings.

Are we doing a better job today of defending against attacks than we were a few years ago? Many cybersecurity leaders would say that we are but that the goal posts have moved. Some health systems have prioritized cybersecurity, but most have a long way to go. And that comes back to governance, leadership, and the prioritization of cybersecurity. Most cybersecurity leaders would agree that it's not where it needs to be right now.

Nor, unfortunately, is the level of cyber protection being provided by Homeland Security, the FBI and others. Governments are never quick to act, but plainly, expecting small critical access facilities to protect themselves against highly sophisticated nation-state actors and organized crime syndicates is ridiculous.

As Staynings puts it, "It’s not even analogous to David and Goliath. It’s more akin to a lone Maasai warrior armed with a spear going up against an entire regiment armed with machine guns. The Maasai warrior stands almost no chance at all!"

The Rising Threat of Offensive AI



Various forms of artificial intelligence (AI) look set to transform medicine and the delivery of healthcare services as more and more potential uses are recognized, while adoption rates for AI continue to climb.

Machine Learning (ML) has revolutionized clinical decision support over the past decade, as has AI enhancement of radiological images, allowing the use of safer low-dose radiation scans. But AI, no matter in which form, requires massive amounts of data for modelling, training, and mining. While much of that data is de-identified, some cannot be, as training can sometimes require the aggregation of each patient's full set of medical tests and records; the patient must be known to the AI in order for it to learn and model correctly.

But healthcare data is valuable. It's valuable to hackers, who can ransom it back to data custodians or sell that PII and PHI data on the darknet. It's valuable to nation-states such as China for their own data modelling and AI training. Furthermore, medical data is highly regulated, and so breaches are subject to fines, punitive damages, restitution, and corrective action plans.

AI models are highly valuable and are now the modern-day equivalent of the 1960s 'race to the moon' between the US and USSR; only the competitors today are the USA and the PRC. Consequently, China has been very aggressive in 'acquiring' whatever research it can to jump-start or enhance its own AI development programs. This has included insider theft by visiting professors and foreign students at western universities, and targeted cyberattacks from the outside. China's five-year plan is to surpass the west in its AI capabilities, not just for medical applications but also for military defence. So for both countries and others, AI is a strategic imperative. It is perhaps ironic that AI is now being used to bypass network defenses to steal, among other things, AI training data, as will be explained shortly.

AI in Cybercrime

AI may become the future weapon of choice for cybercriminals. Its unique abilities to mutate as it learns about its environment, and to masquerade as a valid user and legitimate network traffic, allow malware to go undetected across the network, bypassing all of our existing cyber defensive tools. Even the best NIDS, AMP and XDR tools are rendered impotent by AI's stealthiness.

AI can be particularly adept when used in phishing attempts. AI understands context and can insert itself into existing email threads. By employing natural language processing to mimic the language and writing style of users in a thread, it can trick other users into opening malware-laden attachments or clicking on malicious links. Unless an organization has sandboxing in place for attachments and links to external websites, AI-based phishing will have a high rate of success. But things don't stop there.

Offensive AI has been used to weaponize existing malware Trojans, including the Emotet banking Trojan, which was recently AI-enabled. It can self-propagate to spread laterally across a network, and contains a password list to brute-force its way into systems as it goes. Its highly extensible framework can accept new modules for even more nefarious purposes, including ransomware and other availability attacks. Regulation requires providers to protect the confidentiality, integrity and availability of protected health information and systems, but in healthcare availability is everything. When health IT and IoT systems go down, so does a provider's ability to render care to patients, because today's highly digital health industry is dependent upon those IT and IoT systems.


Offensive AI can also be used to execute integrity-based attacks against healthcare, and this is where the danger really lies. AI blends into the background, using APT techniques to learn the dominant communication channels and seamlessly merging with routine activity to disguise itself amid the noise. AI can change medical records, altering diagnoses, changing blood types, or removing patient allergies, all without raising an alarm.

It's one thing for physicians not to have access to medical records, but to have access to medical records whose data has been maliciously altered is another. It's also far more dangerous if the wrong treatment is then prescribed based upon that bad data. This becomes a major clinical risk and patient safety issue. It also erodes trust in the HIT and HIoT systems that clinicians rely upon, and may eventually lead to physicians questioning the data in front of them or having to second-guess that information.
  • Can I trust a medical record?
  • Can I trust a medical device?
It also raises some major questions around medical liability.

A 2019 research study at Ben-Gurion University of the Negev was able to compromise the integrity of a radiological image by using Deep Learning (DL) to insert fake nodules into an image between the CT scanner and the PACS system, or to remove real nodules from a CT image. The research wasn't purely theoretical either: it used a blind study to prove its thesis that radiologists could be fooled by AI-altered images.


The study was able to trick three skilled radiologists into misdiagnosing conditions nearly every time using real CT lung scans, 70 of which were altered by their malware. In the case of scans with fabricated cancerous nodules, the radiologists diagnosed cancer 99 percent of the time. In cases where the malware removed real cancerous nodules from scans, the radiologists said those patients were healthy 94 percent of the time.

The implication of such a powerful tool if used maliciously is obviously huge, resulting in cancers remaining undiagnosed or patients being needlessly misled and perhaps operated on.
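One basic mitigation against this kind of in-transit tampering is cryptographic integrity checking of images between the scanner and the PACS. The sketch below is purely illustrative and assumes a pre-shared secret key; real DICOM deployments would use TLS and digital signatures rather than this simplified HMAC scheme.

```python
import hmac
import hashlib

SHARED_KEY = b"example-shared-secret"  # hypothetical key, for illustration only

def sign_image(image_bytes: bytes) -> str:
    """Scanner side: compute an HMAC-SHA256 tag over the raw image bytes."""
    return hmac.new(SHARED_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """PACS side: recompute the tag and compare in constant time."""
    expected = hmac.new(SHARED_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

scan = b"\x00\x01\x02\x03"  # stand-in for CT pixel data
tag = sign_image(scan)
assert verify_image(scan, tag)                 # untouched image verifies
assert not verify_image(scan + b"\xff", tag)   # any alteration is detected
```

A tag check like this would not stop an attacker who compromises the scanner itself, but it does detect modification of the image anywhere on the path between acquisition and archive.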

In the run-up to the 2016 presidential election, a hoarse-sounding Hillary Clinton decided to share a recent CT image with the media to prove that she was suffering from pneumonia rather than a long-term health concern such as cancer. Had her chest image been altered to indicate cancer, it's likely that she would have been forced to withdraw from the election. Either way, an American presidential election could have been compromised, and AI potentially used to alter its outcome. AI could thus become a powerful weapon for nefarious nation-states wishing to destabilize a country and undermine democracy. The same tools could one day be used by radical domestic groups to change the outcome of an election.

Deepfakes

The rising capabilities and use of Deepfakes for Business Email Compromise (BEC) whether using audio or video, will render humans unable to differentiate between true and false, real and fake, legitimate and illegitimate. "Was that really the CEO I just had an interactive phone conversation with telling me to wire money overseas?"


But Deepfakes could be very dangerous from a national security perspective also, domestically and internationally. "Did the President really say that on TV?"

Compared to Ronald Reagan's 1984 hot mic gaffe about bombing Russia, a deepfake might be much more convincing and concerning, as the majority of people would likely believe what they saw and heard. After all, much of the US population believes what it reads on social media or on news sites that constantly fail fact-checking. But the US population is not alone in being easily led, as we have seen in Russia, where most of the population has been found to believe the state propaganda presented on TV about Putin's war against Nazis in Ukraine.

Cognitively, we are not prepared for deepfakes and are not preconditioned to critically evaluate what we see and hear in the same way that we may challenge a photo in a magazine that may have been photoshopped. AI obviously has massive and as-yet untapped PsyOps (psychological operations) capabilities for the CIA, FSB, MSS, and other clandestine agencies.

As these and other 'Offensive AI' tools develop and become more widespread, it is likely that cybersecurity practitioners will need to pivot towards greater use of 'Defensive AI' tools: tools that can recognize an AI-based attack and move quickly, far quicker than a human, to block it. Indeed, it is likely that future AI-powered assaults will far outpace human response teams, and that near-nanosecond responses will be needed to prevent the pandemic-like spread of malware across the network.
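At its core, much of this defensive tooling amounts to machine-speed anomaly detection: learn a baseline of normal behaviour, then automatically flag and block sharp deviations. The following is only a toy statistical sketch of that idea, not real defensive AI; the traffic figures and the three-sigma threshold are assumptions for illustration.

```python
import statistics

# Hypothetical per-minute byte counts observed on a hospital network segment.
baseline = [1200, 1150, 1300, 1250, 1180, 1220, 1270, 1210]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed: int, threshold: float = 3.0) -> bool:
    """Flag any observation more than `threshold` standard deviations from baseline."""
    return abs(observed - mean) / stdev > threshold

assert not is_anomalous(1240)  # ordinary traffic passes unremarked
assert is_anomalous(5000)      # a sudden burst is flagged for automated response
```

Production systems model far richer features (users, protocols, timing, packet contents) and use learned models rather than a single z-score, but the principle, respond to deviation from baseline at machine speed, is the same.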

According to Forrester's "Using AI for Evil" report, "Mainstream AI-powered hacking is just a matter of time."