Connecticut CPA Magazine Excerpt: AI Can Be a Weapon for Hackers. What Businesses Should Do.
October 24, 2024
By Matt Miller, Principal – KPMG LLP. Reprinted with permission of The Georgia Society of CPAs.
Artificial Intelligence (AI) has transformed how we live and work, offering immense potential for innovation and progress. However, as with any technology, AI also has its drawbacks.
Emerging technologies like deepfakes – AI-generated synthetic media that can convincingly manipulate or fabricate audio, video, and images – have rapidly gained popularity among cybercriminals as a potent tool for cyberattacks. By leveraging deepfakes, they can easily manipulate information, deceive individuals, and exploit vulnerabilities within organizations. The consequences of these attacks can be severe, ranging from financial losses to reputational damage.
As technology continues to advance rapidly, both the opportunities and risks in detecting and defending against deepfakes are expanding. Many businesses now have access to detection technology and can potentially use it to defend themselves against attacks. However, implementing these tools can be challenging due to external regulations and internal barriers such as workforce skill gaps and financial constraints, giving an advantage to malicious actors who may exploit the opportunity first.
A May 2024 KPMG survey found that 76 percent of security leaders are concerned about the increasing sophistication of new cyber threats and attacks. Hackers have found various ingenious ways to use deepfakes as part of their existing cyberattack strategies to make them more credible, such as Business Email Compromise (BEC) scams, insider threats, and market manipulation.
BEC scams involve impersonating high-ranking executives or business partners to deceive employees into transferring funds or sharing sensitive information, while phishing attacks trick individuals into revealing credentials or other confidential data. Deepfakes make both scams even more convincing, as hackers can manipulate audio or video to mimic the voice and appearance of the person being impersonated. This increases the likelihood of victims falling for the scam, leading to data breaches, financial fraud, and identity theft.
Deepfakes also amplify insider threats: they can be used to create fake videos or audio recordings of employees, which can then be used to blackmail or manipulate them. Hackers exploit these deepfakes to gain unauthorized access to sensitive information or to compromise the integrity of a business or financial entity. Insider threats pose a significant risk, as they can cause substantial financial and reputational damage.
Deepfakes can also be used to spread false information or manipulate stock prices, generating financial gains for hackers. By creating realistic videos or audio recordings of influential figures, hackers can spark panic or generate hype, causing significant fluctuations in the market. This can lead investors to make uninformed decisions and suffer financial losses.
As the threat continues to rise, acquiring the funding needed to detect advanced deepfakes – which requires maintaining sufficient computing power, forensic algorithms, and audit processes – has been a major challenge.
Additionally, while businesses look to implement effective countermeasures, the rate of digital transformation only continues to pick up. This speed can come at the expense of security. As organizations innovate and embrace digital acceleration, the attack surface expands, and the number of assets requiring advanced security increases, putting them at risk.
This is why it is crucial for CISOs to talk with senior decision-makers and ensure cybersecurity budgets account for the costs of implementing the new processes, tools, and strategies needed to protect their organizations from deepfake-related attacks. Improving organizational risk intelligence can strengthen the case for funding by quantifying the financial impact of the risks that manipulated content poses.
Once sufficient funding has been acquired, KPMG recommends taking several measures to address the cybersecurity threat posed by deepfakes. First, developing a strong cybersecurity culture and promoting good hygiene practices among employees is important. This includes educating employees about deepfakes, their potential risks, and how to identify them.
For example, by training employees to be cautious when interacting with media content, businesses can reduce the likelihood of falling victim to deepfake attacks. Implementing robust authentication measures to ensure that only authorized individuals have access to sensitive information or systems is critical. This can involve using multi-factor authentication and biometrics to strengthen security.
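To make the authentication point concrete, here is a minimal sketch of one common second factor, a time-based one-time password (TOTP), built with the open-source pyotp library. Secret storage and delivery are simplified for illustration and should not be read as a production design.

```python
import pyotp

# Each user enrolls once: generate and store a per-user secret,
# typically delivered to an authenticator app via QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Current one-time code:", totp.now())

# At login, require the code in addition to the password.
user_supplied_code = input("Enter the 6-digit code from your authenticator app: ")
if totp.verify(user_supplied_code):
    print("Second factor accepted.")
else:
    print("Invalid or expired code; deny access.")
```

Because the code changes every 30 seconds, a deepfaked voice or video alone is not enough to complete a login; the attacker would also need the victim's enrolled device.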
Leveraging a zero-trust approach offers further benefits. It provides a comprehensive framework for mitigating deepfake cyberattacks by prioritizing strong authentication, access control, continuous monitoring, segmentation, and data protection.
Organizations can implement granular access controls, restricting access to specific resources based on user roles, privileges, and other contextual factors. Doing so helps prevent unauthorized users from gaining access to critical systems and data that could be used to propagate deepfakes.
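As a simplified illustration of granular access control, the sketch below denies any action a role does not explicitly grant. The roles and resource names are hypothetical placeholders, not a prescribed scheme.

```python
# Map each role to the resources it may touch; anything absent is denied
# by default, mirroring zero trust's "never trust, always verify" posture.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "analyst": {"reports:read"},
    "finance_manager": {"reports:read", "payments:approve"},
    "admin": {"reports:read", "payments:approve", "users:manage"},
}

def is_authorized(role: str, action: str) -> bool:
    """Deny unless the role explicitly grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("finance_manager", "payments:approve")
assert not is_authorized("analyst", "payments:approve")  # denied by default
```

Deny-by-default matters here: even if a deepfake tricks an employee into acting, the blast radius is limited to what that employee's role can reach.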
Furthermore, zero trust encourages continuous monitoring of user behavior and network activity. By actively watching for suspicious behavior or anomalies, organizations can detect and respond to potential attacks in real time, minimizing the damage caused.
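As a simplified illustration of behavioral monitoring, the following sketch flags a user whose daily activity deviates sharply from their own baseline. Real platforms use far richer signals, and the three-standard-deviation threshold here is an assumption.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag activity more than z_threshold standard deviations above the user's baseline."""
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu  # flat baseline: any increase is unusual
    return (today - mu) / sigma > z_threshold

baseline = [12, 9, 14, 11, 10, 13, 12]  # files downloaded per day, illustrative
print(is_anomalous(baseline, 11))  # False: in line with the user's baseline
print(is_anomalous(baseline, 95))  # True: worth an analyst's attention
```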
Zero trust also promotes network segmentation and isolation. By separating critical systems and data from less secure areas, organizations can limit the spread of deepfake content and prevent it from infiltrating sensitive areas. Lastly, zero trust protects data at all stages, both in transit and at rest. With strong encryption and data protection measures in place, organizations can safeguard their data from being manipulated or tampered with to create deepfakes.
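For readers who want to see the at-rest piece in code, here is a minimal sketch using the widely adopted Python cryptography package; its Fernet recipe provides authenticated encryption, so any tampering with stored data is detected at decryption time. Key handling is simplified for illustration.

```python
from cryptography.fernet import Fernet, InvalidToken

# In production, the key lives in a key-management service, never beside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"Q3 wire instructions: account 000-ILLUSTRATIVE")

try:
    plaintext = fernet.decrypt(ciphertext)
    print(plaintext.decode())
except InvalidToken:
    # Any modification of the stored ciphertext fails authentication here,
    # so doctored source material cannot silently re-enter the workflow.
    print("Ciphertext was modified; refusing to use it.")
```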
In addition, businesses should proactively employ advanced monitoring and detection technologies like AI-based tools and algorithms to identify anomalies in audio, video, or image files that may indicate the presence of deepfakes. In fact, according to the recent KPMG survey, 50 percent of cybersecurity leaders already use AI and advanced analytics to predict potential threats.
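The sketch below illustrates how such screening might be wired into a workflow: inbound media files are scored by a detection model, and high-scoring files are routed for human review. The StubDetector class and the 0.8 threshold are hypothetical placeholders; a real deployment would plug in a vendor or in-house model.

```python
from pathlib import Path

MEDIA_EXTENSIONS = {".mp4", ".wav", ".jpg", ".png"}
REVIEW_THRESHOLD = 0.8  # hypothetical cutoff; tune against false-positive tolerance

class StubDetector:
    """Stand-in for a vendor or in-house deepfake-detection model."""
    def predict(self, data: bytes) -> float:
        # A real model returns a likelihood that the media is synthetic
        # (0.0 = likely authentic, 1.0 = likely fake). The stub flags nothing.
        return 0.0

def screen_media(media_dir: str, detector=StubDetector()) -> list[Path]:
    """Score each media file and return the ones needing human review."""
    flagged = []
    for path in sorted(Path(media_dir).glob("*")):
        if path.suffix.lower() not in MEDIA_EXTENSIONS:
            continue
        if detector.predict(path.read_bytes()) >= REVIEW_THRESHOLD:
            flagged.append(path)
    return flagged
```

The design point is the routing, not the model: automated scoring narrows the queue, and a human makes the final call on anything above the threshold.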
Other proactive measures include collaborating and sharing information with regulatory agencies, whose expertise and resources are critical for developing effective policies and can ultimately help safeguard critical systems against emerging threats.
Leaders should also prioritize developing an incident response plan that specifically outlines the steps to be taken if a deepfake attack occurs, including communication protocols, legal considerations, and technical countermeasures.
Lastly, regular organization-wide system updates and patching are crucial to maintaining a strong defense against deepfakes. Keeping all software, applications, and systems updated with the latest security patches helps protect against known vulnerabilities that cybercriminals could exploit.
Overall, the rise of new technology, such as deepfakes, presents a significant threat to businesses and financial entities. The scale at which deepfake attacks can cause financial harm, reputational damage, and data breaches should not be underestimated.
As AI technology continues to advance, so will the capabilities of deepfakes. Hackers will undoubtedly find new and innovative ways to exploit this technology for malicious purposes.
Businesses and financial entities need to remain vigilant, collaborating with cybersecurity experts, researchers, and law enforcement agencies to stay updated on the latest deepfake techniques and countermeasures.
In this ever-evolving landscape of AI and cybersecurity, it is essential to remain proactive and adaptive. By staying informed, implementing best practices, and leveraging the power of AI for defense, businesses and financial entities can mitigate the risks posed by deepfake attacks and safeguard their operations, reputation, and stakeholders’ trust.
Matt Miller is a principal in the New York office of KPMG LLP’s Advisory Services practice and is the U.S. cyber security services banking industry lead. With 20+ years of experience, Matt’s focus areas include insider threat and internal fraud, third-party risk, quantitative and qualitative risk assessment, and incident management.
In addition to managing programs and advising clients, Matt has published and presented on many subjects, including leveraging capability maturity models to improve risk management, addressing vulnerability in technologies and critical business applications, and establishing governance and metrics to enable effective risk management programs.