Panel 409 - Artificial Intelligence: Building Resilience Against Cyber Threats

Conference Day:
Day 3 - November 15th, 2019
Takeaways and recommendations: 

Organized by: Simon Fraser University

Speakers: Zalina Gappoeva, Principal Security Architect, Cyber Security Solution Architecture; Dominic Vogel, Founder & Chief Strategist, Cyber.sc; Zahra Zohrevand, Senior Member of Technical Staff, Oracle Labs

Moderator: Uwe Glässer, Professor, Computing Science, Simon Fraser University

Takeaways:

  1. The number of cyber incidents is increasing globally, and the finance and insurance sectors are the biggest targets. 

  2. The financial system is particularly vulnerable due to its ever-increasing global interconnectivity. 

  3. Artificial Intelligence (AI) and Machine Learning (ML) are not inherently positive, nor are they better than traditional security approaches or smarter than humans, but they are tools that can supplement and augment our current way of doing cybersecurity.

  4. Human analysts will continue to be essential to cybersecurity, but AI must also be deployed defensively, since attackers will use it: only AI can compete with AI. 

  5. To generate marketing hype, some security organizations are adding AI and ML to their products in ways that are not useful and may actually be inferior to previous, non-AI iterations.

Actions:

  1. Any organization whose cybersecurity vendor recently added AI to its products should verify that the AI is actually adding value.

  2. Startups and small and medium businesses must consider and build in cybersecurity from the moment they are founded, as well as at the start of any new product development. 

  3. All levels of government need to develop security initiatives proactively, rather than waiting until after they are attacked.

  4. Larger governmental agencies may need to step in to support smaller municipalities that lack the funding to develop these systems. 

  5. AI security systems must be explainable to everyone who relies on them, so that users can trust the outcomes and understand how to respond to them.