@mrosenquist, one way would be to implement a machine learning algorithm for anomaly detection. It would learn normal usage on your system/network via metrics of various resources (e.g., percentage of CPU used, number of processes/threads, number of file sockets open, number of database connections open). Then, if malware does make it past your firewall, your AI would figure out that there's "unusual behavior" (e.g., burst activity like sending out an unusually large number of emails--typical when the malware turns your system into a bot). For more on the subject, please see https://en.wikipedia.org/wiki/Anomaly_detection.
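Here's a rough sketch of the idea, assuming scikit-learn is available; the metric names and numbers are purely illustrative, not from any real product:

```python
# Minimal anomaly-detection sketch: learn "normal" resource usage, flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

# Pretend baseline: each row is a snapshot of
# (CPU %, process/thread count, open file sockets, open DB connections)
rng = np.random.default_rng(42)
normal_usage = np.column_stack([
    rng.normal(30, 5, 500),    # CPU percentage
    rng.normal(120, 10, 500),  # number of processes/threads
    rng.normal(40, 8, 500),    # open file sockets
    rng.normal(15, 3, 500),    # open database connections
])

# Learn what "normal" looks like from the baseline snapshots
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_usage)

# A burst of activity (e.g., the machine spamming email as a bot)
# shows up as an outlier relative to the learned baseline
suspicious_snapshot = np.array([[85, 400, 900, 15]])
print(model.predict(suspicious_snapshot))  # -1 means "anomaly", 1 means "normal"
```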
@terenceplizga I definitely follow you now. Do you write about this topic area as well?
Thanks, Martin. Yes ... I just started blogging on machine learning recently. I've been doing research in the area for a while now, but I'm new to the whole blogging thing.
I know several companies working to apply AI to anomaly detection in this way, but such detection techniques have been attempted for over a decade with little success, and I have not seen any indication that AI will push the success rate much further. The core problem is getting past the crossover rate of false positives and false negatives. Then there is the longer-term risk of attackers figuring out how the AI makes its decisions and then maneuvering to poison the learning, or simply operating outside the learned bounds.
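To illustrate the crossover issue: as you sweep the detection threshold, false positives and false negatives trade off against each other, and the point where the two rates meet (the equal-error rate) has historically stayed too high to be operationally useful. A toy sketch with made-up synthetic scores:

```python
# Sweep a detection threshold and watch FPR and FNR trade off.
# The scores below are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
benign_scores = rng.normal(0.3, 0.15, 10_000)   # anomaly scores for benign activity
malicious_scores = rng.normal(0.6, 0.15, 100)   # anomaly scores for real attacks

for threshold in np.linspace(0.2, 0.8, 13):
    false_positive_rate = np.mean(benign_scores >= threshold)   # benign flagged as attack
    false_negative_rate = np.mean(malicious_scores < threshold) # attacks missed
    print(f"threshold={threshold:.2f}  FPR={false_positive_rate:.3f}  FNR={false_negative_rate:.3f}")
```

Lowering the threshold catches more attacks but buries analysts in false alarms; raising it quiets the alarms but lets attacks through, and an attacker who learns the boundary can simply stay on the quiet side of it.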