Fraud prevention is another big growth industry in the analytics sector, with cumulative data being used to build much more detailed pictures of clients and customers, making it easier to detect when they do something out of the ordinary. It doesn't have to be as obvious as someone using their bank card to buy something strange, though understandably many organizations prefer to stay tight-lipped about the specific metrics they're looking for.
Take the example of Featurespace, one of the companies looking to take things a step further by not necessarily even specifying those monitoring points. Its new adaptive-learning fraud-monitoring system is currently being implemented by UK mobile payment company Zapp, and will be used alongside security systems from partner banks and other financial institutions to detect when someone other than the account holder is trying to access their funds.
The algorithms Featurespace created run in real time and spot minor changes on individual accounts, which the company says will allow it to detect existing fraud techniques, and possibly ones that have yet to be devised.
That latter part is possible because the algorithms learn as they go. As customers use their accounts, the system builds up ever deeper profiles and becomes more attuned to their nuances and subtleties, reducing the number of false positives and making it more likely to catch anyone who attempts to use an account for fraudulent activity.
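Featurespace hasn't published how its algorithms actually work, but the general idea of a self-updating, per-account profile can be pictured with a toy anomaly detector. Everything below — the exponentially weighted statistics, the thresholds, the warm-up period — is an invented sketch for illustration, not the company's method:

```python
class AccountProfile:
    """Toy per-account behavioral profile that adapts as transactions arrive."""

    def __init__(self, alpha=0.1, z_threshold=3.0, warmup=5):
        self.alpha = alpha                # how quickly the profile adapts
        self.z_threshold = z_threshold    # deviation (in std devs) that counts as anomalous
        self.warmup = warmup              # observations needed before flagging anything
        self.mean = None                  # exponentially weighted mean of amounts
        self.var = 1.0                    # exponentially weighted variance
        self.n = 0

    def score(self, amount):
        """Return True if `amount` looks anomalous for this account."""
        self.n += 1
        if self.mean is None:
            self.mean = float(amount)     # first observation seeds the profile
            return False
        std = max(self.var ** 0.5, 1e-9)
        z = abs(amount - self.mean) / std
        anomalous = self.n > self.warmup and z > self.z_threshold
        if not anomalous:
            # Only learn from transactions that look legitimate, so a fraudster
            # can't gradually "teach" the profile their own behavior.
            diff = amount - self.mean
            self.mean += self.alpha * diff
            self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return anomalous

profile = AccountProfile()
routine = [20, 25, 18, 22, 19, 24, 21]          # routine spending
print([profile.score(a) for a in routine])       # → all False
print(profile.score(5000))                       # wildly atypical amount → True
```

The detail worth noticing is that the profile sharpens with use: the more routine transactions it sees, the tighter its estimate of "normal" becomes, which is exactly the mechanism by which this kind of system can cut false positives over time.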
This sort of detection software is incredibly important right now, as we continue to debate the merits of traditional security measures like passwords and look to newer ones like Windows Hello's facial recognition. As hackers and other nefarious individuals become better at guessing our secret questions, deciphering our passwords, and social-engineering their way past support staff, perhaps these sorts of analytical systems could serve as a third factor in our security.
Should those looking to steal our money, account details, or entire identities actually figure out our login information, the cumulative red flags about how they operate within our stolen account could trigger a shutdown. Sure, a few false positives might crop up the next time we logged into Netflix after a few glasses of wine, but perhaps that would be worth it for better security.
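One way to picture that "cumulative red flags" idea is a simple weighted risk score: no single signal locks the account, but several weak ones together do. The signal names and weights here are entirely made up for illustration:

```python
# Hypothetical behavioral signals and weights -- invented for illustration.
RISK_WEIGHTS = {
    "new_device": 2,
    "unusual_hour": 1,
    "unfamiliar_location": 2,
    "atypical_navigation": 3,   # e.g. heading straight for payout settings
}
LOCK_THRESHOLD = 5

def assess_session(signals):
    """Sum the weights of observed signals; lock only if they pile up."""
    score = sum(RISK_WEIGHTS.get(s, 0) for s in signals)
    return "locked" if score >= LOCK_THRESHOLD else "allowed"

# A late-night Netflix binge trips one weak signal: still allowed.
print(assess_session(["unusual_hour"]))                        # → allowed
# Several flags together suggest someone else is at the controls.
print(assess_session(["new_device", "unfamiliar_location",
                      "atypical_navigation"]))                 # → locked
```

The threshold is the knob that trades false positives against security: raise it and the tipsy Netflix login sails through, lower it and a thief on a new device gets stopped sooner.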
It could even go a step further and factor in some biometric data from a wearable device as well. Not only do I need to act like me to get in, but my heart needs to sound like mine too.
Or we could use it as a way to ditch security information altogether. Perhaps anyone could log in to my Skype account, but if they didn't operate it the way I tend to, it would lock them out.
That would certainly cut down on the number of passwords I need to remember.