Charlie Roberts, head of business development, UK, Ireland & EU at IDnow, discusses how combining AI and humans can help to tackle cyber fraud. With AI unable to combat cyber fraud alone, a hybrid approach is the way forward.
Cyber fraud is a global threat that experts identify as the world’s fastest-growing and most dangerous economic crime. It isn’t hard to see why: current predictions suggest that by 2021, the damage caused by internet fraud will reach $6 trillion. Artificial intelligence (AI) is recognised as a key technological driver in the identity verification market, and one that many believe will help tackle cyber fraud. However, while AI is becoming increasingly intelligent, so too are the fraudsters trying to defeat the protections it aims to provide.
Now is the perfect opportunity for criminals to exploit the security protocols of many online operations, as Covid-19 has forced an accelerated move to digital. As a result, new account fraud is becoming a major issue, with recent figures from Action Fraud reporting £16.6 million in losses from online shopping fraud since 23 March.
Where does AI fit?
From an identity management perspective, AI plays a vital role in the verification process; its ability to recognise and classify documents by reading complex security features, such as holograms and microtext, often results in strong and reliable authentication. Furthermore, major advances in biometrics are providing heightened levels of fraud prevention standards. For example, Liveness Detection, which is an AI computer system’s ability to determine that it is interfacing with a physically present human being and not an inanimate spoof artifact, is helping to stop fraudsters using stolen photos, deep fake videos or masks in order to access or create online accounts. Liveness Detection can recognise and confirm, in less than two seconds, that it is a live person’s face – even advanced masks, imposters, lookalikes and doppelgangers can be spotted with a high degree of accuracy.
To measure these biometric systems, the False Acceptance Rate (FAR) is a critical key performance indicator. It measures the rate at which a biometric system wrongly verifies unauthorised users, and so evaluates the system’s precision. Current regulations require biometric systems for governmental use to have an FAR below 0.1%. AI-powered solutions are able to outperform even these incredibly high requirements.
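As an illustrative sketch (not IDnow’s implementation), the FAR metric described above can be computed from the outcomes of impostor attempts against a biometric system; the function name and sample figures here are assumptions for demonstration only:

```python
def false_acceptance_rate(impostor_attempts):
    """FAR = falsely accepted impostor attempts / total impostor attempts.

    `impostor_attempts` is a list of booleans: True means the biometric
    system accepted an unauthorised user (a false acceptance).
    """
    if not impostor_attempts:
        raise ValueError("need at least one impostor attempt to measure FAR")
    return sum(impostor_attempts) / len(impostor_attempts)

# Hypothetical evaluation: 1 false acceptance in 2,000 impostor attempts.
attempts = [True] + [False] * 1999
far = false_acceptance_rate(attempts)
print(f"FAR = {far:.4%}")                        # FAR = 0.0500%
print("meets the <0.1% requirement:", far < 0.001)  # True
```

In this made-up example the measured FAR of 0.05% would clear the sub-0.1% bar mentioned above for governmental use.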
While AI offers unrivalled levels of security against identity fraud, fraudsters have become increasingly adept at forging even realistic holograms to bypass AI and machine learning technology. This is where a hybrid approach becomes vital: combining the very latest advancements in AI technology with human experts creates a defence that is almost impenetrable to even the most sophisticated fraudster.
The importance of a hybrid approach to verification security
We know that AI and machine learning can quickly and accurately recognise identity documents, extract the relevant data and use biometrics to compare facial features. A hybrid approach takes this one step further: when technological checks are combined with the knowledge of a human identification specialist, businesses and their customers not only get double the safety net and half the risk, but the conversion rate for onboarding customers is also heightened.
In reality, that means proprietary technology will use artificial intelligence to scan and recognise security features on identity documents with maximum precision, including, for example, asking the user to tilt their ID document in various directions in front of the camera so that the security features, such as holograms, become visible. Meanwhile, a specially trained Ident Specialist will check the security features during a video conversation.
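The two-stage flow described above can be sketched as a simple decision pipeline. The function names, fields and confidence threshold here are illustrative assumptions, not IDnow’s actual technology or API:

```python
from dataclasses import dataclass

@dataclass
class DocumentCheck:
    hologram_visible: bool   # AI confirmed the hologram while the ID was tilted
    data_extracted: bool     # document data was read successfully
    face_match_score: float  # biometric similarity between selfie and ID, 0.0-1.0

def verify_identity(check: DocumentCheck, specialist_approves) -> str:
    """Hybrid verification: automated AI checks first, then a human
    Ident Specialist reviews the result during a video conversation.

    `specialist_approves` is a callable standing in for the human step.
    """
    # Stage 1: automated checks on document security features and biometrics.
    FACE_MATCH_THRESHOLD = 0.90  # illustrative threshold, not a real spec
    if not (check.hologram_visible and check.data_extracted):
        return "rejected: document security features failed"
    if check.face_match_score < FACE_MATCH_THRESHOLD:
        return "rejected: biometric mismatch"
    # Stage 2: the specialist confirms the security features on the video call.
    if not specialist_approves(check):
        return "rejected: specialist flagged a discrepancy"
    return "verified"

result = verify_identity(
    DocumentCheck(hologram_visible=True, data_extracted=True, face_match_score=0.97),
    specialist_approves=lambda c: True,
)
print(result)  # verified
```

The point of the structure is that neither stage alone is sufficient: the automated checks gate out obvious forgeries cheaply, while the human stage catches what the machine cannot.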
Perhaps the most critical distinction to make when taking a combined machine and human approach to identity verification is the ability for humans to use their intuition to spot discrepancies in a person’s response. For example, a simple glance away from the camera can suggest to an identification specialist that the customer is being coerced, something technology would be unable to identify alone. A human can also ask social engineering questions to determine whether the customer is genuine. Using automation alone would mean some potential customers would fail the onboarding process at this stage, but by linking them up with a specialist, who is able to take identity verification one step further, conversion rates become much higher. Put simply, by combining man and machine, the highest levels of security can be achieved.
There are many options when it comes to identity verification. Automated approaches alone continue to be sufficient for many, but in order to safeguard against increasingly sophisticated threats while increasing customer conversions, it is time to move beyond the minimum. We must explore what we can learn from, and how we can replicate, the world-leading BaFin regulatory landscape for which Germany is renowned, to ensure the UK’s finance sector remains robust and able to meet the growing global threat of cyber crime and fraud.