Security teams are increasingly turning to publicly available information to gather the accurate, reliable intelligence they need to spot potential risks.
Earlier in the year, researchers from Binghamton University revealed that many of the Zoombombing incidents that were so prevalent during the early months of the Covid pandemic were actually the work of organizational insiders. The study found that the access details of important meetings were often not hacked by outsiders but rather shared by disgruntled employees who had legitimate access to the meetings.
Indeed, an investigation by The Economist recently found that cybercriminals were offering up to eight-figure sums to tempt employees at Wells Fargo, Bank of America, and JPMorgan Chase to authorize illegal and fraudulent wire transfers.
Insider threats are undoubtedly among the more challenging risks organizations face, and the pandemic has only made them more common: employees under mounting stress and burnout become more vulnerable both to inappropriate activity and to compromise by external groups. To run a robust and reliable insider threat program, security teams need accurate and reliable information that can help them spot potential risks.
To provide such signals, many organizations are turning to publicly available information (PAI). PAI is an extraordinarily rich source of data, but that richness is both its great benefit and its great disadvantage: finding what you truly need among the vast quantity of data available can be extremely difficult.
Security teams face a range of challenges when using PAI, including errant signals, false positives, and the considerable amount of time and money needed to accurately corroborate the information.
Despite this, insider threat initiatives have proliferated across organizations, not least public agencies. This is largely a result of Executive Order 13587, which the Obama administration issued in 2011 to call for wider implementation of insider threat programs.
The growth has also been facilitated by the significant increase both in digital information about people and in the range of tools designed to help security teams access and analyze that information. Of course, there has also been growth in tools designed to help insiders anonymize and hide their activities online.
Even so, adoption has not been as quick as it might have been. A number of factors underpin this, not least the accuracy of information found on social media and the difficulty of attributing it to the right source. For instance, it is not unknown for criminals to hack into otherwise legitimate social media accounts to leak sensitive information or spread misinformation.
Another key challenge revolves around the privacy of employees and their freedom of speech.
If open-source information is to be used at scale, such programs must not only operate in accordance with transparent privacy guidelines but also give the individuals concerned an opportunity to raise and address any concerns they may have.
There are also inevitable scaling challenges as you move from using publicly available information on the handful of employees a program may begin with to potentially thousands.
Technology can often help with these challenges, particularly in scaling up the PAI program, as technologies such as artificial intelligence and identity resolution are ideally suited to monitoring information at scale. Such technologies can greatly assist with understanding both the authenticity of content and the risk posed by an individual.
These tools can also tap into information from a wide range of sources, which, combined with the real-time nature of the gathering, gives a depth and breadth of analysis that is largely impossible to achieve by more manual means.
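To give a rough sense of what identity resolution involves, the sketch below (in Python, using entirely hypothetical handles and sources) clusters public profiles by fuzzy-matching normalized usernames. This is a deliberately minimal illustration of the underlying idea; production systems draw on far richer signals and, as noted above, still require human verification before attributing anything to an individual.

```python
# Minimal identity-resolution sketch (illustrative only): group public
# profiles that plausibly belong to the same person by comparing
# normalized usernames. Handles and sources here are hypothetical.
from difflib import SequenceMatcher


def normalize(handle: str) -> str:
    """Lowercase and strip separators so 'J_Doe-99' matches 'jdoe99'."""
    return "".join(ch for ch in handle.lower() if ch.isalnum())


def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two normalized handles."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()


def resolve(profiles, threshold=0.85):
    """Greedily cluster (source, handle) pairs above the similarity threshold."""
    clusters = []
    for source, handle in profiles:
        for cluster in clusters:
            if any(similarity(handle, h) >= threshold for _, h in cluster):
                cluster.append((source, handle))
                break
        else:
            clusters.append([(source, handle)])
    return clusters


profiles = [
    ("forum", "j_doe99"),
    ("social", "JDoe-99"),
    ("paste-site", "anon4821"),
]
clusters = resolve(profiles)
# The first two profiles cluster together; the third stands alone.
```

A real program would weigh many more signals (posting patterns, shared contact details, network graphs) and would treat a cluster as a lead for analyst review, not as a confirmed identity.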
The use of AI is all but essential to successful identity resolution, and will therefore be crucial to any insider threat program.
AI is especially potent when combined with security personnel who have not only deep subject matter and operational expertise but also insight into the deep and dark web, as these are often the channels used to sell information. Information gained on these channels will almost always require manual verification before it can be accurately attributed to an individual or individuals.
Using open-source intelligence is by no means easy or straightforward, but organizations that master it can transform their insider threat detection. With robust, transparent review guidelines, diligent use of the latest technologies, and human oversight, organizations can rapidly assess, verify, and identify potential threats posed by employees, suppliers, and contractors.