The PARIS model (so named because our first sketch of it looked a bit like a certain major Parisian landmark) expresses what we think good threat hunting is all about.
At its core, threat hunting is the ability to execute a set of hunt use cases (or hypotheses) to find evidence of attacks. Executing them produces a lot of data and indicators, with varying degrees of confidence about whether they actually represent something bad. Some will have very high confidence, perhaps enough to reach the top of the triangle and become an actual automated alert of known bad.
But many will have low confidence, and chances are you will have more of these than high confidence alerts. Sorting through these indicators requires a human to investigate further and determine whether they constitute a real threat. You may have a certain amount of automation in collecting these indicators and assigning an initial confidence level (though one far below 99%) to help hunters prioritise and work more efficiently, but you aren't creating something fully automated here; you need people actively hunting too.
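This split between automated alerts and a prioritised queue for human hunters can be sketched in a few lines. The names and the alerting threshold below are illustrative assumptions, not part of the PARIS model itself:

```python
from dataclasses import dataclass

ALERT_THRESHOLD = 0.95  # assumed cut-off above which an automated alert fires

@dataclass
class Indicator:
    description: str
    confidence: float  # 0.0 (probably benign) to 1.0 (known bad)

def triage(indicators):
    """Split indicators into automated alerts and a prioritised hunt queue."""
    alerts = [i for i in indicators if i.confidence >= ALERT_THRESHOLD]
    # Everything else goes to a human hunter, highest confidence first.
    queue = sorted(
        (i for i in indicators if i.confidence < ALERT_THRESHOLD),
        key=lambda i: i.confidence,
        reverse=True,
    )
    return alerts, queue

alerts, queue = triage([
    Indicator("known C2 domain contacted", 0.99),
    Indicator("rare parent-child process pair", 0.40),
    Indicator("unsigned binary in temp directory", 0.60),
])
```

The point is simply that only the top of the triangle is fully automated; everything below the threshold is ordered to make the hunters' time count, not to replace them.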
It is important, though, that threat hunters do not simply execute a finite, unchanging set of use cases; otherwise attackers need only switch techniques to slip under the radar. Rather, hunters should be generating use cases all the time to add real human value. The software, of course, needs to facilitate this and allow a wide range of use cases to be executed. Should certain use cases yield particularly distinctive results, they may be fed back into the automation, either to flag such cases to threat hunters with a higher priority in future or simply to speed up executing them again.
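One minimal way to picture that feedback loop is a growing use case library, where a hunter-authored hypothesis that proves distinctive is promoted into the automated sweep. All names and the sample hypothesis here are hypothetical illustrations, not a prescribed design:

```python
# Registry of hunt use cases: name -> run function, automation flag, priority.
use_cases = {}

def register(name, run, automated=False, priority=0):
    """Add a new hunter-authored hypothesis to the library."""
    use_cases[name] = {"run": run, "automated": automated, "priority": priority}

def promote(name, priority=10):
    """Feed a proven use case back into the automation at higher priority."""
    use_cases[name]["automated"] = True
    use_cases[name]["priority"] = priority

def automated_sweep(events):
    """Execute all promoted use cases in priority order, collecting findings."""
    findings = []
    for name, uc in sorted(use_cases.items(),
                           key=lambda kv: kv[1]["priority"], reverse=True):
        if uc["automated"]:
            findings.extend(uc["run"](events))
    return findings

# Example hypothesis: an office application spawning a shell is suspicious.
register("office-spawns-shell",
         lambda events: [e for e in events
                         if e.get("parent") == "winword.exe"
                         and e.get("child") in {"cmd.exe", "powershell.exe"}])
promote("office-spawns-shell")
```

New hypotheses start as manual hunts; only once they earn their keep does `promote` move them up the triangle into the automated layer.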
Critically, this is all built on a broad base of security expertise that drives the use cases being generated. This includes feedback from red teams on how attacks succeeded, reviewing public research on new attack techniques, feeding back results from incident response work, independent research into attacks or platform behaviour, and so on.
And so this broad security experience is driven up through the model: into use case generation, execution, analytics, appropriate automation and, where possible, automated alerts.
The top part of the diagram, in grey, is primarily focussed on technology, using automation and analytics to make the work more efficient. The bottom part is focussed on the people who drive the whole process.
We then view threat hunting capability as broadly starting at the top and progressing down. At the top you only have high confidence alerts; this is the old school, reactive approach using anti-virus, IDS and the like. Below this you have the ability to look at indicators with varying degrees of confidence, which require further investigation to be sure of. Next, you are actually executing a set of pre-defined use cases; these may be built into an EDR tool, for example. Finally, you have the ability to generate new use cases from R&D and to execute them. This final level is the capability required for the best threat hunting. You will notice that this bottom layer is also the thickest in the diagram.
Interestingly, you can also imagine progressing up the diagram for the ongoing development of a threat hunting platform. As research creates use cases that generate findings, you gradually apply additional analytics and automation, as appropriate, to make your future work more efficient.