We already have suppression of literally duplicate wordlist rules (although there may be bugs with it, see #5011); however, we do not yet suppress rules that look different but are effectively duplicate (redundant).
https://github.com/mhasbini/duprule is one external project that does this, so we could consider their approach (instead of, or alongside, other ideas):
How does it work?
TL;DR: Each rule's changes are mapped, and a unique id is generated for each rule, along with its function count.
The mechanism is like this:
A blank map is created with N (from 1 to 36) slots.
Each rule function is applied to the map. For example, the rule 'u' changes every character's case from '?' (unknown) to 'u' (upper case); 'sab' adds {'a' -> 'b'} to the map. The same logic applies to the other functions.
An id is generated from the map.
The ids are compared to detect duplicate rules.
Among rules with the same id, the one with the fewest functions is selected.
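The steps above can be sketched roughly as follows. This is a hypothetical illustration, not duprule's actual code: it supports only three functions ('u', 'l', 'sXY'), and it ignores interactions between functions (e.g. a case change applied after a substitution) that a real canonicalizer would have to model per-slot.

```python
N = 36  # maximum word length considered (slots in the map)

def analyze(rule):
    """Return (canonical id, function count) for a rule string.

    Hypothetical sketch: each of the N slots starts as '?' (case unknown),
    and each rule function updates the map; the id is derived from the
    map's final state, so rules with the same net effect get the same id.
    """
    case = ['?'] * N          # per-slot case: '?' unknown, 'u', or 'l'
    subs = {}                 # accumulated character substitutions
    nfuncs = 0
    i = 0
    while i < len(rule):
        f = rule[i]
        if f in ('u', 'l'):               # change case of every slot
            case = [f] * N
            i += 1
        elif f == 's' and i + 2 < len(rule):  # sXY: substitute X with Y
            subs[rule[i + 1]] = rule[i + 2]
            i += 3
        else:
            raise ValueError("unsupported function: " + repr(f))
        nfuncs += 1
    return (tuple(case), tuple(sorted(subs.items()))), nfuncs

def dedup(rules):
    """For each distinct canonical id, keep the rule with fewest functions."""
    best = {}
    for r in rules:
        cid, n = analyze(r)
        if cid not in best or n < best[cid][1]:
            best[cid] = (r, n)
    return [r for r, _ in best.values()]
```

For example, 'lu' and 'u' produce the same map (every slot upper-cased), so `dedup(['lu', 'u'])` keeps only 'u', the rule with fewer functions, while 'sab' gets a distinct id and survives.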