I can only speak to what I saw when testing a real system that was open to the internet.
I tested a dynamic firewall that would spot intrusion attempts and block them using fail2ban and similar tools (as documented in the wiki page).
It spotted intrusions and dynamically blocked them.
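To give a feel for what that dynamic blocking does, here is a rough Python sketch of the idea. It is an illustration only, not my actual setup (which used fail2ban); the log path, regex and threshold are placeholders.

```python
import re
import subprocess
from collections import defaultdict

# Placeholder values -- fail2ban takes the equivalent settings from its jail config.
LOG_FILE = "/var/log/auth.log"
MAX_FAILURES = 5
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def scan_and_block():
    """Count failed logins per source IP and drop traffic from repeat offenders."""
    failures = defaultdict(int)
    with open(LOG_FILE) as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(1)] += 1
    for ip, count in failures.items():
        if count >= MAX_FAILURES:
            # Roughly what a fail2ban ban action does: drop all traffic from that IP.
            subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"], check=False)

if __name__ == "__main__":
    scan_and_block()
```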
Did this work? Only partly.
Why?
What appeared to be happening, from log analysis, was that the attackers would nibble away at their probing: I would see a block of IPs attacking in concert, apparently working co-operatively.
Every time I restarted, they would get a little further with that probing, because my dynamic intrusion prevention would reset.
The lesson I learnt from real-world testing: dynamic intrusion prevention is vital, but it can only be part of the larger picture.
You MUST persist your blacklists.
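By "persist" I mean something as simple as this rough Python sketch: dump the current ban list to disk and re-apply it when the firewall comes back up, so a restart doesn't hand the attackers a clean slate. The file path is hypothetical, and fail2ban can be configured to do something broadly equivalent.

```python
import json
import subprocess
from pathlib import Path

# Hypothetical location for the persisted blacklist -- the point is only that it
# lives on disk and survives a restart.
BLACKLIST_FILE = Path("/etc/firewall/blacklist.json")

def save_blacklist(banned_ips):
    """Write the current set of banned IPs to disk before a shutdown or restart."""
    BLACKLIST_FILE.write_text(json.dumps(sorted(banned_ips)))

def restore_blacklist():
    """Re-apply every persisted ban as soon as the firewall comes back up."""
    if not BLACKLIST_FILE.exists():
        return
    for ip in json.loads(BLACKLIST_FILE.read_text()):
        subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"], check=False)
```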
If all sites perform dynamic detection and dynamic blocking, blacklisting plays a critical role: the framework automatically feeds the IPs it catches scanning into a central list, and when several sites report the same IP, that IP gets blacklisted.
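A rough sketch of how that central aggregation works, using a hypothetical CentralBlacklist class and a made-up threshold of three reporting sites (a real framework would of course do this over the network, with authentication and expiry):

```python
REPORT_THRESHOLD = 3  # hypothetical: how many distinct sites must report an IP

class CentralBlacklist:
    """Collects scan reports from participating sites and promotes repeat offenders."""

    def __init__(self):
        self.reporters = {}   # ip -> set of site ids that have reported it
        self.blacklist = set()

    def report(self, site_id, ip):
        sites = self.reporters.setdefault(ip, set())
        sites.add(site_id)
        # Once enough independent sites have seen the same IP scanning, every
        # subscriber is protected from it, even sites it has never touched yet.
        if len(sites) >= REPORT_THRESHOLD:
            self.blacklist.add(ip)

# Example: three different sites report the same scanner.
central = CentralBlacklist()
for site in ("site-a", "site-b", "site-c"):
    central.report(site, "203.0.113.7")
assert "203.0.113.7" in central.blacklist
```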
This means that when you restart your firewall (or the fail2ban bans expire), you are not vulnerable to that block of IPs working together to get into your system, because the blacklist still protects you. If you rely purely on dynamic blocking, then how long it takes them to get in ultimately becomes a function of how locked down your system is, DIVIDED BY how often you restart your system (or the bans expire), and again DIVIDED BY the size of the IP pool they are using in the coordinated breach attempt.
I have clear evidence from real-world testing to back this up, and would refer you to email spam detection and blacklisting, where similar problems have been examined.
I would be interested in a comparison of real-world breach times for test systems using pure dynamic blocking versus dynamic blocking backed by blacklists.
My testing, as I mentioned, shows that the combination of blacklisting and dynamic blocking wins hands down.
I state this empirically.