-A PREROUTING -p tcp -m tcp --dport 9999 -j NFQUEUE --queue-balance 0:1
-A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j NFQUEUE --queue-num 0
-A FORWARD -i eth1 -o eth0 -j NFQUEUE --queue-num 1
cat /proc/net/netfilter/nfnetlink_queue
iptables -L OUTPUT
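The nfnetlink_queue file above reports per-queue statistics, and the drop counters in it are the ones to watch when diagnosing buffer problems. As a rough sketch (the exact column layout depends on your kernel version, so verify it on your system):

# Poll the NFQUEUE statistics once a second. On recent kernels the columns
# are: queue number, peer portid, queued packets, copy mode, copy range,
# kernel drops, userspace drops, id sequence (layout may vary by kernel).
watch -n 1 cat /proc/net/netfilter/nfnetlink_queue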
When using Snort_inline with NFQ support, it's likely that at some point you've seen messages like this on the console: "packet recv contents failure: No buffer space available". While these messages appear, Snort_inline slows down significantly. I've been trying to find out why.
There are a number of settings that influence NFQ performance. One of them is the NFQ queue maximum length, a value in packets. Snort_inline takes an argument to modify it: --queue-maxlen 5000 (note: there are two dashes before queue-maxlen).
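For example, a minimal invocation sketch (the configuration path is a placeholder; add whatever other flags your setup needs):

snort_inline --queue-maxlen 5000 -c /etc/snort_inline/snort_inline.conf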
That's not enough though. The following settings increase the buffer that NFQ seems to use for its queue. Since I've set it this high, I haven't gotten a single read error anymore:
sysctl -w net.core.rmem_default='8388608'
sysctl -w net.core.wmem_default='8388608'
The values are in bytes. The following settings increase the buffers for TCP traffic; the three values are the minimum, default, and maximum buffer size.
sysctl -w net.ipv4.tcp_wmem='1048576 4194304 16777216'
sysctl -w net.ipv4.tcp_rmem='1048576 4194304 16777216'
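To make the settings survive a reboot, they can also go into /etc/sysctl.conf (assuming a standard sysctl setup) and be reloaded with sysctl -p:

# /etc/sysctl.conf
net.core.rmem_default = 8388608
net.core.wmem_default = 8388608
net.ipv4.tcp_wmem = 1048576 4194304 16777216
net.ipv4.tcp_rmem = 1048576 4194304 16777216

sysctl -p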
Setting these values fixed all my NFQ-related slowdowns. The values probably work for ip_queue as well. If you use other values, please put them in a comment below.
Thanks to Dave Remien for helping me track this down!