Hello all, I've spent some time digging through the archives, but have yet to turn up a solution to the issue I'm having.
We're using Redis instances on 'cloud' resources and have successfully set up stunnel as a server on those nodes. An internal process connects to a local stunnel client, which in turn connects to the remote cloud instance. This works well enough at the moment; however, there is one issue I'm bumping into.
If the remote EC2 node goes dark for a bit (e.g. a reboot), the local stunnel client reconnects on its own, but the internal process holding a connection to the local stunnel client is never disconnected. It therefore doesn't know that Redis became unavailable and that it needs to re-subscribe to the Redis stream. I need the established session to be invalidated when the remote side reconnects.
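One direction I've considered but not yet tested (so treat the option spelling and semantics as my assumption, not verified against 4.53) is having stunnel enable TCP keepalive on the remote socket, so stunnel itself notices the dead peer and tears down the matching local connection:

```
; untested sketch: ask stunnel to set SO_KEEPALIVE on the remote (r:) socket
; so the OS probes the EC2 side; if the probes fail, stunnel should close the
; corresponding local connection, which the internal process would then see
socket = r:SO_KEEPALIVE=1
```

If that works as I hope, the internal process gets a normal disconnect and can re-subscribe, but I'd appreciate confirmation of the behavior.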
Can this be accomplished with one of the TIMEOUT values or the session statement? Below is a sample of the configuration:
###########
[some-ec2-node.com]
client = yes
verify = 2
CApath = /etc/ssl/certs
sslVersion = TLSv1
accept = 127.0.0.1:10008
connect = some-ec2-node.com:6379
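For completeness, this is the variation I've been thinking about trying: lowering TIMEOUTidle at the service level and forcing a reset on teardown. I'm assuming both options are honored per-service in 4.53, which I haven't confirmed:

```
[some-ec2-node.com]
client = yes
verify = 2
CApath = /etc/ssl/certs
sslVersion = TLSv1
accept = 127.0.0.1:10008
connect = some-ec2-node.com:6379
; assumption: a short idle timeout would drop a session that went quiet
; during the remote reboot instead of holding it for the 43200s default
TIMEOUTidle = 60
; assumption: reset = yes makes stunnel send a TCP RST on error, so the
; internal process would see a hard failure rather than a silent stall
reset = yes
```

I'd rather not drop TIMEOUTidle this low on a pub/sub connection that is legitimately quiet, so if there's a cleaner way to invalidate the session I'm all ears.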
###########
$ stunnel4 -version
stunnel 4.53 on x86_64-pc-linux-gnu platform
Compiled with OpenSSL 1.0.1e 11 Feb 2013
Running with OpenSSL 1.0.1f 6 Jan 2014
Update OpenSSL shared libraries or rebuild stunnel
Threading:PTHREAD SSL:+ENGINE+OCSP Auth:LIBWRAP Sockets:POLL+IPv6
Global options:
debug = daemon.notice
pid = /var/run/stunnel4.pid
RNDbytes = 64
RNDfile = /dev/urandom
RNDoverwrite = yes
Service-level options:
ciphers = ALL:!SSLv2:!aNULL:!EXP:!LOW:-MEDIUM:RC4:+HIGH
curve = prime256v1
session = 300 seconds
sslVersion = TLSv1 for client, all for server
stack = 65536 bytes
TIMEOUTbusy = 300 seconds
TIMEOUTclose = 60 seconds
TIMEOUTconnect = 10 seconds
TIMEOUTidle = 43200 seconds
verify = none
-- Josh