Hi All,
We set up stunnel from two Linux machines located in the DMZ to an HP-UX cluster located in the Private Zone, with a firewall in between. We use stunnel to encrypt Tomcat AJP traffic from the Apache web servers on the Linux machines to the JBoss app servers on the HP-UX cluster. The stunnel on the HP-UX cluster was started in daemon mode and listens on a cluster failover IP instead of a host IP. We found that the stunnel process on the HP-UX cluster consumed 99% CPU after running for a few days (started at Oct 26 10:26; 99% CPU was reported at Oct 29 23:00) under very low traffic. Can anyone help?
The debug log from the HP-UX side is also attached; as shown in the log there were no requests on Oct 29, but the CPU still shot up to 99%.
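For reference (this sketch is not from the original report), a portable way to snapshot the daemon's CPU share is an XPG4 "ps -o" call. The PID defaults to the current shell so the example is self-contained; in practice you would pass the PID read from stunnel's pid file. Setting UNIX95=1 enables XPG4 "ps -o" on HP-UX 11.11 and is harmless on Linux.

```shell
# Sketch: snapshot a process's CPU share by PID.
# Defaults to the current shell's PID; in practice pass the stunnel PID,
# e.g. the contents of /opt/iexpress/stunnel/var/run/stunnel.pid.
PID=${1:-$$}
# UNIX95=1 selects XPG4 behaviour for "ps -o" on HP-UX 11.11; ignored on Linux.
cpu=$(UNIX95=1 ps -o pcpu= -p "$PID")
echo "PID $PID CPU% $cpu"
```

On HP-UX, attaching tusc (if installed) to the spinning PID would also show which system call, if any, it loops in; strace is the Linux equivalent.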
Cheers, Gavin
Output of "stunnel -version" from Linux side
==================================
stunnel 4.04 on i386-redhat-linux-gnu PTHREAD+LIBWRAP with OpenSSL 0.9.7a Feb 19 2003

Global options
cert = /etc/stunnel/stunnel.pem
ciphers = ALL:!ADH:+RC4:@STRENGTH
debug = 5
key = /etc/stunnel/stunnel.pem
pid = /var/run/stunnel.pid
RNDbytes = 64
RNDfile = /dev/urandom
RNDoverwrite = yes
session = 300 seconds
verify = none

Service-level options
TIMEOUTbusy = 300 seconds
TIMEOUTclose = 60 seconds
TIMEOUTidle = 43200 seconds
Output of "uname -a" from Linux side
===========================
Linux www12-id.spica.hksarg 2.4.21-15.ELsmp #1 SMP Thu Apr 22 00:18:24 EDT 2004 i686 i686 i386 GNU/Linux
Output of "gcc -v" from Linux side
========================
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2.3/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --disable-checking --with-system-zlib --enable-__cxa_atexit --host=i386-redhat-linux
Thread model: posix
gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-39)
Output of "openssl version" from Linux side
===============================
OpenSSL 0.9.7a Feb 19 2003
"stunnel conf" at Linux side
==================
chroot = /u01/var/run/stunnel
# PID is created inside chroot jail
pid = /deptstunnel.pid
setuid = nobody
setgid = nobody

socket=r:SO_KEEPALIVE=1

# Some debugging stuff
debug = 7
output = /u01/var/stunnel/deptstunnel.log
foreground = no

# Use it for client mode
client = yes

# Service-level configuration
[App]
accept = localhost:8111
connect = hpux-service-ip:8111
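A quick sanity check for the client side (a sketch, not part of the original mail): since client mode accepts plaintext locally, you can probe whether stunnel is actually listening on the accept port from the [App] stanza. This uses bash's /dev/tcp virtual path, so it needs bash rather than plain POSIX sh.

```shell
# Bash-only sketch: probe the client-mode accept port (localhost:8111
# from the [App] stanza above). /dev/tcp/... is a bash redirection
# feature, not a real device file.
if ( exec 3<>/dev/tcp/127.0.0.1/8111 ) 2>/dev/null; then
    echo "stunnel is listening on 127.0.0.1:8111"
else
    echo "nothing listening on 127.0.0.1:8111"
fi
```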
Output of "stunnel -version" from HPUX side
================================
stunnel 4.05 on hppa2.0w-hp-hpux11.11 PTHREAD+LIBWRAP with OpenSSL 0.9.7d 17 Mar 2004

Global options
cert = /opt/iexpress/stunnel/etc/stunnel/stunnel.pem
ciphers = ALL:!ADH:+RC4:@STRENGTH
debug = 5
EGD = /var/run/egd-pool
key = /opt/iexpress/stunnel/etc/stunnel/stunnel.pem
pid = /opt/iexpress/stunnel/var/run/stunnel.pid
RNDbytes = 64
RNDoverwrite = yes
session = 300 seconds
verify = none

Service-level options
TIMEOUTbusy = 300 seconds
TIMEOUTclose = 60 seconds
TIMEOUTidle = 43200 seconds
Output of "uname -a" from HPUX side
===========================
HP-UX dptshr1 B.11.11 U 9000/800 2640250230 unlimited-user license
Output of "openssl version" from HPUX side
===============================
OpenSSL 0.9.7d 17 Mar 2004
"stunnel conf" at HPUX side
==================
cert = /u01/etc/stunnel/deptstunnel.cer
key = /u01/etc/stunnel/deptstunnel.key
chroot = /u01/var/run/stunnel/
# PID is created inside chroot jail
pid = /deptstunnel.pid
setuid = stunnel
setgid = stunnel

# Some debugging stuff
debug = 7
output = /u01/var/stunnel/deptstunnel.log
foreground = no

# Use it for client mode
#client = yes

# Service-level configuration
[App]
accept = hpux-service-ip:8111
connect = localhost:9111
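To verify the server side independently of the AJP traffic, the TLS listener on the failover address can be exercised directly with openssl s_client. This is a sketch, not from the original mail; "hpux-service-ip" is the placeholder used throughout this thread and must be replaced with the real cluster failover address.

```shell
# Sketch: hand-check the server-side TLS listener.
# "hpux-service-ip" is the placeholder from the [App] stanza; substitute
# the real cluster failover address. On success s_client prints the
# certificate chain and a "Verify return code" line.
host=${1:-hpux-service-ip}
openssl s_client -connect "$host:8111" < /dev/null \
  || echo "TLS handshake to $host:8111 failed"
```

If the handshake succeeds but AJP still stalls, the problem is past the TLS layer (stunnel-to-JBoss on localhost:9111).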
=== ENDS ===
On 2004-11-03, at 18:57, gykho@ogcio.gov.hk wrote:
> We found that the stunnel process on the HP-UX cluster consumed 99% CPU after running for a few days (started at Oct 26 10:26; 99% CPU was reported at Oct 29 23:00) under very low traffic. Can anyone help?
In stunnel 4.06 I'll include a heuristic workaround for this problem and some diagnostic code. Be patient. 4.06 is going to be released soon.
You're not the first one to report HP-UX problems. I doubt it's something specific to stunnel only.
How many CPUs does your system have?
Best regards, Mike