Mantis - Resin
Viewing Issue Advanced Details
ID: 0003872    Severity: major    Reproducibility: always
Date Submitted: 02-03-10 11:42    Last Update: 02-17-10 14:04
Reporter: chiefgeek
Assigned To: ferg
Priority: normal
Status: closed    Product Version: 4.0.1
Resolution: fixed
Projection: none
ETA: none    Fixed in Version: 4.0.4
0003872: session data is replicated between cluster instances in resin-data dir when persistent sessions are disabled
We do not use session data for anything, yet it has become very difficult to restart a Resin instance without stopping all instances and deleting the db files in resin-data. If we do not delete the db files in this directory, it can take an hour or more for the app to start up.

We use apache with multiple CauchoHost and CauchoBackup entries.
For example:
--- begin apache.conf snippet ---
<Location /dealfinder/>
  CauchoHost x.x.x.1 6841
  CauchoHost x.x.x.2 6841
  CauchoHost x.x.x.3 6841
  CauchoHost x.x.x.4 6841
  CauchoBackup x.x.1.1 6841
  CauchoBackup x.x.1.2 6841
  CauchoBackup x.x.1.3 6841
  CauchoBackup x.x.1.4 6841
</Location>
--- end apache.conf snippet ---
Environment variables:
MYIP=<ip address of machine>

Then in resin.xml on each host there is a cluster definition as follows:
--- begin resin.xml snippet ---
    <cluster id="dealfinder">
        <!-- sets the content root for the cluster, relative to resin.root -->
        <root-directory>.</root-directory>

        <!-- defaults for each server, i.e. JVM -->
        <server-default>
            <!-- The http port -->
            <http address="*" port="7841" />
            <jvm-arg>-Xmx1024m</jvm-arg>
            <jvm-arg>-XX:MaxPermSize=192m</jvm-arg>
            <jvm-arg>-Dcom.sun.management.jmxremote.port=8841</jvm-arg>
            <watchdog-arg>-Dcom.sun.management.jmxremote</watchdog-arg>
        </server-default>

        <!-- define the servers in the cluster -->
        <server id="dealfinder" address="${MYIP}" port="6841" watchdog-port="6700">
            <!-- server-specific configuration, e.g. jvm-arg goes here -->
        </server>

        <!-- the default host, matching any host name -->
        <host id="" root-directory=".">
            <access-log path="instances/dealfinder/access.log"
                        format='%h %l %u %t "%r" %s %b "%{Referer}i" "%{User-Agent}i"' rollover-period="1W" />
            <web-app id="/" root-directory="webapps/dealfinder/ROOT" archive-path="webapps/dealfinder/ROOT.war">
                <listener>
                    <listener-class>com.caucho.jsp.JspPrecompileListener</listener-class>
                    <init>
                        <extension>jsp</extension>
                        <extension>jspx</extension>
                        <extension>xtp</extension>
                    </init>
                </listener>
            </web-app>

            <web-app id="/resin-admin" root-directory="${resin.root}/doc/admin">
                <prologue>
                    <resin:set var="resin_admin_external" value="false" />
                    <resin:set var="resin_admin_insecure" value="true" />
                </prologue>
            </web-app>
            <web-app id="/resin-doc" root-directory="${resin.root}/doc/resin-doc" />

        </host>
    </cluster>
--- end resin.xml snippet ---
  
Each cluster definition lists only one host: itself.

Notes
(0004408)
ferg   
02-03-10 12:57   
Do you have the <use-persistent-store/> in the session-config in a <web-app-default>? I just checked here and the default is off, but the sample resin.xml includes a <web-app-default> which enables the persistent sessions.

The <persistent-store> itself doesn't really do anything currently. That top-level configuration is always enabled, because Resin shares information across the cluster outside of the session store. (But that data is very small, so shouldn't create a large *.db file.)
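For reference, this is the shape of the sample-resin.xml block being described — a sketch based on ferg's description, so the exact placement and surrounding elements are an assumption, not a verified excerpt:

--- begin sketch ---
<!-- sketch: a <web-app-default> of this form, inherited by every
     web-app in the cluster, would turn persistent sessions on even
     though the compiled-in default is off (verify against your own
     resin.xml) -->
<cluster id="dealfinder">
    <web-app-default>
        <session-config>
            <!-- this is the switch being asked about -->
            <use-persistent-store/>
        </session-config>
    </web-app-default>
    <!-- servers, hosts, web-apps as in the snippet above -->
</cluster>
--- end sketch ---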
(0004409)
chiefgeek   
02-03-10 13:03   
The docs say to put this in to enable it:
<use-persistent-store="true"/>
That causes a parse error. Originally I had nothing and then I tried this with no change:

       <use-persistent-store>false</use-persistent-store>
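For comparison, the attribute form quoted from the docs above is indeed invalid XML for this element; the element form with a boolean body is the accepted syntax. A sketch of placing the override inside the web-app itself, so it wins over any inherited <web-app-default> — the placement here is our assumption:

--- begin sketch ---
<!-- sketch: per-web-app override of an inherited session default;
     use-persistent-store is an element, not an attribute -->
<web-app id="/" root-directory="webapps/dealfinder/ROOT">
    <session-config>
        <use-persistent-store>false</use-persistent-store>
    </session-config>
</web-app>
--- end sketch ---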
(0004410)
chiefgeek   
02-03-10 13:30   
I noticed the following in the log with debug=finer

[10-02-03 16:16:36.791] {main} Database[null] active
[10-02-03 16:16:36.798] {main} Database[/usr/local/resin/resin-data]: SELECT id, value, cache_id, flags, expire_timeout, idle_timeout, lease_timeout, local_read_timeout, update_time, server_version, item_version FROM resin_mnode_dealfinder WHERE 1=0
[10-02-03 16:16:36.873] {main} Database[/usr/local/resin/resin-data]: SELECT MAX(server_version) FROM resin_mnode_dealfinder
[10-02-03 16:16:36.874] {main} Database[/usr/local/resin/resin-data]: SELECT MAX(update_time) FROM resin_mnode_dealfinder
[10-02-03 16:16:36.875] {main} Database[/usr/local/resin/resin-data]: DELETE FROM resin_mnode_dealfinder WHERE update_time + 5 * idle_timeout / 4 < ? OR update_time + expire_timeout < ?
[10-02-03 16:16:36.882] {main} Database[/usr/local/resin/resin-data]: SELECT id, expire_time, data FROM resin_data_dealfinder WHERE 1=0
[10-02-03 16:16:36.885] {main} Database[null] active
[10-02-03 16:16:36.885] {resin-39} Database[/usr/local/resin/resin-data]: VALIDATE resin_data_dealfinder
[10-02-03 16:16:36.886] {resin-39} Database[/usr/local/resin/resin-data]: SELECT value, resin_oid FROM resin_mnode_dealfinder WHERE resin_oid > ?
[10-02-03 16:16:36.886] {resin-39} Database[/usr/local/resin/resin-data]: UPDATE resin_data_dealfinder SET expire_time=? WHERE id=?
[10-02-03 16:16:36.887] {resin-39} Database[/usr/local/resin/resin-data]: DELETE FROM resin_data_dealfinder WHERE expire_time < ?

and here is a ls -l from a production server
-rw-rw-r-- 1 root root 192K 2010-02-02 08:58 resin_data_flights.db
-rw-rw-r-- 1 root root 192K 2010-01-29 18:41 resin_data_json.db
-rw-rw-r-- 1 root root 123M 2010-02-03 16:27 resin_mnode_flights.db
-rw-rw-r-- 1 root root 115M 2010-02-03 16:27 resin_mnode_json.db
-rw-rw-r-- 1 root root 128K 2010-02-02 08:58 temp_file_flights
-rw-rw-r-- 1 root root 128K 2010-01-29 18:41 temp_file_json
(0004411)
ferg   
02-03-10 13:43   
That log is normal, because the store needs to GC/delete expired entries periodically.

However, the size of the mnode files is strange, since you're not using the persistent store. The data files are about what I'd expect, but the mnodes should be about the same size (about 1M or less). That helps a bit.
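The logged DELETE statement encodes the expiry rule behind this periodic GC. A minimal sketch of that predicate in Python — the names are ours, derived from the query text, not from Resin source:

```python
def is_expired(update_time, idle_timeout, expire_timeout, now):
    """Mirror of the WHERE clause in the logged DELETE:

        update_time + 5 * idle_timeout / 4 < now
            OR update_time + expire_timeout < now

    The 5/4 factor gives an idle entry a 25% grace period past its
    idle timeout before the GC pass removes it; the expire_timeout
    term is the hard upper bound regardless of activity.
    """
    idle_deadline = update_time + 5 * idle_timeout / 4
    hard_deadline = update_time + expire_timeout
    return idle_deadline < now or hard_deadline < now

# entry last updated at t=0 with a 1000-unit idle timeout survives
# until t=1250 under the 5/4 grace rule
print(is_expired(0, 1000, 10_000, 1200))  # False: still inside idle grace
print(is_expired(0, 1000, 10_000, 1300))  # True: idle deadline passed
```

This is why the log lines themselves are expected: the pass runs even when no sessions are being stored.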
(0004432)
ferg   
02-17-10 14:04   
server/01o4