Mantis Bugtracker

Viewing Issue Advanced Details
ID:               0003872
Category:         [Resin]
Severity:         major
Reproducibility:  always
Date Submitted:   02-03-10 11:42
Last Update:      02-17-10 14:04
Reporter:         chiefgeek
View Status:      public
Assigned To:      ferg
Priority:         normal
Status:           closed
Resolution:       fixed
Projection:       none
ETA:              none
Product Version:  4.0.1
Fixed in Version: 4.0.4
Summary 0003872: session data is replicated between cluster instances in resin-data dir when persistent sessions are disabled
Description We do not use session data for anything, yet it has become very difficult to restart a Resin instance without stopping all instances and deleting the db files in resin-data. If we do not delete the db files in this directory, it can take an hour or more for the app to start up.

We use Apache with multiple CauchoHost and CauchoBackup entries.
For example:
--- begin apache.conf snippet ---
<Location /dealfinder/>
  CauchoHost x.x.x.1 6841
  CauchoHost x.x.x.2 6841
  CauchoHost x.x.x.3 6841
  CauchoHost x.x.x.4 6841
  CauchoBackup x.x.1.1 6841
  CauchoBackup x.x.1.2 6841
  CauchoBackup x.x.1.3 6841
  CauchoBackup x.x.1.4 6841
--- end apache.conf snippet ---
env variables (set on each host):
 MYIP=<ip address of machine>

Then in resin.xml on each host there is a cluster def as follows:
--- begin resin.xml snippet ---
    <cluster id="dealfinder">
        <!-- sets the content root for the cluster, relative to resin.root -->

        <!-- defaults for each server, i.e. JVM -->
        <!-- The http port -->
        <http address="*" port="7841" />

        <!-- define the servers in the cluster -->
        <server id="dealfinder" address="${MYIP}" port="6841" watchdog-port="6700">
            <!-- server-specific configuration, e.g. jvm-arg goes here -->
        </server>

        <!-- the default host, matching any host name -->
        <host id="" root-directory=".">
            <access-log path="instances/dealfinder/access.log"
                        format='%h %l %u %t "%r" %s %b "%{Referer}i" "%{User-Agent}i"'
                        rollover-period="1W" />
            <web-app id="/" root-directory="webapps/dealfinder/ROOT" archive-path="webapps/dealfinder/ROOT.war" />

            <web-app id="/resin-admin" root-directory="${resin.root}/doc/admin">
                <resin:set var="resin_admin_external" value="false" />
                <resin:set var="resin_admin_insecure" value="true" />
            </web-app>
            <web-app id="/resin-doc" root-directory="${resin.root}/doc/resin-doc" />
        </host>
    </cluster>
--- end resin.xml snippet ---
Each cluster def only lists one host, itself.

- Notes
02-03-10 12:57 (ferg)

Do you have the <use-persistent-store/> in the session-config in a <web-app-default>? I just checked here and the default is off, but the sample resin.xml includes a <web-app-default> which enables the persistent sessions.

The <persistent-store> itself doesn't really do anything currently. That top-level configuration is always enabled, because Resin shares information across the cluster outside of the session store. (But that data is very small, so shouldn't create a large *.db file.)
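For reference, a minimal sketch of the kind of override being discussed here, assuming Resin 4's session-config accepts a boolean <use-persistent-store> value (the placement inside <web-app-default> follows the note above; the exact element form is an assumption, not taken from this report):

```xml
<!-- hypothetical sketch: disable persistent sessions for all web-apps,
     overriding the <web-app-default> shipped in the sample resin.xml -->
<web-app-default>
    <session-config>
        <use-persistent-store>false</use-persistent-store>
    </session-config>
</web-app-default>
```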
02-03-10 13:03 (chiefgeek)

The docs say to put this in to enable it, but that causes a parse error. Originally I had nothing, and then I tried this with no change:

02-03-10 13:30 (chiefgeek)

I noticed the following in the log with debug=finer:

[10-02-03 16:16:36.791] {main} Database[null] active
[10-02-03 16:16:36.798] {main} Database[/usr/local/resin/resin-data]: SELECT id, value, cache_id, flags, expire_timeout, idle_timeout, lease_timeout, local_read_timeout, update_time, server_version, item_version FROM resin_mnode_dealfinder WHERE 1=0
[10-02-03 16:16:36.873] {main} Database[/usr/local/resin/resin-data]: SELECT MAX(server_version) FROM resin_mnode_dealfinder
[10-02-03 16:16:36.874] {main} Database[/usr/local/resin/resin-data]: SELECT MAX(update_time) FROM resin_mnode_dealfinder
[10-02-03 16:16:36.875] {main} Database[/usr/local/resin/resin-data]: DELETE FROM resin_mnode_dealfinder WHERE update_time + 5 * idle_timeout / 4 < ? OR update_time + expire_timeout < ?
[10-02-03 16:16:36.882] {main} Database[/usr/local/resin/resin-data]: SELECT id, expire_time, data FROM resin_data_dealfinder WHERE 1=0
[10-02-03 16:16:36.885] {main} Database[null] active
[10-02-03 16:16:36.885] {resin-39} Database[/usr/local/resin/resin-data]: VALIDATE resin_data_dealfinder
[10-02-03 16:16:36.886] {resin-39} Database[/usr/local/resin/resin-data]: SELECT value, resin_oid FROM resin_mnode_dealfinder WHERE resin_oid > ?
[10-02-03 16:16:36.886] {resin-39} Database[/usr/local/resin/resin-data]: UPDATE resin_data_dealfinder SET expire_time=? WHERE id=?
[10-02-03 16:16:36.887] {resin-39} Database[/usr/local/resin/resin-data]: DELETE FROM resin_data_dealfinder WHERE expire_time < ?

and here is an ls -l from a production server:
-rw-rw-r-- 1 root root 192K 2010-02-02 08:58 resin_data_flights.db
-rw-rw-r-- 1 root root 192K 2010-01-29 18:41 resin_data_json.db
-rw-rw-r-- 1 root root 123M 2010-02-03 16:27 resin_mnode_flights.db
-rw-rw-r-- 1 root root 115M 2010-02-03 16:27 resin_mnode_json.db
-rw-rw-r-- 1 root root 128K 2010-02-02 08:58 temp_file_flights
-rw-rw-r-- 1 root root 128K 2010-01-29 18:41 temp_file_json
02-03-10 13:43 (ferg)

That log is normal, because the store needs to GC/delete expired entries periodically.

However, the size of the mnode files is strange, since you're not using the persistent store. The data files are about what I'd expect, but the mnodes should be about the same size (about 1M or less). That helps a bit.
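The size comparison described above can be spot-checked from the shell; a hypothetical helper (the function name, the resin-data path, and the ~1M threshold are illustrative, not part of this report):

```shell
# Hypothetical helper: print *.db files under a resin-data directory
# that exceed a byte threshold, to spot runaway mnode files.
find_oversized_db() {
    dir=$1      # e.g. /usr/local/resin/resin-data
    limit=$2    # threshold in bytes, e.g. 1048576 for ~1M
    # 'c' suffix = exact bytes; '+' = strictly greater than
    find "$dir" -name '*.db' -size +"$limit"c
}
```

For example, find_oversized_db /usr/local/resin/resin-data 1048576 would list only the db files past roughly 1M, such as the 123M and 115M mnode files shown in the ls -l output above.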
02-17-10 14:04 (ferg)


- Issue History
Date Modified Username Field Change
02-03-10 11:42 chiefgeek New Issue
02-03-10 12:57 ferg Note Added: 0004408
02-03-10 13:03 chiefgeek Note Added: 0004409
02-03-10 13:30 chiefgeek Note Added: 0004410
02-03-10 13:43 ferg Note Added: 0004411
02-17-10 14:04 ferg Note Added: 0004432
02-17-10 14:04 ferg Assigned To  => ferg
02-17-10 14:04 ferg Status new => closed
02-17-10 14:04 ferg Resolution open => fixed
02-17-10 14:04 ferg Fixed in Version  => 4.0.4
