If setAutoReload(true) is set in the configuration and a client makes two or more requests in quick succession (for example, an AJAX call fired on page load), the second request occasionally returns the contents of the site home page rather than the requested page (i.e. the topmost domain page: 'www.example.com' instead of 'www.example.com/my/event').
I think this is happening because interceptors/SES.cfc doesn't do any synchronization, so the second request can hit the page while the configuration is reloading. At that moment the routes table is empty, so the interceptor thinks the request was for the home page (e.g. 'www.example.com') and interprets the path segments as extra arguments (e.g. 'my=event').
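To make the suspected race concrete, here is a minimal Java sketch (not ColdBox code; the `routes` map, `resolve`, and the "home" fallback are hypothetical stand-ins for the SES interceptor's behavior). A reload that wipes the table before repopulating it leaves a window in which any lookup falls through to the home-page default:

```java
import java.util.HashMap;
import java.util.Map;

public class RouteRace {
    // Hypothetical stand-in for the SES interceptor's route table.
    static Map<String, String> routes = new HashMap<>();

    static String resolve(String path) {
        // When no route matches, fall back to the home page,
        // treating the path segments as extra arguments.
        return routes.getOrDefault(path, "home");
    }

    public static void main(String[] args) {
        routes.put("/my/event", "eventHandler");
        System.out.println(resolve("/my/event")); // eventHandler

        // Auto-reload wipes the table first, then repopulates it.
        routes.clear();
        // A request arriving in this window sees an empty table:
        System.out.println(resolve("/my/event")); // home  <- the reported bug
        routes.put("/my/event", "eventHandler");  // reload completes
        System.out.println(resolve("/my/event")); // eventHandler again
    }
}
```

With two threads the same window exists between the wipe and the repopulation, which would explain why only some requests are affected.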
Why? Auto-reloading of configuration files during development is a useful thing to have.
We are considering it, but nothing is decided. However, it should only be used for development. The problem we run into is that, all too often, people leave it on in production.
Okay, but what does that have to do with the issue? I understand that it should only be on during development, but it should still function correctly.
The problem is that the auto reload wipes the routes and does not lock. I am not sure adding locking overhead to the logic makes sense for a development feature. That is where I am divided: we usually mention that this setting is meant for development and that it can cause side effects that a simple fwreinit resolves. So I don't know whether the added overhead of locking strategies in the CFC is worth it.
Is a single named lock per request really that much overhead? On a production server they would all be read locks anyway, so they wouldn't block each other.