
9.2.1.0 HEAP and user session timeout

Michael L.

Well Known Member
Hello List. We are tentatively scheduled to ‘go live’ on application release 9.2, tools release 9.2.1.0 toward the end of this month. I’ve mostly worked in a ‘400’ / WebSphere environment and just started a full-time job in a soon-to-be SQL 2014 / WLS environment. I am hoping you can share your expertise and answer a couple of questions.

I found guidance on MOS not to exceed 60 minutes for the user session timeout, but I'm not sure whether that still applies on TR 9.2.x. Currently our user session timeout is set to 125 minutes, and I often see users reach this limit in 9.1.
Based on our configuration below, are there any pros/cons to leaving it at 125 minutes?

I have two Production Web Servers, with 3 HTML server instances on each. We have around 230 concurrent users, soon to be just over 300. There is a WLS tuning doc on MOS. It states that the minimum and maximum heap sizes should both be set to 2GB for a dedicated web server with 4GB of total physical memory. I know it’s best to analyze, measure, change, and analyze again, but as a starting point, what would you recommend for the min/max heap size based on our user count and web server configuration below?

Thanks in advance for any responses.

Application release: 9.2
Tools Release: 9.2.1.0
E1 servers OS: Windows Server 2012 R2 Datacenter
DB: SQL 2014
Two Oracle WebLogic Server 12c servers, each with 4 virtual processors at 2.90GHz and 16GB of RAM.
 

RussellCodlin

Reputable Poster
Is this an upgrade? Are you currently running 9.1 on a similar setup, or is this a new implementation?

Without a lot of detail, you look undercooked for that user count and session timeout. With just 3 instances you should be able to bump your heap size up to 4GB, or alternatively drop another 3 instances onto each virtual host. Even then I'd want at least one more virtual host added to the pool for 300 concurrent users. What sort of usage do you see? Are there a lot of grid imports going on, or just normal data entry? Do you have sales order entry in any volume, or just financials? These things will shape your load patterns.
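A minimal sketch of what that heap bump looks like, assuming you set the heap through WebLogic's standard USER_MEM_ARGS override (on Windows this normally lives in setDomainEnv.cmd; shown in sh syntax here):

```shell
# Hypothetical heap settings for one JAS managed server.
# Pinning -Xms equal to -Xmx avoids pause-prone heap resizing under load.
USER_MEM_ARGS="-Xms4g -Xmx4g"
export USER_MEM_ARGS
echo "${USER_MEM_ARGS}"
```

The exact values are only a starting point; measure with your own workload before settling on them.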

For this sort of system, have you done any automated load testing to check performance? You can annoy a lot of people very quickly if your JAS instances drop dead through memory errors or get bogged down to unusable through constant GC.
 

ice_cube210

VIP Member
Adding to what Russell said, you can easily bump up to 4GB JVMs and run 3 JVMs. The math would be something like this: 4GB for each JVM, plus 2GB for the WLS Admin Server / Node Manager etc., plus 2GB for the OS; 1 CPU per JVM and 1 CPU for the OS.

Each 4GB JVM can handle anywhere from 25 users (a conservative estimate for heavy users) to 40 users (light users). So with 6 JVMs you have capacity for between 150 and 240 users, depending on their use. If you are going to go up to 300 concurrent users, I would definitely add at least one more web server.
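The arithmetic above can be sketched as follows; the per-JVM figures (4GB heap, 25-40 users) are the estimates from this post, not hard limits:

```shell
# Rough sizing math for the setup described in the thread.
jvms_per_server=3
servers=2
heap_gb_per_jvm=4
overhead_gb=4                 # 2 GB WLS Admin/Node Manager + 2 GB OS

# Memory budget per box: 3 x 4 GB heap + 4 GB overhead = 16 GB,
# which matches the 16 GB web servers described earlier.
ram_per_server=$(( jvms_per_server * heap_gb_per_jvm + overhead_gb ))
echo "RAM per server: ${ram_per_server} GB"

# Capacity range across both servers at 25-40 users per JVM.
total_jvms=$(( jvms_per_server * servers ))
echo "capacity: $(( total_jvms * 25 ))-$(( total_jvms * 40 )) users"
```

At 300 concurrent users that range falls short, which is why a third web server comes up.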
 

glen7x

Active Member
I would recommend doing a load test and closely monitoring the system for each change that you make.
Increasing the heap size to a larger value makes garbage collection runs take longer, and you may not want that. On one of my projects we faced this when we had set the heap size to 4GB; we played with values ranging from 1.5GB to 4GB. The sweet spot for us in that setup was a heap size of 3GB.
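One way to gather the data for that kind of tuning is to turn on GC logging during your load tests. These are the stock HotSpot 7/8 flags (the era WLS 12c shipped with); the log path and the JAVA_OPTIONS placement are illustrative assumptions:

```shell
# Illustrative GC-logging flags to add to each JAS instance's JVM
# options before a load test; ./jas_gc.log is a made-up path.
GC_FLAGS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:./jas_gc.log"
JAVA_OPTIONS="${JAVA_OPTIONS} ${GC_FLAGS}"
export JAVA_OPTIONS
echo "${JAVA_OPTIONS}"
```

Comparing pause times in the resulting log at each candidate heap size is how you find a sweet spot like the 3GB one mentioned above.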
Regarding the timeout: it's common for users to request, fight for, and whatnot a higher timeout value, and in most cases the user community has its say. AFAIK, the 60-minute recommendation still holds good. With too large a timeout value you will have a lot of idle users in the system occupying server memory, and you do not want that either. Set a realistic expectation with the users.
 

Tom_Davidson

VIP Member
Corporate Audit and Corporate Security can be your friends in keeping the timeout at a reasonable value. Ours is set to 60, and the users weren't too happy, but both groups gave us their backing.

Tom
 