OW Response Time on AS400

lerwick

Member
Hi,

We have the following setup

B7332 Coexistence, database on an AS/400 9406-S30; NT application server, and
Citrix, running mainly Financials and Job Cost.
About 70 concurrent World users and the same for OW.

In OneWorld everything seems a bit slower than World. In World I can
quantify this with response time figures from the AS/400 performance monitor.
Performance monitor doesn't report any response time figures for the SQL
requests from OW, though, so I can't easily see what the response time is. I
am wondering how people using AS/400s, or even anyone using OW, are
measuring response time? It used to be so easy with World using performance
monitor!

I started off looking at the difference between a job status inquiry in
World and OW and found that in World it took about 15 seconds, and in OW it
took about 5 minutes. So far using DBMON and job logs I have found that
World uses a permanent join file for the query and OW creates a hash table
during the inquiry. I was going to look at creating an LF as per the job log
to see if that speeds up the transaction. Maybe I need to get the OW code
changed as well to use the same join file as World? At least I understood
why it was slower after finding out it was completing the query in
different ways.
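
A rough analogy (in Python, not AS/400 code) may help show why the two
access plans behave so differently: the hash-join plan pays the cost of
building its lookup table on every single inquiry, whereas a permanent join
file is like an index built once and merely reused afterwards. The file and
field names below are made up for illustration.

```python
# Analogy only - not actual AS/400 behaviour or code.
# Sample rows: (job_number, status) in the "jobs" file,
# (job_number, description) in the file being joined to.
jobs = [(n, "status-" + str(n % 7)) for n in range(10_000)]
desc_rows = [(n, "desc-" + str(n)) for n in range(10_000)]

def join_with_temp_hash(jobs, desc_rows):
    """Build the lookup hash table per query - a cost paid on every inquiry."""
    lookup = dict(desc_rows)  # rebuilt from scratch each time the query runs
    return [(n, s, lookup[n]) for n, s in jobs]

# Built once, ahead of time - roughly what a permanent join file provides.
prebuilt = dict(desc_rows)

def join_with_prebuilt_index(jobs, index):
    """Reuse the existing index; no per-query build cost."""
    return [(n, s, index[n]) for n, s in jobs]
```

Both produce identical results; the difference is purely where the build
cost of the lookup structure is paid.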

Another odd thing seemed to happen when I sent the SQL statement using
Operations Navigator - the same query that took about 5 minutes in OW
completed in about 1 minute through Operations Navigator. The SQL package
and job log showed the SQL optimiser was creating a hash table in OW and
creating a logical file through Ops Nav SQL. I haven't figured out why the
optimiser behaves differently when the same SQL statement arrives either
way.

thanks for any thoughts,

Lerwick Harding
Alstom NZ
 
Database Monitor is the most accurate way to measure OneWorld response time
on an AS/400. The results are different in nature from performance monitor's.
You are bombarded with multiple messages per SQL statement and a great deal
more in-your-face data. The meaning of the data is not widely understood.
With the exception of some 3rd-party products, there are no analysis tools.
The results do not adjust or try to eliminate communication time. Durations
are usually reported in microseconds.
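
Once you have those microsecond durations out of the monitor data, turning
them into per-statement response time figures is simple. A sketch, assuming
records have already been reduced to (statement text, start, end) tuples in
microseconds - these are not the real DBMON outfile column names:

```python
from statistics import median

def summarize(records):
    """records: (statement_text, start_us, end_us) tuples, times in
    microseconds as DBMON reports them. Returns per-statement count,
    total elapsed ms, and median elapsed ms."""
    by_stmt = {}
    for stmt, start_us, end_us in records:
        by_stmt.setdefault(stmt, []).append((end_us - start_us) / 1000.0)  # -> ms
    return {
        stmt: {"count": len(ds), "total_ms": sum(ds), "median_ms": median(ds)}
        for stmt, ds in by_stmt.items()
    }

# Illustrative sample - table names are placeholders.
sample = [
    ("SELECT * FROM F4801", 0, 250_000),
    ("SELECT * FROM F4801", 500_000, 900_000),
    ("SELECT * FROM F0101", 0, 30_000),
]
```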

I am not aware of any tools that uniformly identify and measure the time
from clicking a button until a result comes back on a Windows application
like OneWorld. There are several screen playback tools that allow you to
start a clock when you click Okay and stop the clock when "the screen comes
back" (you decide what that means). Mercury WinRunner and MacroScheduler
both support such measurements. These applications don't do this kind of
measurement universally - you have to play a script that they understand and
put the start and stop actions into the script. There is no requirement
that the clocks on the AS/400 and the PC where the measurement takes place
are synchronized. A set of SQL statements that start at time X on the PC
may appear to start at time Y on the AS/400. This forces you to slide the
reports up and down until the actions on each side match up. The clock
granularity on a PC is reported as 1 millisecond but seems to be much larger
than that - somewhere between 10 and 60 milliseconds. Since network times
are about the same size, this makes it difficult to factor the network time
in or out of a transaction.
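
That "slide the reports up and down" step can itself be automated. A
minimal sketch, assuming you have matched up the same actions in the same
order on both sides: try candidate clock offsets and keep the one that
minimizes the total misalignment between the two event logs.

```python
def best_offset(pc_times, host_times, candidates):
    """Estimate the constant clock offset (same units as the inputs, e.g. ms)
    that best aligns host-side event times with PC-side event times.
    pc_times and host_times must list the same actions in the same order."""
    def misalignment(off):
        # Total absolute error after shifting the host log back by 'off'.
        return sum(abs((h - off) - p) for p, h in zip(pc_times, host_times))
    return min(candidates, key=misalignment)
```

For example, if the PC logs events at 0, 100, and 250 ms and the AS/400
logs the same events at roughly 5000, 5102, and 5249 ms on its own clock,
searching offsets near 5000 recovers the shift directly.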

Seems to me that the right solution is a tool composed of two parts - one
for the client and one for the server. When the tools start up, they
synchronize their clocks to one value. Each one captures start and end
events. The events are triggered either by the OneWorld application or by
Client Access - or both. The tool should be lightweight. This would allow
it to run for days or forever. Data should be sent to the AS/400 for
storage - bigger disks. Integration with sniffers and other monitoring
tools should be supported. The tool set should work with fat clients,
Citrix, and HTML/Java. There should be server-side monitors for AS/400,
RS/6000, HP-UX, Sun, NT/2000, and MVS and client-side monitors for Client
Access, SQL Plus/NET8, and MS ODBC. Analysis tools should be written to
monitor the obvious metrics such as response time and throughput and to look
for anomalies such as slow network, index builds, table scans, high CPU
utilization, and so forth. JDE should write this tool because measurements
like this would be invaluable for JDE to support their applications, no
single tool that provides all this information exists today, and no one else
can insert the necessary measurement hooks.
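
The clock-synchronization piece of such a two-part tool doesn't have to be
exotic. A sketch of the standard NTP-style round-trip estimate the client
half could use (illustrative only, not any actual JDE or IBM tool):

```python
def estimate_offset(t1, t2, t3, t4):
    """NTP-style clock offset estimate from one round trip:
    t1 = client send, t2 = server receive, t3 = server reply, t4 = client
    receive (t1/t4 on the client clock, t2/t3 on the server clock).
    A positive result means the server clock is ahead of the client's."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

def round_trip_delay(t1, t2, t3, t4):
    """Time spent on the wire, excluding server processing time."""
    return (t4 - t1) - (t3 - t2)
```

With the offset in hand, client-side and server-side start/end events can
be stamped onto one common timeline before analysis.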

By the way, I think that this tool is done and has been around for three or
four years. Perhaps someone should ask JDE ...

Richard Jackson
(speaking only for myself)
 
Lerwick

We found that tuning the AS400 memory pools had a significant effect on OneWorld response time on our S20.

Tony St. Pierre
WEL Energy
Hamilton
New Zealand
 
Lerwick,

I have added about 24 new views (we prefer to do this through OW, as it
ensures the changes do not interfere with upgrades/ESUs), and have not had
any problems, and have definitely seen improvements. For example, on sales
order history we were seeing 90-200 second response times on some of the
QBEs the users run regularly. After confirming that there was a business
reason to run these QBEs regularly, we added a couple of business views
containing the information and the response time dropped to 30-45 seconds.

Tom Davidson

OW 7332 SP 11.3, NT 4.0 SP 5, TSE 4.0 SP 4, MetaFrame 1.8, CO SQL 7.0
 