E9.0 Go-to-End (Go to End) Performance?

fbrammer


Active Member
I think our users are expecting way too much from the Go-to-End functionality, but I'm trying to get some guidance.

Obviously, different interfaces perform dramatically differently depending on the number of columns, business function calls, attachments, etc. We're also very aware of all the warnings around go-to-end performance and system impact. But I'm trying to get a reasonable gauge on how long it should take to bring back X number of rows in a given interface.

P09200 (search based on a specific account, date range, and doc type):
10K rows ~ 40 seconds
75K rows ~ 5 minutes
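As a quick sanity check (my own arithmetic, not any official guidance), both of those measurements work out to the same per-row throughput, which suggests the time scales roughly linearly with row count:

```python
# Back-of-the-envelope check on the two P09200 measurements above
# (numbers taken from the post; the linearity inference is mine).
timings = {10_000: 40, 75_000: 5 * 60}  # rows -> seconds

rates = {rows: rows / secs for rows, secs in timings.items()}
print(rates)  # both work out to 250.0 rows/second
```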

I realize there are countless factors affecting such performance. I'm just trying to get an idea of what is good/bad/reasonable/normal for go-to-end when the system is returning thousands (or tens of thousands) of rows. If most other systems can return 50K rows in 10-30 seconds, then we have something to work on, but even Oracle Support isn't finding anything wrong.

We're on JDE E1 9.0, Tools 9.1.5, Oracle 11g SE, and WebLogic 12.3, both on Oracle Linux hosted in OCI (all related path code nodes are in the same AD). E1 servers are on Windows. No servers are straining at all. (We're going to 9.2 soon.)

Any insight is appreciated.
 
I'm more on the functional side, but from my past experience, I don't think 40 seconds for 10K records on P09200 is unreasonable. I've never requested 75K rows on any application so I can't really speak to that.

I would ask what they are going to do with it, though, once all the records are returned. Most of those screens are pretty unwieldy once you get to that size, so doing any analysis is difficult. In the past we've developed a UBE to export the data to .csv, where they can deal with it more easily.

If they're looking to get totals, JDE has Real Time Summarization on that program. I think this is available in 9.1.5, but you may need an ESU. But this gives you the totals without having to do a go-to-end.
 
I suspect that quite a lot of that time is spent on the client - in the browser. So if your client computer is really fast and you are using a faster browser, it would speed it up quite a bit. It's not only the server performance...
 
As you have already acknowledged, this can be a bit of a "how long is a piece of string" question. To start, your performance results for P09200 are not unusual in my experience, and they look to be linear, which tends to eliminate concerns at this point around JVM memory bloat on the JAS server due to the volume of data being pulled back (accounting for other users doing the same or similar activities at the same time in their sessions). With enough users doing large grid fetches via applications or Data Browser, you can start seeing JVM memory pressure or even heap dumps. Monitoring your JVM memory usage is always a good idea in any case.

When Find is clicked on this application, here is a rough description of what the system does:

1) The driving query for the underlying JDE business view serving the grid is issued, and the system waits on DB query execution time for the initial results.

2) The first page of results is pulled in from the DB result set. For each database row, grid event rule logic is fired, and within this logic there can be a lot of processing: database lookups for associated fields such as UDCs (some of this going through cache, of course), and business function calls to validate and format data. If multicurrency is turned on, there will likely be additional processing for each ledger record. Some of this processing is done on the web server, where the functions are "plugins" (such as decimal formatting), while the majority are requests to the enterprise (logic) server for business function execution.

3) When scroll-to-end is initiated, the row fetch/grid processing above is repeated until the DB result set is fully read.
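To illustrate the shape of that loop (my own pseudocode sketch, not JDE source; all function names here are stand-ins), the point is that every fetched row pays event-rule, UDC-lookup, and business-function overhead on top of the raw DB fetch, which is why go-to-end cost grows with row count:

```python
# Illustrative sketch of per-row grid processing during go-to-end.
# Names are hypothetical stand-ins for the real JDE internals.

def lookup_udc_in_db(code):
    # Stand-in for a UDC description lookup against the database
    return f"description of {code}"

def run_row_event_rules(row):
    # Stand-in for grid row event-rule logic firing once per row
    row["formatted"] = True

def call_business_function(row):
    # Stand-in for a round-trip to the enterprise (logic) server
    row["validated"] = True

def fetch_grid_to_end(result_set, page_size=100):
    udc_cache = {}                      # UDC results are partly cached
    rows_out = []
    for start in range(0, len(result_set), page_size):
        for row in result_set[start:start + page_size]:
            run_row_event_rules(row)    # fires for every row
            code = row["udc_code"]
            if code not in udc_cache:   # DB hit only on a cache miss
                udc_cache[code] = lookup_udc_in_db(code)
            row["udc_desc"] = udc_cache[code]
            call_business_function(row) # per-row BSFN call
            rows_out.append(row)
    return rows_out

# Example: 250 rows sharing a handful of UDC values
rows = [{"udc_code": f"0{i % 5}"} for i in range(250)]
out = fetch_grid_to_end(rows)
print(len(out))  # 250 — every row is processed individually
```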

Based on the above, I hope it is clear that you cannot compare the fetch performance of different grids, since they use different business views and have more or less grid fetch event rule and business function logic. From a tuning perspective, I would want my database returning results as fast as possible and my business function execution as fast as possible (assuming that money can buy the underlying hardware to deliver the best possible performance from the JDE architecture and code).

As Alex has pointed out, there can also be an element of browser performance. In my experience it is generally not a primary source of slowdown, barring incompatible browser versions or interactions with other browser plugins or anti-virus software. Certainly, faster JavaScript execution is going to help.

Since you are already on Tools 9.1.5, I don't think there will be massive improvements with the 9.2 base; the grid performance improvements were delivered in Tools 9.1.4. An interesting read that drills into what I have described above around grid fetch and business function execution, both on the enterprise server and via Java JAS plugins, is the support document "E1: GRID: JAS: JD Edwards EnterpriseOne Tools Release 9.1.4 Grid Performance Improvement (Doc ID 1615005.1)".
 
It's not just P09200. We have three interfaces where users regularly do go-to-end. One of the other two is Item Availability, which I think we have mitigated with a custom summary interface; the last is Sales Order Entry. Unfortunately, the users come at the Sales Order queries from so many angles that it's hard to create a summary interface for them, and it's the slowest of all. We have a custom SOE interface, and there are lots of function calls and attachments...

What I'm trying to gather is are we as good as we can get from a server/config "tuning" perspective?
 
Your performance results are reasonable based on my experience, answering this broadly as a generic scroll-to-end question, independent of any specific application or dataset.

If you see large differences in performance tied to QBE line criteria, then I would definitely look at the database: query plans and index use. I had a custom maintenance planning screen where work orders and related information were being filtered in 25 different ways through saved queries. It took months to come up with the right combination of indexes, stored outlines, and other DB tuning to give relatively consistent fetch performance. We had already confirmed that we were not seeing bottlenecks in JAS server JVM utilization or Call Object kernels. In a virtualized environment, which is pretty much the norm these days, moving the VM hosts to higher clock speeds (at significant cost) also improved business function execution overall. In a cloud environment there is only so much you have control of or visibility into.
 
To the point on database performance: I've captured the SQL with parameters and executed the same SQL through a client/server-style SQL interface, and it takes 1-2 seconds to bring back all the records in any given example. I should have mentioned that before. I'm pretty sure we're as good as we can get, and I just need to put this back on the dev team to develop better tools for the users.
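Taking those numbers at face value (my own arithmetic), the raw SQL accounts for only a small fraction of the 40 seconds the 10K-row fetch takes, so the bulk of the time is per-row processing outside the database:

```python
# Rough split of where the ~40 seconds for 10K rows goes, given the
# raw SQL alone returns in ~1-2 seconds (numbers from the posts above).
total_secs = 40.0
db_secs = 2.0   # upper end of the 1-2 second SQL-only measurement

middleware_secs = total_secs - db_secs
print(f"{middleware_secs / total_secs:.0%}")  # 95% spent outside the DB
```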
 
> ... there can also be an element of browser performance ...

Thanks, Justin. It appears I was remembering how it behaved long ago; I had another look, and the browser really did not have much to process now, so those enhancements must have completely removed this as a potential issue.
 