Full Package Build Performance

Jeremy M.
I compared the Full Package build times for the Client - Spec Build section on two different Deployment servers and ran into some extreme differences. Both are on E1 9.0 and pretty much as out of the box as it gets. Machine1 took 1:47:59 and Machine2 only took 00:13:41.

Machine1 is a virtual server: 6-core 2.71 GHz (single AMD Opteron) with 6 GB RAM.

Machine2 is a physical server: 8-core 2.00 GHz (dual quads) with 12 GB RAM.

Every other aspect of the Client build was pretty close with the exception of the Compression section.

Any ideas as to why the virtual server performs so poorly on the Spec Build and Compression sections of a Full Client build?
 
No frikkin way

Machine 2 only took 13 minutes to do a full package build ?

That is pretty insane. I've never heard of a full package completing in that time.

OK - so according to your test, machine 1 and machine 2 both took just a few minutes (10 minutes or so) to build the package, but machine 1 took roughly an hour and a half to do the compression while machine 2 took around 3 minutes? Or am I missing something here?

Remember, a virtual machine is slow at local-disk I/O if it's not configured correctly. It might be that you need to put your TEMP folder (where the compression temporary file is being built) on something a lot faster than, say, a virtual drive.

Secondly, your configurations aren't exactly the same. Machine1 is 6-core with only 6 GB of memory; Machine2 is 8-core with 12 GB. My guess is that both machines are running a 64-bit OS? Or is it that only one of them is? Which OS? Is there anti-virus installed on, say, Machine1? What does Task Manager say the CPU is mostly pegged on for Machine1 during this process? What other software is running while the compression is running?

Could be a number of things. Let's see the answers to the above - but congratulations on a 13-minute full package build!
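If you want to rule disk speed in or out, a quick sketch like the following (Python; the sizes are arbitrary) measures raw write throughput of the TEMP folder. Run it on both machines and compare the numbers:

```python
import os
import tempfile
import time

def temp_write_speed(size_mb=256, chunk_mb=8):
    """Write size_mb of zeroes to the TEMP folder, fsync, and return MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    path = os.path.join(tempfile.gettempdir(), "io_bench.tmp")
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data all the way to disk
    elapsed = time.time() - start
    os.remove(path)
    return size_mb / elapsed

print(f"TEMP write throughput: {temp_write_speed():.1f} MB/s")
```

If the VM's number is a fraction of the physical box's, that would go a long way toward explaining the compression gap.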
 
[ QUOTE ]
I compared two different Deployment server's Full Package build for the Client - Spec Build section. I ran into some extreme differences. Both are on E1 9.0 and pretty much as out of the box as it gets. Machine1 took 1:47:59 and Machine2 only took 00:13:41.

Machine1 is a virtual server, 6 Core 2.71 GHz (Single AMD Opteron), and 6GB Ram.

Machine2 is a physical server, 8 Core 2.00 GHz (Dual Quads), and 12GB Ram.

Every other aspect of the Client build was pretty close with the exception of the Compression section.

Any ideas as to why the virtual server performs so poorly on the Spec Build and Compression sections of a Full Client build?

[/ QUOTE ]

Any difference in the network connections to the database server?

I gotta say that 13 minutes sounds too low.
 
My guess is that busbuild didn't actually run on Machine2 - that it skipped the entire compile and copied in the bin32 and lib32 from the deployment server. Check your \package\work\busbuild.log files and make sure they're IDENTICAL.

If it skipped the compile, that would absolutely explain why it was a LOT faster !
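One quick way to check is to copy both logs somewhere local and diff them - a minimal sketch (the local paths in the comment are hypothetical):

```python
import difflib

def compare_build_logs(log_a, log_b):
    """Return unified-diff lines between two build logs (empty list if identical)."""
    with open(log_a) as fa, open(log_b) as fb:
        return list(difflib.unified_diff(
            fa.readlines(), fb.readlines(), fromfile=log_a, tofile=log_b))

# Hypothetical local copies of each machine's \package\work\busbuild.log:
# diff = compare_build_logs(r"C:\temp\m1_busbuild.log", r"C:\temp\m2_busbuild.log")
# print("IDENTICAL" if not diff else "".join(diff[:40]))
```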
 
First, let me elaborate on my stats. I'm only talking about the Spec Build section of a Client Full Package. The entire client full package took 03:22:13 on Machine1 and 01:14:47 on Machine2. I broke the Full Client package build process down into different areas based on the ClientPkgBuild.log (Initialization, Spec Build, BUS Build, Package Report, Compression, and Calculation). The areas with large differences were the Spec Build and Compression sections.

Machine1 Spec Build – 01:47:59
Machine1 Compression – 00:48:20
Machine2 Spec Build – 00:13:41
Machine2 Compression – 00:19:52
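For anyone who wants to reproduce this breakdown, the durations are just the differences between each section's start and end timestamps in ClientPkgBuild.log - a small sketch (the timestamps below are hypothetical placeholders, not the real log values):

```python
from datetime import datetime

def section_duration(start, end, fmt="%H:%M:%S"):
    """Seconds elapsed between two hh:mm:ss timestamps (same-day sections assumed)."""
    return int((datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds())

# Start/end times read manually from the log (hypothetical values)
sections = {
    "Spec Build":  ("08:00:00", "09:47:59"),
    "Compression": ("09:48:10", "10:36:30"),
}
for name, (start, end) in sections.items():
    s = section_duration(start, end)
    print(f"{name}: {s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}")
```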

Here are the machine OS specs:
Machine1 – Windows Server 2008 SP2 Standard 64-bit
Machine2 – Windows Server 2003 SP2 Enterprise 32-bit

Neither machine has anti-virus installed yet, and no other software is running at the time of the package build. I know these machines don't have exactly the same specs, but I would expect the VM to perform more similarly to the physical machine. I am trying to pinpoint the difference that makes the biggest performance impact.

My understanding is that the client build is performed locally on the deployment server and no network traffic is used; network traffic with the database and enterprise server only comes into play during the server build. Is that correct?

I'll look into whether the entire compile was skipped on Machine2 and update you guys.

Thanks everyone for the responses!
 
Jeremy,

I believe that the local package build also accesses the database. I also have a couple of questions/comments.

Machine 2 has a fair bit more grunt (8 cores compared with 6, and twice the RAM). Machine 1, being a VM, would have to "share" at least some resources. Are there any other VMs on the same physical server as machine 1?
 
Indeed, 13 min is truly impressive - if, of course, it did do a full package build. Maybe it failed to do some parts of it? It's so short that I really doubt this number. I would double-check that it copied all the data and compiled all the functions.

And yes, all VMs suck for everything and at all times, and in my opinion are not generally suitable for production systems.
 
Oops, I didn't read the end of this conversation.

Enterprise edition would likely have much better disk caching, plus physical vs. virtual is a huge difference.

The number of CPUs will have no bearing: all these processes run in a single thread. Only the compile part, and only on the enterprise server, would normally run many threads (unless configured not to).

Memory size will not matter either beyond 1 or 2 GB, but its speed would, and by a lot. If your memory runs at 333 on S1 and at 800 or more on S2, then I wouldn't be surprised in the least. Or, indeed, if there was something else aggressively using either memory or disk on the physical host box of the VM server.
 
Alex,

Thanks for the information, I meant to say I was not sure whether the extra "grunt" would actually be used or not.
 
Here is the answer to the compile question: Machine1 and Machine2 are both compiling. Since the compile process runs at the same time as the Spec Build, the times below overlap the Spec Build; Machine1 then took an additional 30 minutes after the Spec Build completed, and Machine2 an additional 36 minutes.

Machine1 – 02:17:00
Machine2 – 00:49:00

Also, to answer peterbruce: yes, other machines are running on the physical server; however, it is extremely underutilized so far, and the resources I mentioned are carved out specifically for the deployment server.

These two servers are performing the entire package build process successfully. I have triple-checked every log I can think of. To reiterate, the 13 minutes on Machine2 is only for the Client Spec Build; the entire client build from start to finish took roughly 1 hour and 15 minutes.

I will double check the RAM speed and update this post tomorrow. I am pretty positive that the RAM speed is not different by that much.

Very interesting… Thanks everyone for the input so far.
 
It's physical disk vs. virtual disk, as I mentioned before.

It was a little crazy stating you were doing a 13-minute package build - glad to see you clarified what you're actually doing. A 1-hour package build is far more reasonable.

The time it takes to write to disk and confirm the data has been written is almost certainly where all your performance is going. Remember, the virtual machine has a "software-based" adapter, whereas the physical machine can pass much of its I/O to an external physical adapter.

Secondly, what is the specification of your drives (where the spec DB is being written)? More than likely you configured your virtual machine as a single big chunk of disk from the storage resource, whereas on your physical machine you created a RAID array out of physical drives.

A virtual machine is naturally going to be far slower at large I/O than a physical machine. Usually, when you implement a virtual machine, it is somewhat of an upgrade compared to its predecessor physical machine. Here, however, you're comparing it against a machine even faster than the virtual host - so naturally it's going to be slower.
 
Jeremy,

It is a known issue that the package compression process on VMware takes three times as long as on a physical build machine. I have seen this myself with some customers and tried to find the answer myself. We reported this to our VM expert, but even he couldn't figure out the problem.
 
Jeremy,

I'm attaching an Excel sheet of the analysis I did on package builds on VM vs. physical. I'm sure you're in the same situation as me. The analysis was done for the client package process with 8.11 and tools release 8.97.

Regards,
Joel
 

Attachments

  • 153099-Package_performance.xls (14.5 KB)

Because this is really not a problem as such - it's simply a way of life and is totally to be expected, because all VMs suck for everything and at all times, no matter the brand or underlying hardware. Including the much-vaunted LPARs - they are all subject to the exact same rules. There are no miracles.

They are very good indeed for test environments, though. I practically live in VMs ;-)
 
Well, thanks everyone for the responses. I am not replacing a physical server with a VM; I am simply comparing the performance of two separate installations. I wanted to find out whether increasing the resources of the VM would really save me 1.5 hours on the client package build process, or whether it was really a VM vs. physical statistic. I did expect it to be slightly slower given the slight resource difference, but I was not expecting it to more than double the time to build a full client package. Sorry for the initial confusion about 13-minute full package builds. I stated: "I compared two different Deployment servers' Full Package build for the Client - Spec Build section." I guess I should have clarified what I meant by the Spec Build section.

Alex, thanks for the repetitive "VMs suck" comments. :)


Joel, thanks for your stats spreadsheet. That pretty much says it all.
 
[ QUOTE ]
Jeremy,

It is a known issue that the package compression process on VMware takes three times as long as on a physical build machine. I have seen this myself with some customers and tried to find the answer myself. We reported this to our VM expert, but even he couldn't figure out the problem.

[/ QUOTE ]

I've often wondered if it is worth doing package compression any more. Back in the days of 10 Mbps Ethernet it made sense. Now I am not so sure the time saved during deployment is worth the time spent during compression, especially if VM'ing deployment servers is getting more common.
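A back-of-envelope way to frame that trade-off (all numbers below are hypothetical - plug in your own package size, link speed, compression time, and ratio):

```python
def deploy_time(package_gb, link_mbps, compress_secs=0.0, ratio=1.0):
    """Total seconds to deploy once: optional compression time plus network transfer."""
    transfer_gb = package_gb * ratio            # size on the wire after compression
    return compress_secs + transfer_gb * 8 * 1024 / link_mbps

# Hypothetical 4 GB package; compression halves it but takes 45 minutes to build
raw    = deploy_time(4, 100)                            # 100 Mbps, no compression
zipped = deploy_time(4, 100, compress_secs=2700, ratio=0.5)
print(f"uncompressed: {raw / 60:.0f} min, compressed: {zipped / 60:.0f} min")
```

The wrinkle is that the compression cost is paid once per package while the transfer saving repeats for every client deployed, so with these assumed numbers compression only pays off when you deploy to many fat clients over a slow link.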
 
[ QUOTE ]
[ QUOTE ]
Jeremy,

It is a known issue that the package compression process on VMware takes three times as long as on a physical build machine. I have seen this myself with some customers and tried to find the answer myself. We reported this to our VM expert, but even he couldn't figure out the problem.

[/ QUOTE ]

I've often wondered if it is worth doing package compression any more. Back in the days of 10 Mbps Ethernet it made sense. Now I am not so sure the time saved during deployment is worth the time spent during compression, especially if VM'ing deployment servers is getting more common.

[/ QUOTE ]

I agree on "no compression," especially when you are running the later releases where most users are web-only. The compression really only speeds up deploying "fat"/development clients, and there are very few of those in use when most users are on the web client.

No compression also eliminates the need for recompressing old full packages.
 
I'm going to tend to agree here. There's no point doing compression - it really doesn't save any space (it actually takes UP more space), and it doesn't help with deployments either. All of our deployments happen in the server room these days anyway... I can't even think of when a deployment happens outside a server room - even "fat clients" are virtualized these days...

So, stop the compression.

I'm a little worried about the 8-hour compile in the virtual machine stats that Joel put up, though. Seems outrageously long.
 
[ QUOTE ]
I'm going to tend to agree here. No point doing compression - it really doesn't save any space (it actually takes UP more space) - and it doesn't help with deployments either. All of our deployments are happening in the server room anyway these days...I can't even think when a deployment happens outside of a server room - even "fat clients" are virtualized these days...

So, stop the compression.

I'm a little worried about the 8 hour compile for the virtual machine stats that Joel put up though. Seems outrageously long.

[/ QUOTE ]

I will put in an enhancement request with Denver to have them consider changing the default from Compression to No Compression.
 