hyperthreading

andy_smith

Well Known Member
List,

Does anyone use hyperthreading on an Intel enterprise server? (Or any JDE
server?)

Are there any experiences with it as yet - i.e. is it still better to have 4
x CPUs as opposed to 2 CPUs running hyperthreading?

Ta


Andy Smith
Technical Consultant

WHITEHOUSE
Consultants

http://www.whitehouse-consult.co.uk

Office : 01159 825987
Mobile: 07949 603770

E-Mail: [email protected]





Andy Smith
Whitehouse Consultants
Win2K SQLServer7 Xe
 
We have a few Intel machines with the new 3GHz HT processors, and I have had the opportunity to play around with the configuration a little bit.

I guess the answer is really "it depends." It depends on the CPU speed and the application you are running. Also, depending on your load, I would say four CPUs are better than two with Hyper-Threading. Here's why: the performance you get from two CPUs with HT enabled is not equal to four CPUs running the same load, all things considered equal.

JDE batch processes don't multi-thread across processors. Let's say you have 40 concurrent batch jobs running on your quad-CPU box. Each batch process is going to be assigned to a particular CPU by the OS. You might see 10 jobs per processor, or you might see 15 on two and 5 each on the other two - it really just depends on what the CPUs were doing as the jobs were assigned. If you have 3GHz processors, performance is going to be smoking. This is especially true if your database server is able to keep up with the enterprise server's requests.

If you have a dual-processor box with HT extensions and the same 40 concurrent batch jobs, performance will almost certainly be lower than on the 4-CPU box - in most cases. You may see a 25-50% performance improvement over a similar dual-processor box without HT, but that is in ideal circumstances.

If anyone has any concrete benchmarks on this, I would be interested in taking a look at them.
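To picture the uneven spread of jobs described above, here's a toy Python sketch (purely illustrative - not JDE code) of a scheduler that hands each new batch job to whichever CPU currently has the least work queued. Because the pre-existing load and per-job cost vary, the final per-CPU job counts rarely come out as a neat even split:

```python
import random
from collections import Counter

def assign_jobs(n_jobs, n_cpus, seed=None):
    """Toy model of the behaviour described above: each new batch job
    lands on the CPU with the least work currently queued, so the final
    per-CPU counts depend on what the CPUs were doing as jobs arrived."""
    rng = random.Random(seed)
    load = [rng.uniform(0, 5) for _ in range(n_cpus)]  # pre-existing work
    placement = Counter()
    for _ in range(n_jobs):
        cpu = min(range(n_cpus), key=lambda c: load[c])
        load[cpu] += rng.uniform(0.5, 2.0)  # this job's (random) cost
        placement[cpu] += 1
    return placement

# 40 concurrent batch jobs on a quad-CPU box, as in the example above
print(assign_jobs(40, 4, seed=1))
```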
 
Andy,

Enterprise One (all versions) is not coded to take advantage of HT technologies. However, you may see some advantage on the SQL Server side, as SQL Server is able to exploit it.
 
Turn it off.

Hyperthreading actually degrades performance - it's well documented in the PC tuner community. For the database, for OneWorld - in fact, for everything server-based I know of - you are better off turning it off completely.

Use it for playing games...!

I'll try and find more literature on this for later on.
 
"Hyperthreading actually degrades performance"

Given the rule of thumb of two concurrently processing UBEs per NT CPU, would you leave hyperthreading on to enable more UBE queues, even if each process ran slower?

I have been turning this over and over and cannot reach a firm conclusion. It would be nice to have more threads (UBE, not CPU) and run more jobs concurrently, but I am not sure the tradeoff in UBE performance is worth it. I am solely concerned with UBE performance because I feel the other ES processes are a wash when it comes to hyperthreading - it neither helps nor hurts them. The number of concurrent UBEs seems interesting, though.

Thoughts?
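One way to frame that tradeoff: doubling the UBE queues only wins on aggregate throughput if the per-job slowdown is smaller than the extra parallelism. A back-of-the-envelope Python sketch - the 10-minute job time and 25% HT slowdown are assumptions for illustration, not measurements:

```python
def jobs_per_hour(queues, minutes_per_job):
    """Aggregate UBE throughput, assuming every queue stays busy."""
    return queues * 60.0 / minutes_per_job

# Assumed numbers for illustration only: a UBE that takes 10 minutes on
# a dedicated physical CPU, and runs 25% slower when two hyperthreaded
# logical siblings share that CPU.
without_ht = jobs_per_hour(queues=2, minutes_per_job=10.0)         # 2 physical CPUs
with_ht    = jobs_per_hour(queues=4, minutes_per_job=10.0 * 1.25)  # 4 logical CPUs

print(without_ht, with_ht)  # 12.0 vs 19.2 jobs/hour in this toy model
```

Whether the real slowdown is 25% or 80% is exactly what a benchmark would have to tell you.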
 
One could make the argument that multi-processor setups degrade performance as well: the kernel must work harder than a uniprocessor kernel to manage the SMP transactions. That said, I agree with you, Jon. There is a lot of evidence out there to support your statement that Hyper-Threading is nothing more than hyper-marketing. Intel is moving its manufacturing process to dual-core processors for the Itanic platform. Maybe that will trickle down to the low-end server and desktop market???
 
Itanic - LOL - I love that (Itanium/Titanic) !

I'm a huge Opteron fan myself, I think that AMD is going to knock the pants off intel in the 64bit x86 marketplace personally.

As for the rule of "one UBE per CPU" - bosch - or is that borscht? Something cold and squidgy, anyway.

It really depends on what the UBE is doing. There are many UBEs that are I/O bound rather than CPU bound, and having umpteen queues running in parallel will certainly help with throughput - especially if the I/O-bound process is dealing with writes (such as GL Post or MRP). There are really very few processes in OneWorld that are specifically "CPU bound" - i.e. will use 100% of the CPU resources when run (we all WISH this were possible, since speed would then be limited purely by hardware). That said, however, there are THREADS in a UBE process that rely heavily on CPU speed.

In my experience, hyperthreading never helps. Turn it off. A dual-processor machine will work better than two single-processor machines because of the ability to "shelve" some of the OS tasks. Hyperthreading doesn't do this, since it's software-based. Beyond dual processors, however, it's hard to justify cost versus performance - e.g. comparing a quad-processor machine against two dual-processor machines.

Buy the second-fastest processor available. E.g. right now 3.4GHz Xeon processors are available, but they're twice as expensive as 3.2GHz processors - and you'll not see twice the performance.
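The "second-fastest processor" advice is really a price/performance argument. Using clock speed as a crude performance proxy and the "twice as expensive" figure from above (both assumptions for illustration):

```python
# Illustrative only: clock speed as a rough performance proxy, and the
# "twice as expensive" figure quoted in the post above.
price_34, clock_34 = 2.0, 3.4   # relative price, GHz
price_32, clock_32 = 1.0, 3.2

perf_per_dollar_34 = clock_34 / price_34
perf_per_dollar_32 = clock_32 / price_32

# The 3.2GHz part delivers nearly twice the clock per dollar
print(perf_per_dollar_32 / perf_per_dollar_34)
```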

Anyway - as far as hyperthreading is concerned, try it out! Run some reports (make sure you include some nice big ones such as GL Post or Sales Repost) under both modes - it's relatively simple to switch hyperthreading on and off.
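If you do A/B the two BIOS settings, a small harness like this gives you comparable wall-clock numbers for a fixed batch of concurrent jobs. The `sleep` commands are stand-ins - swap in whatever actually submits your big reports. Each command runs as its own OS process, which matches how UBE jobs run:

```python
import subprocess
import time

def time_batch(commands):
    """Launch each command as its own OS process (like concurrent UBE
    jobs), wait for all of them, return elapsed wall-clock seconds."""
    start = time.perf_counter()
    procs = [subprocess.Popen(cmd, shell=True) for cmd in commands]
    for p in procs:
        p.wait()
    return time.perf_counter() - start

# Stand-in workload: run the same batch once with HT on and once with
# HT off, and compare the two elapsed times.
elapsed = time_batch(["sleep 1"] * 4)
print(f"{elapsed:.1f}s")
```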
 
From the stats I have been reading on Opteron, it is slower than Itanium in single- and dual-processor configs. However, Opteron scales much better than Itanium, and actually meets and beats Itanium in 4- and 8-processor configurations.

For the price, scalability and ability to run 32bit code, Opteron rocks!!!
 
"There are really very few processes in OneWorld that are specifically "CPU bound" - i.e. will use 100% of the CPU resources when run"

Ever mess with setting CPU affinity for a process (and its children)? I've used Process Explorer from sysinternals.com to do this, but have never run a full test and recorded the results.
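For anyone wanting to script what Process Explorer does interactively, here's a minimal sketch of the Linux equivalent using `os.sched_setaffinity` (on Windows you'd go through the Win32 `SetProcessAffinityMask` API or Process Explorer itself):

```python
import os

def pin_process(pid, cpus):
    """Restrict pid (0 = the current process) to the given logical CPUs -
    the scripted equivalent of Process Explorer's 'Set Affinity...'.
    Linux-only; Windows would use SetProcessAffinityMask instead."""
    os.sched_setaffinity(pid, cpus)
    return os.sched_getaffinity(pid)

all_cpus = os.sched_getaffinity(0)
print(pin_process(0, {min(all_cpus)}))  # pin ourselves to a single CPU
pin_process(0, all_cpus)                # and undo it
```

Child processes spawned after the call inherit the mask, which is what makes this interesting for a UBE-per-CPU experiment.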
 
Considering I'm a recent transplant to Austin, I suppose I have to root for the home team and say I like Opteron better. Scaling isn't so important to us after you get to a certain number of processors. I think dynamically allocating processors based on workload is more suited to our line of business. Sure, it would be nice to be able to double the CPUs and expect double the performance, but there are few architectures out there that can accomplish this. IBM and Sun are both betting some chips on AMD, while HP has been in bed with Intel, co-developing the Itanic to replace the aging PA-RISC platform.

The company I work for is mostly an HP shop, but they have a little bit of everything. They even have a couple of AS/400s - but not running JDE, if you can believe that. I would like to see PeopleSoft port their code to the Itanium (I'll be nice for once) so we can consider running all of our enterprise servers on one big Superdome. The newer machines can run Windows, Linux and HP-UX on the same box. I know you can do similar things with the bigger IBM pSeries and iSeries machines. Anyone running OneWorld on anything larger than a 16-processor mid-range? We are on a consolidation kick and I'd like to get my hands on some numbers. Off to Google I go.
 
I can't speak to JDE with hyperthreading, as I am on an AS/400 enterprise server. HOWEVER, we have two report servers that make SQL requests to the AS/400 in order to calculate data and format reports (then direct them back to HTML users). I tested this extensively back before there was much literature on hyperthreading.

As a test, I used a report that is particularly high in CPU utilization and ran 6 at one time. This pushed the CPU to 100% and took between 3 min and 3 min 15 sec to process every time. With hyperthreading enabled, the same load ran at 100% CPU and took about 2 min 45 sec.

Please note... regular, non-100% CPU loads DID NOT CHANGE. There was only a benefit when the CPU was fully pegged.
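Those times work out to roughly an 8-15% reduction in elapsed time at full CPU saturation:

```python
def pct_faster(before_s, after_s):
    """Percentage reduction in elapsed time."""
    return 100.0 * (before_s - after_s) / before_s

# Times from the post above: 3:00-3:15 without HT, ~2:45 with HT,
# with the CPU pegged at 100% in both runs.
print(pct_faster(180, 165))  # ≈ 8.3%  (best baseline case)
print(pct_faster(195, 165))  # ≈ 15.4% (worst baseline case)
```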

Anyway...one experience of many...

Ryan Hunt
 
Thank you, Ryan

But as you note, the code you ran is NOT OneWorld - which was NOT created specifically for hyperthreading - correct?
 
Hi Alex

This is great news! It's pretty cool that someone at PeopleSoft is planning ahead like this... completely different company now, I guess!

Well - hopefully by ERP 10 or so we'll see full 64-bit support!
 
Yes, you are absolutely correct. I should also add... before I changed our production NT boxes to hyperthreading, I called the reporting software vendor and checked my results against their research, and they stated that they DID support hyperthreading.

Thanks

Ryan
 
I agree completely. That's sort of a dirty little secret they have. For instance, their port to PA-RISC is actually backwards compatible with 32-bit processors: it runs just fine on 64-bit hardware but makes no use of its memory-addressing capabilities. It just doesn't make sense that they would want to support customers on 32-bit-only hardware, considering most of that hardware is out of support by the vendor and most recent builds of HP-UX won't run on it in the first place! Maybe the development box they have is some old pile of junk that requires the +DAportable compiler extensions. If anyone ever wondered what that was when looking at the server ini file, it's to enable 32-bit-compatible code when building on a 64-bit processor, i.e. PA-RISC 2.0 or better. Kind of silly, don't you think? I mean, why on earth wouldn't I want to run a batch process that can access more than 4GB of RAM? haha
 