Extremely poor performance after Unicode conversion

msouterblight1

VIP Member
Hello all,

I just performed the Unicode conversion to PY on our system, and we are now experiencing extremely poor performance. For instance, when I go into P4210 and try to pull up all entries from 11/04/05, it takes almost 2 minutes to retrieve the data. If I snapshot to DV on this same system, the data comes up in about 4 seconds. Both DV and PY are on the same Ent. Server, just configured in multi-foundation. Before anyone jumps on me for converting PY: we have our own custom testing environments, we rarely use PY, and we just wanted to use it to get an idea of what the conversion would take.

We are on Oracle9i/Sun Solaris8.
Thanks
 
Matthew,

Performance gain promises were just marketing. On disk-bound systems, Unicode would most definitely slow everything down.

On the other hand, can you check that all indexes are still there? Have you re-analysed the tables afterwards?
 
Alex,

Thanks for your reply. We recreated the indices after the conversion was completed, but we were still experiencing the issue. I opened a call with Oracle support, and they suggested that we regenerate our statistics, as you had suggested.

Can you go into more detail as to why the Unicode conversion would cause performance to decrease? According to everything I've read, if the data is not stored in Unicode format, the software must perform a conversion on the fly for every read from and insert into the DB. As always, though, I'm sure you have a lot more insight into this than most.

Thanks for your help on this.
 
Hi,

In a few words: performance is not only about CPU.

On one hand, Unicode helps performance because JDE no
longer has to convert back and forth between 8-bit and
16-bit characters.

It's also an elegant solution for anyone who has to
deal with many languages and code pages, especially for
non-Western scripts (Chinese, Thai, Arabic, Greek, etc.).

On the other hand, Unicode imposes a 40%-80% increase
in your database size: all of your character fields are
converted from 1 byte per character to 2 bytes per character.

Note: numbers and dates are not affected, and some data
sources such as Central Objects are already stored as Unicode.
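The 1-byte-to-2-bytes doubling described above can be seen directly in a minimal Python sketch (the sample string is hypothetical, chosen just for illustration):

```python
# Illustrative sketch: the same text stored in a single-byte code page
# vs. a 2-byte Unicode encoding.
text = "CUSTOMER MASTER RECORD"

single_byte = text.encode("latin-1")   # 1 byte per character
two_byte = text.encode("utf-16-le")    # 2 bytes per character

print(len(single_byte))   # 22 bytes
print(len(two_byte))      # 44 bytes -- every character field doubles
```

Numeric and date columns are stored in binary formats, which is why they are unaffected by the conversion.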

What are its side effects?

Whenever you have to modify or retrieve data on
disk, the system has to read or write more bytes (up to
2x if the requested info is stored purely in string fields).

This imposes an extra burden on your disks (larger
tables and indexes, more disk sectors to access,
more bytes to write every time you click Save or Modify),
a heavier impact on RAM (larger tables strain memory
caching), and extra load on your network (more bytes
going back and forth between your servers and clients).
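To put rough numbers on that burden, here is a back-of-the-envelope sizing sketch in Python (all figures are hypothetical, chosen only to land inside the 40%-80% range mentioned above):

```python
# Hypothetical table: character columns double, numeric/date columns don't.
rows = 2_000_000
char_bytes_per_row = 120      # character-column bytes before conversion
other_bytes_per_row = 80      # numeric/date columns, unaffected

before = rows * (char_bytes_per_row + other_bytes_per_row)
after = rows * (char_bytes_per_row * 2 + other_bytes_per_row)

print(before / 1024**2)   # ~381 MB
print(after / 1024**2)    # ~610 MB, a 60% increase
```

The more of a table's row is made up of character fields, the closer the growth gets to the full 2x.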

Finally, you have to check your custom code (if any) and
any interfaces you may have with 3rd party apps. Do they
support Unicode? It depends...

I'm not saying "Unicode is bad"; I'm just saying: be careful
with the impact, and some thorough testing (as you're
doing in PY) is required before going live.

In a few words, I suspect that your CPU gain is being
counteracted by a bottleneck in either your disks
or your network, and possibly RAM too.

Regards,
 
Thanks. Sebastian has already provided an excellent answer, listing all the other considerations.

I can also add that query optimizers (in any DB) are a very touchy lot: the smallest change can throw off their predictions, which results in the DB using the wrong indexes, or not using indexes at all. Hence statistics, etc. should always be fresh.
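That stale-statistics failure mode can be sketched with a toy cost model (illustrative Python only; this is not Oracle's actual costing, and all the numbers are invented):

```python
def choose_plan(stats_rows, selectivity=0.001):
    """Pick the cheaper plan using the (possibly stale) stats row count."""
    full_scan_cost = stats_rows / 500 + 1           # ~500 rows per block
    est_matching = max(1, stats_rows * selectivity)
    index_cost = 3 + est_matching                   # tree walk + 1 block/row
    return "index" if index_cost < full_scan_cost else "full scan"

# Fresh statistics: the table really has 1,000,000 rows -> index wins.
print(choose_plan(stats_rows=1_000_000))   # index

# Stale statistics still claim 500 rows, so a full table scan looks
# cheapest -- and the DB ends up scanning a 1,000,000-row table.
print(choose_plan(stats_rows=500))         # full scan
```

After a conversion that reshapes every table, regathering statistics gives the optimizer row counts that match reality again.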
 