Urgent - Minimum downtime for users - 8.0 to 8.12 data conversion

Shrikanthn

Well Known Member
Hi List

We are currently undertaking an upgrade from 8.0 to 8.12.

We have just come through a very tough data conversion exercise converting from E1 8.0 to 8.12 (Unicode). We have 200GB of PD data, which took 15 hrs to export, another 15 hrs to import into the 8.12 temp DB, an additional 12 hrs to convert to Unicode, and a final 12 hrs to import into the 8.12 PD DB.

This is the procedure performed by our DBA in association with the consultant's DBA.

1) Is this the correct procedure?

2) Assuming the procedure is correct: because of our DB size it took 3-4 days, which means massive downtime for the business. How can we minimize system downtime? What can we do differently?

3) The largest table we have is F0911, which has > 40 million records (5 yrs of data). Other large tables are F4211, F42199, F4111 and the other usual suspects. One suggestion is to purge some of these files to hold only 2 yrs of data and move the balance to some "offline" files. We could then split the data conversion into two schedules: one weekend we convert the "live" files, leaving the "offline" files for another weekend. In this way we can speed up the data conversion process.

I could be way off the mark because I am not a DBA. This post is to get some different possibilities to think about and to encourage my team to consider other alternatives.

Any suggestions are most welcome....
We are on Wintel Oracle 9i (blue stack) moving to Wintel Oracle 10G, 8.12 TR 8.97 (red stack)

Best regards
Shrikanth
 
I've gone through a few of these exercises, where the final go-live conversion just won't fit in a weekend. The two best choices that I've worked with are:

1) After one full conversion, and after validating all the numbers, keep the converted tables off to the side and just delete the last year or two. Only convert the time period which you know will have fresh transactions between the last validated conversion and go-live, then merge the two after conversion (a sketch of the merge step follows at the end of this post). This is essentially your item #3.

2) Split the conversion process among multiple PCs. You'll be limited by the network throughput and the number of disks in your array, but the processing power can be split up. You can have one PC working on the F0911, one PC working on the F43xx tables, etc.

For your situation, you will probably find the first approach works the best.
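
To illustrate the merge step in choice #1, here is a minimal SQL sketch. It assumes a hypothetical CONVTEMP holding schema where the earlier, validated conversion was parked, while PRODDTA holds the freshly converted recent transactions:

INSERT INTO PRODDTA.F0911        -- append the older, already-converted rows
SELECT * FROM CONVTEMP.F0911;    -- CONVTEMP is an assumed holding schema
COMMIT;

Repeat per table. Since the two sets cover disjoint date ranges, a plain append is enough, but validate row counts afterwards.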
 
Thanks a lot Ken for your prompt response. Just so I understand you, the steps of option one are as follows:

1. Convert the whole 8.0 DB completely to 8.12

2. Delete the latest 2 yrs of data. Do I need to do this table by table manually? Do I need to do this for all 3720 tables?

3. During the go-live weekend, extract only the last 2 yrs of data from all tables and convert them to 8.12. Under what table names can I import them into 8.12, since the tables holding the older data already exist?

4. Merge the two sets. By merging, do you mean doing an append on each of the tables individually?

Hope I understood you correctly.

Best regards
Shrikanth
 
Hi Shrikanth,

I'm a bit off-topic here and hope you don't mind me asking, but can I ask why you wouldn't move from 8.0 to the 9.0 (8.13 ;-) ) release? There may be an obvious reason, but I just want to know... that's all.

thanks.
 
Hi Shrikanthn:

I lived through a similar situation...

The first step was to set up a purge and archiving
mechanism to separate live data from old data, and to run
them on separate weekends.

There were a couple of tables with >30,000,000 records
where we decided to reverse-engineer the table conversion
processes (they're UBEs; you can check them out in DV
and analyze the code) and replicate the conversion
process via DB2/400 stored procedures.

The same can be done on any other DB such as Oracle or MSSQL.

The advantage is that native database scripts run from
3x to 10x faster than JDE TCs; the disadvantage is the
time it takes to reverse-engineer a TC, then test and
write the corresponding DB script.

This is kind of JDE "neurosurgery", so it has to be
performed very carefully!
 
Thanks Sebastian for that confirmation. After reading your post I am more assured that the process of offlining old data and attempting an export without it during the first go-live is doable.

I am not sure we have the skill sets or the time to reverse-engineer the TC UBEs at this stage of the project, but by doing the offlining we hope to reduce the time significantly....

So as it stands, the plan is as follows:

1. Offline big tables using standard JDE purge utilities

2. Run the Oracle DB export utility exp.exe, excluding the offline tables (hope to reduce time here; see the sketches after this list)

3. Import into 8.12 tempDB

4. Run the standard JDE TCs (because we are running the std UBEs, we can't reduce much time here)

5. Use the new Oracle 10G capability called "Data Pump" to import data into the 8.12 production DB (hope to reduce major time here)
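
For steps 2 and 5, minimal hypothetical sketches (user names, passwords, file names and the dp_dir directory object are all made up). Classic exp has no EXCLUDE option, so the offline tables are skipped by listing only the live tables. Also note that impdp can only read dumps written by expdp, so the Data Pump step would read an expdp dump taken from the temp DB after the TCs finish, not the original 9i exp file.

exp system/password@JDE80 PARFILE=live_tables.par

where live_tables.par contains:

FILE=live_data.dmp
LOG=live_exp.log
DIRECT=Y
TABLES=(PRODDTA.F0911,PRODDTA.F4211,PRODDTA.F42199,PRODDTA.F4111)

and for the Data Pump import (PARALLEL > 1 needs Enterprise Edition; dp_dir is created beforehand with CREATE DIRECTORY dp_dir AS 'D:\dpump';):

impdp system/password@JDE812 DIRECTORY=dp_dir DUMPFILE=conv_%U.dmp PARALLEL=4 TABLE_EXISTS_ACTION=REPLACE LOGFILE=conv_imp.log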

With this we are targeting a 1-day data conversion. Any comments or ideas are welcome.

Will post how we do for time...
 
Hi Shrikanthn,

I don't know if you can run the whole data conversion in
1 day, but purging tables will certainly reduce the downtime
window.
Please be aware that such a major upgrade is not only
about converting data; you'll also have to deal with
building packages, setting up security and printers, and
finally web issues.
 
Shrikanthn,

I second Sebastian's recommendations. There is a lot more to the upgrade than just data conversion. As for that step, run through the procedure several times with different scenarios before your final go-live. I doubt that you will be able to get the downtime down to a day, so your business will have to live with the concept of JDE being down for a bit.

- Gregg
 
Hi Sebastian & Gregg

Thanks for reminding me about the rest of the stuff. We have that adequately covered (I think). The reason I am so fixated on the DB side of things is that it is what is consuming most of the time in our go-live weekend, hence I needed a strategy to crash that constraint.

As far as package building is concerned, we plan to build and deploy a full package on the weekend one week before go-live so that we don't have to do this during the go-live weekend.

Hence during the go-live weekend we plan to do only the DB conversion for the live tables and expect the system to work without any additional package-related steps. We shall try this out a few times (hopefully) before go-live and see if it actually works that way....

Your opinions and views are welcome, as always.
 
Hold on here............

You only have 200 GB of data, so this is not a lot to convert. It's actually quite small.

15 hours to export 200 GB is ridiculous. Are you running this on an i386 with IDE hard drives?

I'm doing almost the exact same conversion process right now and the export is 2 hours and import is 6 hours (with indices) for 250 GB.

In order to close the gap here you need to throw more hardware at the problem. Yes, you can reverse-engineer UBEs and offload data or convert only the last 2 years, but again, we're only talking 200 GB of data, and hardware will prove to be much cheaper than the other efforts.

Are you running Oracle Standard or Enterprise? Parallel Data Pump is only available on Enterprise and is much faster - just set a parallel degree equal to the number of cores and increase the number of threads (my times above are on Standard, without parallel Data Pump!).
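
A minimal sketch of what that looks like on the export side (names hypothetical; the %U wildcard creates one dump file per worker so the parallel slaves don't serialize on a single file):

expdp system/password@JDE812 SCHEMAS=PRODDTA DIRECTORY=dp_dir DUMPFILE=proddta_%U.dmp PARALLEL=8 LOGFILE=proddta_exp.log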

Are you on a SAN? How many spindles do you have allocated for each Oracle component?

Do you have a separate database server (not used for logic serving)?

Are you running Oracle 64 bit or 32 bit?

WHY ARE YOU CONVERTING TO UNICODE? Unless you need the multilingual support, save yourself 12 hours here.

One thing I don't understand is why you are doing 2 imports - save yourself another 12 hours here.

Overall, the entire process you're doing shouldn't take more than 30-40 hours and should easily be done in a weekend without any special conversion techniques!
 
The conversion process is time consuming, especially when you're going to unicode - you are doing the equivalent of a double conversion. The actual table conversion isn't doing anything fancy; if you trace the SQL you will see what it is doing. Here is a small sample of the SQL for the unicode conversion:

INSERT INTO TESTDTA.F01161
  (WLAN8, WLIDLN, WLEFTB, WLATYPE, WLADD1, WLADD2, WLADD3, WLADD4,
   WLADDZ, WLCTY1, WLCOUN, WLADDS, WLCTR, WLEFTF, WLUSER, WLPID,
   WLUPMJ, WLJOBN, WLUPMT, WLMAINA, WLSYNCS, WLCAAD)
SELECT WLAN8, WLIDLN, WLEFTB, TO_NCHAR(WLATYPE), TO_NCHAR(WLADD1),
       TO_NCHAR(WLADD2), TO_NCHAR(WLADD3), TO_NCHAR(WLADD4),
       TO_NCHAR(WLADDZ), TO_NCHAR(WLCTY1), TO_NCHAR(WLCOUN),
       TO_NCHAR(WLADDS), TO_NCHAR(WLCTR), TO_NCHAR(WLEFTF),
       TO_NCHAR(WLUSER), TO_NCHAR(WLPID), WLUPMJ, TO_NCHAR(WLJOBN),
       WLUPMT, TO_NCHAR(WLMAINA), WLSYNCS, WLCAAD
  FROM TESTDTA.F01161_NONUNI

TO_NCHAR is the key function in this table's process.

So if you wanted, you could capture these statements for your larger tables and run them on multiple database servers to separate the I/O load, then exp and imp the results into your production DB - or custom tailor this process to your environment. Your main bottleneck during this process is going to be disk I/O unless you have some massive disk array. Basic tuning should also help your Oracle DB: you should have your DB files preallocated, logging turned off as recommended, and a properly sized SGA/PGA.
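
For instance, a minimal sketch of the reduced-logging pattern applied to a captured statement (the column list is cut down to two columns purely for illustration - in practice use the full list from the trace):

ALTER TABLE TESTDTA.F01161 NOLOGGING;

INSERT /*+ APPEND */ INTO TESTDTA.F01161 (WLAN8, WLATYPE)
SELECT WLAN8, TO_NCHAR(WLATYPE)  -- direct-path insert generates minimal redo on a NOLOGGING table
  FROM TESTDTA.F01161_NONUNI;
COMMIT;

ALTER TABLE TESTDTA.F01161 LOGGING;  -- restore logging and take a fresh backup afterwards
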
We did the conversion on 8.97.1.2 tools, which was flaky: getting the TC process to run fully without crashing and having to restart it was not possible. Hopefully this has been fixed. Luckily we're not a 24x7 operation and had some flexibility here. The best advice I can give you is to practice, practice, practice this process.
 
Hi Colin,

Yes, you're right... 15 hours is too much time to convert
just 200 GB.
Maybe Shrikanthn's DBA should check the server disk layout
and RAM and make sure that the DB is properly tuned.
 
Not that I'm suggesting converting to unicode, but there is a much easier way to do this during an upgrade.

If you're non-unicode, simply change the flag on the data source to unicode and then do the upgrade. Any TC will then run AND convert the data to unicode at the same time, which avoids having to do the same process twice.

This will cover off many of the larger tables.

You then only need to convert the remaining non-unicode tables to unicode.


Colin
 
Your big problem here is a planning one - you didn't upgrade the database to 10g/11g first. Performing this step separately, in advance of your upgrade, would have saved you a lot of time here.

From 10G on you have the option to use Data Pump, which is export/import on steroids... exports and imports run 10x or more faster... why don't you use Data Pump on the temp DB import?
 
[ QUOTE ]

Yes, you're right... 15 hours is too much time to convert
just 200 GB.
Maybe Shrikanthn's DBA should check the server disk layout
and RAM and make sure that the DB is properly tuned.

[/ QUOTE ]

I agree - 200 GB shouldn't take 15 hours to export. Perhaps Shrikanthn can provide more information on the system architecture, or on the method of "exporting"? If you're performing exports through Oracle, could you not do these in parallel - i.e., while the F0911 is running, have other tables being exported at the same time? It is usually possible to dramatically speed up this process by performing parallel exports/imports.
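
For example, on Wintel something like this (connect string and table groupings are hypothetical; Windows "start" launches each exp in its own window so the three sessions run concurrently):

start exp system/password@JDE80 TABLES=(PRODDTA.F0911) FILE=f0911.dmp LOG=f0911.log DIRECT=Y
start exp system/password@JDE80 TABLES=(PRODDTA.F4211,PRODDTA.F42199) FILE=f42.dmp LOG=f42.log DIRECT=Y
start exp system/password@JDE80 TABLES=(PRODDTA.F4111) FILE=f4111.dmp LOG=f4111.log DIRECT=Y

Balance the table groups so the sessions finish at roughly the same time.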

Then, once you have your 200 GB exported/imported in just a couple of hours, you can look at the table conversions themselves and try to find a way to run as many of those in parallel as well.

A good CNC person should be able to provide an architecture that will convert any size of data within a company's timescale. Whether any additional hardware expense is justifiable is another matter!
 
Maybe they finished retrofitting + testing the objects already - this is where 90% of the upgrade effort goes! They can't afford to do it again for 9.0, especially as it hasn't had time to settle.
I would be more interested in why they didn't upgrade to Tools 8.98 - that should have been much easier to accomplish.
 
Hi List

Thanks a lot for your valuable suggestions. They made us change our approach: we are now transferring data directly from 8.0 (Oracle 9i) to 8.12 (Oracle 10G) by changing the field formats from, for example, VARCHAR2 to NVARCHAR2. It took us 5 hrs to export from 9i and 10 hrs to import into 10G. Table conversion, as predicted by most of you, is what is killing most of our time - it seems to have taken 20 hrs. We opened a case with Oracle and of course they do not recommend any non-standard ways of doing this. They did however suggest the following 2 actions:
1. Run the TCs on the Enterprise Server and not on the deployment server (which is what we had been doing)
2. Pointed us to Knowledge Document 653543.1:
[For fetching the records from F4111, the BSFN uses three keys - Item No., B/P and Lot number. Currently there is no index available on F4111 with these fields, so the UBE can be run faster by creating a new index on F4111 in the client's database, with keys Item No., B/P and Lot number, and deleting it once the work is done.]
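
In SQL that would presumably be something like the following (the index name is made up, and the column names assume F4111's standard IL prefix - short item ILITM, branch/plant ILMCU, lot ILLOTN - so verify them against your own table first):

CREATE INDEX PRODDTA.F4111_TC_TMP ON PRODDTA.F4111 (ILITM, ILMCU, ILLOTN);

-- and once the UBE has finished:
DROP INDEX PRODDTA.F4111_TC_TMP;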

We shall try this and hopefully reduce our TC time.

With regards to the other questions people were curious to know
1. We do need Unicode because we have Chinese & Korean character requirements.
2. We did not go to TR 8.98 since, when we started the upgrade process, TR 8.97 had just been released and we went with what was cutting edge at that time.
3. We did not consider upgrading the DB from 9i to 10G within the existing JDE 8.0 because we did not want the hassle of testing 8.0 with 10G....

If we had to do this over again, I would seriously consider upgrading the DB to 10G first (is it within the MTRs for JDE 8.0?) and then attempting the data transfer using its advanced capabilities.

While I was clutching at straws, a colleague of mine suggested checking whether we have multi-threading enabled on our servers, since resources may not be used optimally if the processes we are running are designed for single threading. Is this an avenue you think is worth exploring?

Best regards
Shrikanth
 