Full package deployment best practices

Muhammad

Active Member
Hi, I have been deploying update packages from time to time as new objects are created or modified. I was told by our other CNC that the best practice is to build and deploy a full package once a month. This is confusing to me, as I thought we should only be deploying the changes (not all objects from scratch every month). I appreciate your expert opinions.
Thanks
Muhammad
 
Muhammad,

It really depends on a number of factors, such as the E1 release, how many objects are changing in the update packages, etc. During an upgrade or implementation, full packages might be necessary weekly. Post go-live, if little modification is happening, maybe every six months.

-Ethan
 
[ QUOTE ]
troll?

[/ QUOTE ]

Most likely.

I'll give him a few posts to thank the others who take the time to answer his question. If he doesn't thank them after two posts, it's square business.

Max
 
Thanks. We use 8.12 with TR 8.98.1 on Windows. The update packages usually contain fewer than 5 objects.
 
[ QUOTE ]
Muhammad,

It really depends on a number of factors, such as the E1 release, how many objects are changing in the update packages, etc. During an upgrade or implementation, full packages might be necessary weekly. Post go-live, if little modification is happening, maybe every six months.

-Ethan

[/ QUOTE ]

Those are all good factors.

However, any time you change a public data structure (table, BSFN DS, PO template, etc.) you should do a full build, for the simple reason that you need to force a recompile of all C code that may reference the modified DS when that C code (BSFN, NER, table trigger) is NOT in the list of objects in the build.
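Here is a contrived, stand-alone sketch of what goes wrong (hypothetical structs, nothing from the real spec tables): code compiled against the old layout allocates the old size, while recompiled code writes the new, larger layout into it.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Version of the DS the stale .obj was compiled against */
typedef struct { char szDesc[30]; double mnQty; } DS_OLD;

/* Version after a developer adds a field */
typedef struct { char szDesc[30]; double mnQty; char cFlag; } DS_NEW;

int main(void)
{
    /* The stale caller allocates using the old size... */
    void *lpRec = malloc(sizeof(DS_OLD));
    if (!lpRec) return 1;

    /* ...but the recompiled callee fills in the new, larger layout:
       a buffer overrun past the end of the allocation. */
    DS_NEW dsSrc = { "SALES ORDER LINE", 42.0, 'Y' };
    memcpy(lpRec, &dsSrc, sizeof(DS_NEW));

    printf("old size=%zu  new size=%zu\n", sizeof(DS_OLD), sizeof(DS_NEW));
    free(lpRec);
    return 0;
}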
 
Thank you BOster, now this is making sense. I was trying to find a document on best practices for deploying packages; I would appreciate it if someone knows of one and can point me to the URL.
Muhammad
 
I'm not sure there is one (an Oracle document). In fact, I had to argue my point about full build requirements to our CNC staff against conflicting advice from Oracle support (Oracle support eventually conceded my point).

Here at work it is on the developer to include a "full build required" notification when promoting an OMW project and requesting a build. It really does take a developer to determine whether a full build is required, based on what modifications were made to which objects. I have a set of internal guidelines for my developers that trigger a full build requirement. Any other full builds are at the discretion of our CNC guys; if no OMW project has specifically requested a full build, they may or may not do a full build vs. an update package.
 
Muhammad
I guess you are doing the right thing by applying update packages, which usually don't have many objects.
(Ideally an update package is only required when users are having issues.) I create a full package only at year end to bring the system current.

Thanks
Mike
 
Frequent full package builds were really a requirement of the older releases that did not store the runtime objects in database format.

When the objects were stored in TAM files, the UPDATE package build process would "mark" the existing copies of the objects in the package as not to be used and append the new versions to the TAM file. This not only meant that the file grew over time, but also created fragmentation - with workstations needing multiple read operations to start applications. On terminal servers with 30-40 update packages applied, the slowdown for larger objects (like P4210) was considerable. A full package ensured that all objects were stored sequentially in a new set of TAM files, restoring performance.

This performance factor meant that customer best practice was to build full packages frequently if updates were being deployed to production.

However, with runtime objects now stored in a centralized database and then converted into web serialized objects, this issue has definitely disappeared. BUT, there is still justification for performing a full package build on a regular basis, simply to ensure that all objects are "clean". Data structure changes can sometimes cause issues with objects seemingly unrelated to the original changes - but unless you're implementing ESUs on a frequent basis into Production, it's rare.

Here are my suggestions:

1. For Development - update packages as much as possible, with occasional full packages, but not pushed out to developers.

2. For test path codes - regular update packages, with full packages maybe once or twice a month during intense development/pre-production stages.

3. For Production - create a full package every 5th or 6th update.
 
I too echo this.

We do updates to production each Friday, except month-end week, and a full package quarterly (OK, that adds up to 9, not 6 or 7, but close), unless we run into something that appears to require a full. I also do a full if I have table changes, as I will need to delete the SQLPKG objects anyway, and that will cause me to drop services. Finally, I do a full if the number of objects for an update exceeds 100. While the search by OMW status helps, it still has a few bugs.

I do updates to PY/DV twice a week, and full packages quarterly.

Tom Davidson
Sensient 8.12, IBM i(AS/400), 8.98.3.4, OAS
 
[ QUOTE ]
BUT, there is still justification for performing a full package build on a regular basis, simply to ensure that all objects are "clean". Data structure changes can sometimes cause issues with objects seemingly unrelated to the original changes - but unless you're implementing ESUs on a frequent basis into Production, it's rare.

[/ QUOTE ]

I guess it depends on what kind of development you do, but for us it's not "rare", and there is a definite reason to do a full build. Like I said, if a public data structure changes that may be used in C code (NER, C BSFN, table trigger) and that object is NOT being modified and is not in the package, then you need to do a full build to trigger a recompile of the C code for those objects. Keep in mind that just including the object that uses the DS in the build may not be enough. You need to check out/in the object and make sure that the date on the C source files changes. During the build's compile process, if the source file's date is earlier than the .obj file's date, it will not get recompiled, and the old .obj file will be used during linking to build the .dll (a full build effectively forces a recompile of all compilation units, so you don't have this problem). If you don't make sure that all the C code gets recompiled with the newly sized DS typedef, you risk buffer overruns, zombie kernels, unexpected behavior, etc.
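If you want a mental model of that date check, here is a rough sketch (illustrative only - the real logic lives inside the package build, not in anything like this; the file names are just examples):

#include <stdio.h>
#include <sys/stat.h>

/* Recompile only when the source is newer than its object file -
   which is exactly why a .c file with an old date gets skipped
   and its stale .obj is reused at link time. */
static int needsRecompile(const char *szSrc, const char *szObj)
{
    struct stat ss, so;
    if (stat(szObj, &so) != 0) return 1;   /* no .obj yet: compile      */
    if (stat(szSrc, &ss) != 0) return 0;   /* no source: nothing to do  */
    return ss.st_mtime > so.st_mtime;      /* source newer: recompile   */
}

int main(void)
{
    printf("B4200310: %s\n",
           needsRecompile("B4200310.c", "B4200310.obj")
               ? "recompile" : "skip (old .obj reused at link)");
    return 0;
}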

You can search for a changed DS in C source code using a text editor with search-in-files capability to find dependent objects and include those objects in the update package, but I have seen things still get missed. There are also rare cases (often with typedefs used in jdeCache) where a DS is used as a struct member, so now you would have to search for all objects that may be using the DS D5512345_CACHE_REC that includes the changed DS F551234 as one of its members.

Here is an abbreviated set of guidelines for my developers, which seeks to somewhat remove human error from the equation.

1. If you change an existing table's struct, or an existing key or index definition, request a full build.

2. If you change a PO template and the PO DS is used in a C BSFN, request a full build.

3. If you change a BSFN's function data struct (parameter list), or any other public struct or public definition such as an enum, macro, #define, etc. that can be used by other BSFNs, request a full build.

There is still some judgement here. If the developer is certain that a changed DS is not used in C code, or the objects that have C code are in the OMW project, then a full build is not required. However, there are some objects that I don't even bother to investigate. For example, we have a tag table to F4211 and a tag cache struct, along with BSFNs to populate the tag cache. Those are used all over the place in C code. When those data structs change, it's an automatic full build.

BTW, by "public" I am talking about things that go in a .h file. When we write our C code we put "private" definitions for #defines, enums, static function prototypes, internal struct defs, etc. at the top of the .c file. If we need one or more of those definitions in other BSFNs, we move them to the .h file and they become "public". In this way we know that if a .h file is changing, it is probably used by other C code and a full build is probably needed. If the defines being modified are at the top of the .c file, the changes are localized to that single compilation unit and a full build is NOT required. This technique helps make it more of a "rule" as opposed to a judgement call on whether or not a full build is needed.
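As a contrived sketch of that layout (all names hypothetical, shown as a single compilable unit for brevity - in practice the first block lives in the .h and the second at the top of the .c):

/* ---- Would live in B5512345.h: "public", visible to other BSFNs ----
   Changing anything here can invalidate other compilation units,
   so a change here means "request a full build".                      */
#define ACME_MAX_LINES 200
typedef struct tagDS_ACME_SHARED
{
    int  iStatus;
    char szOrderType[3];
} DS_ACME_SHARED;

/* ---- Would live at the top of B5512345.c: "private" to this unit ----
   Changing these touches only this one .c file, so an update package
   containing this one object is enough.                               */
#define ACME_RETRY_COUNT 3
static int acmeInternalHelper(DS_ACME_SHARED *lpDS)
{
    int iTries = ACME_RETRY_COUNT;   /* private define, local use only */
    return (lpDS && iTries > 0) ? lpDS->iStatus : -1;
}

int main(void)
{
    DS_ACME_SHARED ds = { 0, "SO" };
    return acmeInternalHelper(&ds);
}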

I guess what I am trying to say with all this is that you can't simply have a "policy" that says once a year we are going to do a full build, or if we have over 100 objects we are going to do a full build. Some changes REQUIRE a full build.
 
Here is a real world example:

The following struct is defined in B4200310.h (SOE MBF)


/***** Acme UI11 Extension Cache *****/ /* ATI#536 */
typedef struct tagD4200310_ACME_CACHE_UI11EXT
{
    MATH_NUMERIC mnJobnumberA;        /* key */
    MATH_NUMERIC mnLineNumWF;         /* key - links to F42UI11.lnix */
    F4101        dsF4101;
    DSD5642050A  dsAltQty;
    F4211        dsPrevF4211;
    BOOL         bPrevF4211Retrieved;
    BOOL         bForceNewShipment;
} D4200310_ACME_CACHE_UI11EXT, *LPD4200310_ACME_CACHE_UI11EXT;


Let's say, for the sake of argument, that a developer decides to add a field to F4211 because he needs to flag a record for some process (not that we would modify a pristine table; this is just an example). He puts his UBE and the changed F4211 in a project, and an update package is done. Sales order entry just got jacked, because B4200310.c didn't get recompiled and will still be using the old size for D4200310_ACME_CACHE_UI11EXT.F4211.

Let's say he is a good developer and looks for any instance of F4211 and finds the DS above, so he puts B4200310 in the build as well. Sales order entry still got jacked, because the typedef D4200310_ACME_CACHE_UI11EXT is also used in B4200311, B5642052, and B5642070, and when the .dll is relinked, the old .obj files using the old size for D4200310_ACME_CACHE_UI11EXT.F4211 will be used.

If a full build had been done, B4200310, B4200311, B5642052, and B5642070 would all have been recompiled with the correctly sized DS for D4200310_ACME_CACHE_UI11EXT, which contains the changed DS for F4211.
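You can see the size ripple with a contrived, stand-alone sketch (the field lists are made up - the point is that the cache typedef never changes textually, yet its size changes when F4211 does):

#include <stdio.h>

/* F4211 as the stale .obj files saw it... */
typedef struct { char szDocType[3]; double mnQtyOrdered; } F4211_OLD;
/* ...and after the developer adds his flag column. */
typedef struct { char szDocType[3]; double mnQtyOrdered; char cNewFlag; } F4211_NEW;

/* The cache record embeds the table DS, so it silently grows too. */
typedef struct { double mnJobnumberA; F4211_OLD dsPrevF4211; } CACHE_OLD;
typedef struct { double mnJobnumberA; F4211_NEW dsPrevF4211; } CACHE_NEW;

int main(void)
{
    printf("cache rec before F4211 change: %zu bytes\n", sizeof(CACHE_OLD));
    printf("cache rec after  F4211 change: %zu bytes\n", sizeof(CACHE_NEW));
    /* Any .obj linked without a recompile still reads/writes the
       smaller layout - hence the jacked sales order entry. */
    return 0;
}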
 
While I'm on my soapbox... :)


Oracle could make this whole full vs. update question a moot point if they would simply add an option to the update package build to force a full busbuild (compile). It might take an update package an additional 45 minutes to an hour to run, but that is better than the 10+ hours of a full build. This way you could have an update package with as little as one object, but force all the C code to recompile using the changed .h file definitions.

This is effectively what I do on my local web dev client, since I run into the same issues a batch server would when I change a data struct. After I change a data struct that is used in multiple places, the first thing I do is run a full busbuild with the clear option (see attached). This effectively recompiles all the C code. It takes about 30 minutes on my machine.
 

Attachments

  • 171508-busbuild.jpg (39 KB)
I have got so many great responses. I love this forum.

In our case, our developers (most of the time) create new reports or change existing ones, so update packages make sense.
 