Prod FULL package and best practice

johndanter
Hi guys,

It's come to our attention that our last FULL prod package is seriously out of date. Aug 2012!!

We plan to build a new full package ASAP but CNC are saying we then need to deploy this package.

Can we just build the full package and not deploy it?

Surely the specs that make up the Prod package are from objects already in Prod anyway?

Am I missing something?

Thanks

John
 
AFAIK, you have to build the full package (it will take hours). Once the package is successfully built, you can deploy it.
 
Thanks Rauf, I understand the order :)

I'm just questioning the necessity of the deployment. In my mind, you deploy the same specs you already had in order to build the package anyway.

The specs are already running on their respective servers etc., so I'm unsure why there is a need to deploy them again when they are already there.

I must be missing something, but I'm unsure what.

Maybe the deployment updates tables behind the scenes so E1 knows what's what. Maybe that's the reason. It's just not my area :)
 
John,

When you build a package, you essentially compile all of your objects for a path code into a "container", called a package. You are not running on this new package, however, until you deploy it.
One good reason to deploy it is that there may have been changes made to your system that weren't captured in an update package to your current full package. The new full package will include all these changes. Sometimes developers forget to let their CNC folks know about everything they've been doing!
That being said, it's always a good idea to keep your old package around after you've deployed your new full package -- just in case you need to go back to it for anything.

If you look at your JAS server's logs, you'll see that during JAS server startup, the system discovers the current full package and each update applied. This can get to be a very long list, and building a new full package resets this to just the 1 new full package.

In years past, having too many update packages seemed to be linked to corrupted packages, and building new full packages was a way to avoid and/or fix that. This was anecdotal history, though -- so take it for what it is.

All that being said, we try to build AND deploy a new full package every six months, with update packages about every 3 to 4 weeks -- but we're in a fairly stable environment without a lot of development going on.

Your organization may want to establish a build frequency policy, based upon your business needs. In my opinion, over two years is way too long between packages.
 
If you're not going to deploy a package, I don't see a reason to build it.

By deploying the package you are ensuring that your environment is using the latest and greatest code, which is what you just built.
 
I personally deploy a full package when there is a baseline ESU.

I see a risk in deploying a full package in your case. Wait until you get a JDE upgrade, then copy all objects into DV, apply the ESUs, build a full package, and test. Then copy the objects to PY and PD. You will be doing data refreshes etc. during this whole process.


Building a package without deploying it is like building a car but never driving it.
 
We do full builds/deploys all the time. Doing a full build/deploy is way safer than NOT doing one. If any code or object change you make alters a C code definition, you should consider a full build. A full build is the only way to force a full recompile of all the C code that may use a changed definition.

So if you:

- Change a table
- Change a BSFN DSTR
- Change a Business View (that is, if the view has had its struct generated and is used in a BSFN someplace)
- Change a Proc Option template (you should really find out whether the PO struct exists in any .h or .c file(s) and regen/repaste/recompile)
- Change UBE interconnects, if the UBE is called from a BSFN (again, regen the interconnect struct...)
- Manually change structs and other definitions such as enums and precompile defines in the .h file of a C BSFN
- Change the signature of any non-static internal subroutine defined in the .h file of a C BSFN
- Probably more I am forgetting

...then you should consider a full build.



Take this simple example. You have two NERs.

NER: N5900001
MyFuncConvertDateToString
-> jdDateIn
<- szStringOut


NER: N5900009
MyFuncDoSomeStuff

In MyFuncDoSomeStuff you call MyFuncConvertDateToString.


You put both NERs in an update package, do a build, deploy, and all is good.


Now you come back later and change N5900001 to add an optional parameter specifying how to format the date. If szOptionalFormatMaskIn is blank, MyFuncConvertDateToString populates the parameter with a default mask value (and returns it for informational purposes).
MyFuncConvertDateToString
-> jdDateIn
<- szStringOut
<> szOptionalFormatMaskIn


You put N5900001 in an update package, do a build, do a deploy... and start getting zombie kernels or unexpected results, since you have effectively created a buffer overrun/over-read whenever MyFuncDoSomeStuff calls MyFuncConvertDateToString, because MyFuncDoSomeStuff was not recompiled with the new struct definition.
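
To make the mechanics concrete, here is a minimal C sketch of the failure. The struct and member sizes below are illustrative stand-ins, not the actual NER-generated code (a real JDEDATE is simplified to a long). The stale caller was compiled against the old two-member data structure, so the rebuilt callee writes past the end of the buffer it is handed:

#include <string.h>

/* OLD definition -- what MyFuncDoSomeStuff was compiled against */
typedef struct tagDSD5900001_OLD
{
    long jdDateIn;              /* simplified stand-in for a JDEDATE */
    char szStringOut[11];
} DSD5900001_OLD;

/* NEW definition -- what MyFuncConvertDateToString now compiles against */
typedef struct tagDSD5900001
{
    long jdDateIn;
    char szStringOut[11];
    char szOptionalFormatMaskIn[17];   /* the added <> parameter */
} DSD5900001;

/* Rebuilt callee: happily writes into the new member */
void MyFuncConvertDateToString(DSD5900001 *lpDS)
{
    if (lpDS->szOptionalFormatMaskIn[0] == '\0')
        strcpy(lpDS->szOptionalFormatMaskIn, "MM/DD/YYYY"); /* default mask */
    /* ... format lpDS->jdDateIn into lpDS->szStringOut ... */
}

/* Stale caller: still compiled with the OLD struct definition */
void MyFuncDoSomeStuff(void)
{
    DSD5900001_OLD dsOld = {0};
    /* The callee writes szOptionalFormatMaskIn past the end of dsOld --
       a classic buffer overrun that corrupts adjacent stack memory,
       hence the zombie kernels and unexpected results. */
    MyFuncConvertDateToString((DSD5900001 *)&dsOld);
}

A full build recompiles MyFuncDoSomeStuff against the new typedef, so the buffer it passes is the right size again.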
 
Good example Brian.

John, it is not that simple if you think you can just reactivate the old full package in case a problem occurs. Remember, if you apply an ESU it inserts values into tables, and turning the old full package back on will be challenging, as you will have a hard time going through those tables and bringing back the old values.
For example, if you apply a year-end ESU which inserts values into a table (e.g. a BI Publisher template) and you then have to bring back the old package, you need to roll the table values back manually (or find some other method). If you don't do that, you will get unexpected results.

Ignore my statements above if you were fully focused on development rather than ESU-related full packages, because there you have more control over what your developers are doing.


Thanks
AD
 
Awesome replies guys, many thanks :)

We've decided to go ahead with both the package build and the deployment.
I was just questioning it all as we only have around 20 base mods (and all of these are very small in nature).
Since Aug 2012, however, we have had some 200+ update packages, and I haven't counted the ESUs yet.

If you look at your JAS server's logs, you'll see that during JAS server startup, the system discovers the current full package and each update applied. This can get to be a very long list, and building a new full package resets this to just the 1 new full package.

This is the kind of reason I've been looking for, as no one else here fully understands the geeky CNC side of things and I'm just a mere developer :)

Thanks again list
 
John

Yes, you need to deploy it. Building it just gets things staged; deploying it is what updates the specs and clears out all of the old Java objects on your web server. As another poster advised, it is good practice to deploy a full package every couple of months or so - more frequently if you do heavy development.

Gregg
 
Whoa, 200 updates? Your system is WAY overdue. If I get up to 20, I build and deploy a full.

Someone needs to clonk your CNC upside the head for letting it go that long...
 
Hi guys

Another question :)

When applying ESUs (non-baseline and not that large), do you then need to build a full package?

I'm thinking no, and that you only need to do this if the ESU is a baseline ESU.
 
I would do a full build, mainly because if ANY data structure definition (table, BSFN DS, PO, etc.) or other .h file definition changes, you need to force a full recompile of any C code that might reference those definitions.

Honestly, this goes for any change. If I add a parameter to a BSFN DS or change a table struct, I usually request a full build, since that BSFN or table could be referenced by other BSFNs in a myriad of places and you need to recompile all of those BSFNs.
 
Well John, I would agree - a full is only needed for a large ESU such as a baseline.
 
What if a "small" ESU consisted of three objects:

P4210
B4200311
D4200310F (F42FSEditLineDS)

The ESU change was to add a 30-character string field to the end of D4200310F, into which F4211FSEditLine will return some additional information when called.

If you only do an update build, you will be fighting off more zombies than The Walking Dead (or Shaun of the Dead if you prefer). This is because only B4200311 will be recompiled. ANY C code that calls F4211FSEditLine will be passing a D4200310F DS buffer that is too small, and B4200311 will be trying to write into it as if it were the larger buffer - the classic buffer overrun.

A quick search shows that on my current install D4200310F is referenced in at least 80+ different BSFNs. So is this a large or small ESU?
 
To me, a rule of only doing a full build for a large or baseline ESU is like saying you will only stop at the stoplight during rush hour.

The problem, IMO, is that the package build process has a major gap. You can do an update build, which is quick and easy, or you can do a FULL build, which, for just a few objects, feels like using a nuke to kill a fly. They need to add an option on the update build to do a full BUSBUILD (recompile). Hell, I would argue for safety that it shouldn't even be an option... EVERY build, update or full, should do a FULL BUSBUILD.
 
Here's the best practice, in my opinion: you should be building and deploying MOSTLY full packages in production after your implementation phase has finished - very few updates (one BSSV and maybe up to 4-5 updates before repeating the cycle). There are lots of historical reasons for this, but essentially it ensures you have a clean set of objects and that those objects are performing efficiently. If you touch a lot of objects, such as when installing a baseline ESU, then you definitely need a full package.
 
TL;DR
IMO, here are my best practices (and the ones we follow at my organization). The CNC group responsible for the builds can at any time choose to do a FULL build over an update, but it is the responsibility of the developer to let CNC know when a full build is REQUIRED. In extremely simple terms, a full build is absolutely REQUIRED when a .h file definition changes, that definition is referenced in one or more .h or .c file(s), and the corresponding objects are NOT listed in the objects for the build. And no, just finding the objects that reference definitions in the .h file and including them in the build may not be enough - I could give an example, but just know that the dependencies may not be as simplistic as "one level deep".



Longer Version:
If you want a very safe rule of thumb - do a full build if any of the following are true:
- A table structure or EXISTING index changes (adding an index doesn't require a full)

- A Business Function (NER or C) data structure changes

- A Business View is changed AND that Business View has a header file (the presence of a .h file indicates that it is used in a BSFN). In these cases I usually do a full text search of the C code to see if the Business View is used anywhere

- You change a UBE interconnect DS AND that UBE is called from a BSFN

- Any other typedef, enum, precompiler macro definition, etc. in a .h file changes

- If a processing option DS template is changed, the developer should really look to see if it is used in C code and paste in the new typedef accordingly. Technically, this does NOT require a full build: the C API call that loads processing options takes a size parameter, so if C code is not recompiled with the newly sized PO struct it won't cause any errors (see the sketch below). However, it will fill up the jde.log with annoying messages.
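
As a generic sketch of that size-parameter behavior (a hypothetical stand-in, not the actual JDE API signature):

#include <string.h>

/* Hypothetical PO loader that is told how big the caller's buffer is --
   the real JDE call does something analogous. */
int LoadProcessingOptions(void *lpPOBuffer, size_t nCallerSize,
                          const void *lpStoredPO, size_t nActualSize)
{
    /* Copy only as much as the caller's struct can hold. A caller compiled
       against an older, smaller PO typedef gets a truncated fill (and a
       size-mismatch message in jde.log) instead of a buffer overrun. */
    size_t nCopy = (nCallerSize < nActualSize) ? nCallerSize : nActualSize;

    memset(lpPOBuffer, 0, nCallerSize);
    memcpy(lpPOBuffer, lpStoredPO, nCopy);

    return (nCallerSize == nActualSize) ? 0 : 1;  /* 1 = mismatch logged */
}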


Doing a full build in these cases may often be "overkill" - it may NOT have been required. An update in these cases may have been fine; 80% of the time, an update works every time. Just like running a red light at 4am will be fine 99% of the time. As an aside, I don't understand the extreme aversion to full builds... it's not like you have to stand there and turn a crank - I think the computer does most of the heavy lifting.



A companion to this strategy is to have solid C coding practices for what you put in a C BSFN's .h file. If you adopt a strategy of only putting "public" definitions in the .h file and all "private" definitions in the .c file, it makes it very easy to determine when a FULL build is required - if you're changing something in the .h file, it probably means it's used by C code someplace else.


These are the general C BSFN coding practices we use to help us easily identify when a full build is needed:
- BSFN structs/typedefs, what I call the public interface to the public functions, are to be placed in the .h file (this is standard JDE coding practice).

- Internal structs/typedefs (NOT the ones defined and generated in the JDE toolset, like BSFN data structs - just generic internal structs) and other internal enums, precompile macro defs, etc. are to be placed at the top of the .c file, NOT the .h file, if they are ONLY used by that BSFN. If they are later needed by another BSFN, it is very simple to move them to the .h file - in this way they become "public". Doing this, if, for example, a struct in a .h file is changing, it is more than likely used by another BSFN someplace, and a full build is needed. If a struct at the top of a .c file changes, you know that it is ONLY used by that C BSFN (no need for a full build).

- Internal subroutine prototypes are to be declared static. If a subroutine needs to be statically linked to and called by another BSFN in the same DLL, the static definition is removed. Additionally, I put these prototypes at the top of the .c file if they are declared static, in keeping with the private/public theme (but this is optional - just a code organization thing). Doing this, if you are changing the signature of a static internal routine, you know that only this object is affected. If the internal routine is NOT static, you know that it is probably being used someplace else - i.e. you will need to track down the other places it is used and make changes accordingly - but in either case you most likely will NOT need a full build, because all affected objects will be included in the build.

- One and only one processing option DS struct/typedef should be generated and placed in a SINGLE .h file (if other BSFNs need this typedef, they can include the .h file - why people go pasting multiple copies of PO structs all over the place is beyond me). The same goes for any other JDE toolset generated struct, like UBE interconnects. Again, a changed PO struct doesn't require a full build, but you can optionally do one to avoid annoying jde.log entries. By having only ONE definition, the only object that needs to be in the build is the BSFN where the struct is defined - the full build will recompile all other BSFNs that reference it.

Following these general guidelines, I can quickly look at a BSFN's .h file and know what the public interfaces and definitions are - what is used by other C code someplace else. Changing the .h file definitions usually means a full build will be required. It also makes modifications simpler: I don't have to worry about the effects of changing a definition like an enum or a struct at the top of a .c file, because I know it only affects the code in that one object.
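
As a concrete illustration of that split (all object and member names here are hypothetical), a BSFN laid out this way makes the full-build decision obvious at a glance:

/* ---- B5900123.h : PUBLIC. Anything here may be referenced by other
        C code, so changing it usually means a full build. ---- */

/* BSFN data structure -- the public interface (standard JDE practice) */
typedef struct tagDSD5900123
{
    char szOrderType[3];
    long mnOrderNumber;        /* simplified stand-in for MATH_NUMERIC */
} DSD5900123;

/* Non-static prototype: may be statically linked to and called by
   other BSFNs in the same DLL */
int I5900123_SharedHelper(DSD5900123 *lpDS);

/* ---- B5900123.c : PRIVATE. Only this object uses these, so changing
        them never forces a full build. ---- */

/* Internal-only struct and enum live at the top of the .c file */
typedef struct tagINTERNAL_CACHE
{
    int  nCount;
    char cStatus;
} INTERNAL_CACHE;

enum { STATUS_OPEN = 1, STATUS_CLOSED = 2 };

/* Static prototype: invisible outside this translation unit, so its
   signature can change freely without affecting other objects */
static int I5900123_PrivateHelper(INTERNAL_CACHE *lpCache);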
 
Monthly full builds and updates in between. I like the sanity check that everything is talking properly.
 
Thanks again gents.

So what you're saying is it really depends on what's in the ESU, and not so much on its size?
Although I'm a bit surprised that if an ESU changes a .h DSTR, the ESU doesn't then include all the other .h files that reference it. (I used to do a find-in-files in C++, redo the typedef for them all, and add them to the same OMW project.)

But yes, I can see that if you did a full build, then all would be fine.

I'm all up for scheduled full builds, or one after a certain number of updates has been applied. But I also agree with the sentiment that we are swatting a fly with a nuke if we do a full build for every ESU.

A compromise could be a scheduled introduction of the ESUs which coincides with a full build anyway...? Problem solved?

....and Brian, Shaun of the Dead, obviously ;)
 