Sending BI Publisher output to an ECM (or Transform Content Center)

BOster

Legendary Poster
Currently we use Bottomline Transform Foundation Server (TFS) not only to re-author the PDF but also to store it in Transform Content Center (TCC) with searchable, indexed data such as Customer Number, Invoice Number, etc.

As we transition these existing processes to BI Publisher to re-author the PDF content (instead of using TFS), we still wish to store the resulting BI-produced PDF documents in TCC or any other enterprise content management (ECM) solution with the same structured-data search capabilities.

What is the best way to do this?


tl;dr

Today we do this:
ube -> TFS -> TCC (user can lookup and view PDF by invoice number in TCC)

We want to effectively do this:
ube -> BI Pub -> TCC (user can lookup and view PDF by invoice number in TCC)

or this:
ube -> BI Pub -> [Insert random ECM] (user can lookup and view PDF by invoice number)
 

jdelisths

Reputable Poster
To add to my response, if you want to build this yourself: If TCC or your ECM accepts emails, you can email the report. Or, you can build a program that polls and transfers the output from the BIP repository to TCC/ECM.
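The polling idea above could be sketched like this. This is a minimal sketch, assuming the BIP output lands as PDFs in a directory you can read and the ECM has a watched import folder; both paths, and the idea of copying files between them, are assumptions, not an actual BIP or TCC API.

```python
import shutil
from pathlib import Path

def poll_once(src: Path, dst: Path, seen: set) -> list:
    """Copy any PDF not yet seen from the BIP output folder (src)
    to the ECM's import folder (dst). Returns the names transferred."""
    transferred = []
    for pdf in sorted(src.glob("*.pdf")):
        if pdf.name not in seen:
            shutil.copy2(pdf, dst / pdf.name)  # copy, preserving timestamps
            seen.add(pdf.name)
            transferred.append(pdf.name)
    return transferred

if __name__ == "__main__":
    import time
    seen = set()
    while True:  # run as a scheduled job or a simple daemon
        poll_once(Path("/opt/bip/output"), Path("/opt/ecm/inbox"), seen)
        time.sleep(60)
```

A real version would also need to decide when a PDF is complete (e.g. wait until its size stops changing) before copying it.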
 

BOster

Legendary Poster
jdelisths said: "To add to my response, if you want to build this yourself: If TCC or your ECM accepts emails, you can email the report. Or, you can build a program that polls and transfers the output from the BIP repository to TCC/ECM."

The missing component in these solutions, from what I can tell, would be the ability to store the document with structured search data like Customer Number or Invoice Number. As part of our current process, the TFS PDF re-authoring also extracts the key data items from the content of the PDF that is used when stored in TCC.
 

jdelisths

Reputable Poster
In the email approach, you can have BIP add this key information to the subject.

And BTW, LynX Output Manager does this as well: Archive the outputs (BIP or otherwise) and key them by structured data (BIP only). The archived outputs can be searched by key data (by users and admins).
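The subject-line idea could be sketched as a simple packing convention that BIP fills in on the sending side and the ECM parses on ingestion. The field names and delimiters here are assumptions; any convention both ends agree on would do.

```python
def build_subject(doc_type: str, keys: dict) -> str:
    """Pack the index keys into an email subject line,
    e.g. to use as the bursting email subject."""
    packed = ";".join(f"{k}={v}" for k, v in sorted(keys.items()))
    return f"{doc_type}|{packed}"

def parse_subject(subject: str) -> tuple:
    """Reverse the packing on the ECM side to recover the index fields."""
    doc_type, packed = subject.split("|", 1)
    keys = dict(pair.split("=", 1) for pair in packed.split(";"))
    return doc_type, keys
```

Keeping the packing and parsing functions symmetric makes it easy to verify the round trip before wiring either end up.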
 

BOster

Legendary Poster
Maybe you could hook up the key data from the XML output to the formatted BIP output. Create some kind of x-ref table.

Can you describe your idea in a little more detail?

One of the ideas we kicked around here was a batch process (in JDE or out) that would read through the finished submitted jobs, figure out which ones were not yet archived, use the XML created by the UBE to determine the index values, and then store the PDF in TCC (or another ECM). This solution would not be real time, which for most things would probably not be an issue, but it might be nice to have it happen as one single process chain.
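The "use the XML to figure out the index values" step might look like the sketch below. The element names are assumptions; the actual XML data file produced by a given UBE will have its own element names per report design.

```python
import xml.etree.ElementTree as ET

def extract_index_values(xml_text: str, fields: list) -> dict:
    """Pull the named index elements (e.g. Customer Number, Invoice Number)
    out of the UBE's XML data file. Element names vary by report;
    the caller supplies the ones that matter for archiving."""
    root = ET.fromstring(xml_text)
    values = {}
    for field in fields:
        elem = root.find(f".//{field}")  # first occurrence anywhere in the tree
        if elem is not None and elem.text:
            values[field] = elem.text.strip()
    return values
```

The resulting dictionary is what the batch process would hand to TCC (or another ECM) as the structured index data alongside the PDF.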
 

BOster

Legendary Poster
jdelisths said: "In the email approach, you can have BIP add this key information to the subject. And BTW, LynX Output Manager does this as well: Archive the outputs (BIP or otherwise) and key them by structured data (BIP only). The archived outputs can be searched by key data (by users and admins)."

Good to know. I looked at your website and watched the demo video and I didn't see anything that alluded to this capability. When LynX archives can you specify TCC as the repository?
 

craig_welton

Legendary Poster
Yes, that was the idea: parse the XML output and connect it to the final document (I haven't researched the details; we would probably have to burst by document).

Side note: In one implementation we ended up using a trigger on F986110 to get real-time processing. The solution "publishes" the legacy PDF output or BIP output to a simple document management system. There is no metadata to search on; it's more like making specific JDE reports available in one location. It can email the output via the same process.

Side note 2: We did use OSA successfully for the same type of solution (emailing and publishing UBE output). But I believe OSA will not work with BIP output.

Craig
 

jdelisths

Reputable Poster
The demo is 3 years old and the web page has not been updated. The latest version (v3.1) includes archival/keying of BIP outputs. The archive/index keying functionality (that I mentioned above) is within the product, not in an external ECM.

If you want to send the output to an external ECM like SharePoint (which I believe was in the demo?), you can use an external command. The command can receive run-time values (job #, report, version, key values, etc.). We have clients using the command feature to do all kinds of extended processing: FTPing PDFs, CSVs, etc.; splitting a large PDF into smaller ones and then emailing them based on some criteria; and so on. According to TCC's brochure:

"Legacy documents and content files can also be imported into Transform Content Center with associated index files, enabling even non-Bottomline generated documents to be stored as fully searchable items within the database"

So, this should be feasible.
 

shearerj

Reputable Poster
Here are a few approaches to pass index information about a BIP output to an enterprise content management (ECM) system:

1. Encode the metadata key into the filename (example: PurchaseOrder-5673-OP-12345.PDF).
2. Encode the metadata into the subject line of an emailed PDF file. The email is sent to an email-enabled SharePoint library. SharePoint can then parse the subject line to get the index fields.
3. Place a barcode (1D or 2D) on the PDF and have the ECM decode the barcode on ingestion.

In the case of many ECMs, you can back-fill the missing metadata by querying the source system. So if you know the company number, document type, and PO number, you can automatically back-fill all of the other PO information, including vendor, currency amounts, etc. Hope some of this helps.
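Approach 1 above is simple to round-trip. Here is a sketch using the example filename from the list; the specific key fields (document type, company, order type, number) and the hyphen delimiter are assumptions, and a real convention would need keys that can never themselves contain the delimiter.

```python
from pathlib import Path

def encode_filename(doc_type: str, company: str, order_type: str, number: str) -> str:
    """Encode the index keys into the PDF filename,
    e.g. PurchaseOrder-5673-OP-12345.PDF."""
    return f"{doc_type}-{company}-{order_type}-{number}.PDF"

def decode_filename(name: str) -> dict:
    """Recover the index keys from the filename on ECM ingestion."""
    doc_type, company, order_type, number = Path(name).stem.split("-")
    return {"doc_type": doc_type, "company": company,
            "order_type": order_type, "number": number}
```

On ingestion, the decoded keys (here, company, document type, and number) are exactly what the back-fill query against the source system would need.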
 

BOster

Legendary Poster
#3 is a really interesting idea; I will have to keep that one in mind if we finally implement a true enterprise-level ECM.
 