How to Create a Secure Gateway for Planning Analytics

Two elements must be in place to make the secure gateway work: a server-side configuration and a local-side configuration. This post deals with what has to be done on the IBM server so you can connect it to your local data. For instructions on establishing the local side of the connection, please see this post.

Server Side Configuration

In the Welcome Kit from IBM, there is a heap of instructions about different URLs and credentials for logging into different parts of the solution. We are interested in logging into Planning Analytics Control. The URL will be something like https://servername.planning-analytics.ibmcloud.com/control/ and you log in using the supplied credentials for controlling TM1 services.

Create Planning Analytics Secure Gateway

Once there you will see a screen like this:

[Image: create secure gateway for planning analytics]

Click on the funny-looking padlock and then the big black plus sign. This will open a dialogue box where you can create the secure gateway. Here I have just named it InfocubeTest and the token has been created by the tool. I have greyed out the token ;).

Take note of the instructions and copy the code starting with “docker run…” and just paste it into a local text file for now. This must include the secure gateway token.
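
For reference, here is a minimal sketch of what that command typically looks like for the IBM Secure Gateway client Docker image. The gateway ID and token below are placeholders; use the exact command from the dialogue, which already embeds your real values:

docker run -it ibmcom/secure-gateway-client <your-gateway-id> -t <your-security-token>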

You then need to configure the local server to accept connections from the Planning Analytics server using the secure gateway token. This local configuration can either use Docker or the IBM Secure Gateway Client tool. Click here to go to the instructions for the local configuration.

After the connection has been configured on the local server you will come back here to add a database or ODBC connection.

How to Create Headers in AsciiOutput

I was creating a text file the other day using AsciiOutput and realised that I needed a header line with column headings. Headings are useful for uploading the file into an external database with some confidence that each column is what you expect, and they can also form the identifiers in a subsequent TI that reloads the data into TM1.

Method to Create Headers in AsciiOutput

So how’d I do it?  Pretty easy really.

On the Prolog tab I defined a variable, let's call it vHeader, and set it equal to 1.

Then, on the tab where you're running the AsciiOutput (usually the Data tab), insert an IF statement and an additional AsciiOutput right before the AsciiOutput for the data. So, something like this (note that in this example I have already defined variables for the path and file names):

If ( vHeader = 1 );
  AsciiOutput ( vPath | vFile, 'heading1', 'heading2', 'heading3'... 'headingN' );
  vHeader = vHeader + 1;
Endif;

AsciiOutput ( vPath | vFile, 'data1', 'data2', 'data3'... 'dataN' );
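
Put together, a minimal sketch looks like this. The path, file name, column headings and data-source variables (vEmployeeID, vDepartment, nSalary) are placeholders I have assumed for illustration. It works because the Data tab runs once per record of the data source, so the header gets written exactly once, before the first data row:

# Prolog tab: set up the output file and the header flag
vPath = 'C:\TM1Output\';
vFile = 'Employees.csv';
vHeader = 1;

# Data tab: write the header line on the first record only, then the data line
If ( vHeader = 1 );
  AsciiOutput ( vPath | vFile, 'Employee ID', 'Department', 'Salary' );
  vHeader = vHeader + 1;
Endif;
AsciiOutput ( vPath | vFile, vEmployeeID, vDepartment, NumberToString ( nSalary ) );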

DatasourceNameForServer and DatasourceNameForClient TM1 Functions: Use and Syntax

Ever wondered how you can vary the data source for a Turbo Integrator process from inside the process itself, so it becomes dynamic? Well, here is a method.

Firstly, define a bunch of variables on the Prolog tab:

sCurrentMonth = CellGetS ( 'System Assumptions', 'No Year', 'No Month', 'Current Month', 'String' );
# sCurrentYear is assumed to come from the same cube; adjust the element names to suit your model
sCurrentYear = CellGetS ( 'System Assumptions', 'No Year', 'No Month', 'Current Year', 'String' );
sFileDir = '\\servername\folder\';

sFileName = sCurrentMonth | ' ' | SUBST ( sCurrentYear, 3, 2 ) | ' Employees.csv';
sFilePath = sFileDir | sFileName;

Then use DatasourceNameForServer and DatasourceNameForClient to point the TI at the specific file, for both the server and the client. These assignments also belong on the Prolog tab: the data source name must be set before the process starts reading data.

DatasourceNameForServer = sFilePath;
DatasourceNameForClient = sFilePath;

Why is this useful?  Well there are a couple of reasons:

  1. Where you have a file that is being automatically created and includes (in our case) the month and year in the filename, this is very useful.  It then allows us to keep historic versions of the source files in case we need to reload them.
  2. If you have a Dev and Prod environment, for example, you could hold the server name as an entry in a System Assumptions cube, retrieve it in the TI and use it dynamically to build the file path (see the sketch after this list).
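
Here is a minimal sketch of that second idea. The 'File Server' element and the cube coordinates are assumptions for illustration; adjust them to match your own System Assumptions cube:

# Prolog tab: the server half of the UNC path comes from an assumptions cube,
# so the same TI works unchanged in Dev and Prod
sServerName = CellGetS ( 'System Assumptions', 'No Year', 'No Month', 'File Server', 'String' );
sFileDir = '\\' | sServerName | '\folder\';
sFilePath = sFileDir | sFileName;

DatasourceNameForServer = sFilePath;
DatasourceNameForClient = sFilePath;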

Using a Framework Manager Package as a Data Source

You can use data published through Cognos BI as a data source for TM1. I had heaps of trouble trying to make this work. It was very frustrating, but I think it is worthwhile.

So why bother, I hear you ask first up. Well, I reckon that in a combined BI and TM1 environment, having a data set that uses a common metadata layer for both BI and TM1 makes sense: make one change in the FM package and it is reflected everywhere BI and TM1 use that data. Yes, it adds a layer and another technology, and it might slow things down a little. But if you are loading a cube in the middle of the night, who cares if it takes 10 minutes or 12?

Making it work, though, was a different story. First, the TM1 Package Connector is not installed by default with Cognos Express 10.2. Dumb IBM. So you need to get it, install it, and point it at the same Cognos environment in cogconfig as the CX environment: open cogconfig for CX, copy the Gateway and Dispatcher settings from the Environment properties, and paste them into the cogconfig for Cognos 10. Leave everything else the same and save the Cognos 10 cogconfig. That should make it work.
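
For illustration, the two Environment properties to copy across look something like the below. The host name, port and paths here are generic Cognos defaults, not values from a real CX install, so copy whatever your CX cogconfig actually shows:

Gateway URI:              http://cxserver/ibmcognos/cgi-bin/cognos.cgi
External dispatcher URI:  http://cxserver:9300/p2pd/servlet/dispatch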

Now open Turbo Integrator and select the Data Source tab. On here, select IBM Cognos Package as the data source. Here is where it was really frustrating: unless you have a Dimensionally Modelled Relational (DMR) model in your Framework Manager package, you can only work with “Custom Query”, and Custom Queries will only pick up Cognos BI reports that you have created.

So go to BI and create simple list reports for all the dimensions and facts that you have prepared in FM. Then go back to TI and voila, they appear as Custom Queries.

Now just choose the columns you want, preview the data and you can proceed as normal with your TI.