Channel: Integration Cloud Service – ATeam Chronicles

Configuring HTTPS between Integration Cloud Service and Java Cloud Service


In a previous post, I discussed some general topics relating to the usage of HTTPS and certificates within Oracle Public Cloud. In this follow up piece, I will work through a concrete example and explain how to set up a Java Cloud Service instance in such a way that Integration Cloud Service can consume a service deployed to that platform over HTTPS.

The use case we have in mind here is a simple one. A custom REST-based service is deployed to WebLogic on JCS (I’ll use a simple Servlet that returns a JSON payload). An integration defined in Integration Cloud Service uses the REST adaptor to invoke that service over HTTPS. Since JCS is an example of a compute-based PaaS service, it is provisioned by default without an external hostname and with a self-signed certificate mapped to the Load Balancer IP Address. This is different to the ICS instance, which is accessible via an *.oraclecloud.com hostname with an automatically-trusted certificate. The first thing we will do is configure JCS to use a hostname that we provide, rather than the IP address. We’ll then look at how to provision a certificate for that instance and then finally, how to configure ICS.

I’ve used a JCS instance based on WebLogic 12.1.3 and Oracle Traffic Director 11.1.1.9 for this post. Exact steps may differ somewhat for other versions of the service.

Configuring JCS with your own hostname

I’ve deployed my simple Servlet to WebLogic via the console and for now, the only option available to me is to access it via the IP address of the JCS Load Balancer. We can see from the screenshots below that my web browser first prompts me to accept the self-signed certificate before accessing the end point, which is not what we want to happen:

AccessByIP

I’ve added a DNS entry mapping that IP (140.86.13.181) to an A record within my domain as below:

DNSEntry

And I also add this hostname (jcs-lb.securityateam.org.uk) in the OTD console on my JCS instance:

AddHostnameToOTD

I can now access the service with the hostname, but the certificate issue remains:

AccessByHostname

Configuring JCS with a certificate

We need to configure our JCS instance with a certificate that matches our hostname. There are two options:

1. Buy a certificate from a 3rd-party Certificate Authority

This option is preferable for production as the configuration of clients is far simpler. There is, generally, a cost associated, though. I’ve opted to use a trial certificate and have performed the following steps:

The first step (which is the same for either option) is to generate a Certificate Signing Request. When we do this, OTD generates a keypair and includes the public key in the request, which is to be sent to a CA for signing. Note how we use the hostname of our server as the common name (CN) in the request.
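Before sending the CSR off, its contents can be double-checked from the command line; a quick sketch, assuming the request was saved locally as server-cert.csr (as it is later in this post):

# show the subject and full contents of the certificate signing request
openssl req -in server-cert.csr -noout -subject -text

The subject should show CN=jcs-lb.securityateam.org.uk, the load balancer hostname configured above.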

CertRequest OTD-CSR

I copy the CSR and paste in to the CA website and obtain my certificate, which is emailed to me once issued.

FreeTrialCert

Along with the server certificate itself, I receive a number of root and intermediate CA certificates, which I install into OTD as CA Certificates before importing my new server certificate.

InstallCACert

I deployed the configuration and restarted OTD (just to be safe), before copying the Base64-encoded server certificate I was sent and importing that into OTD.

InstallServerCert

The last step here is to modify my HTTPS listener in OTD to use my new certificate, as below. Once that is done, I can successfully connect to the server over SSL using my hostname.

AddCertToLisn GoodConnection

2. Obtain a certificate from your own (self-signed) Certificate Authority

Many organisations that use TLS certificates widely for internal communication security will have their own in-house Certificate Authority. These organisations have weighed up the costs and benefits and decided that it makes more sense to sign all of their server certificates in house – and to deal with the pain of configuring clients to manually trust this CA – than it does to buy a certificate for each server.

Most of the configuration steps from a JCS/OTD perspective are the same. I am going to use my colleague Chris Johnson’s simple yet awesome Simple CA script to create a self-signed CA certificate. I create a certificate signing request from within the OTD console as before, and then use an openssl command like the below to create the certificate based on my CSR.

openssl x509 -req -in server-cert.csr  -out server-cert.cer -CA ca.crt -CAkey ca.key -CAcreateserial -CAserial ca.serial -days 365 -sha256

That command uses the CSR I created (which I saved as “server-cert.csr”) and generates a certificate, signed by the CA certificate (“ca.crt”) created by Chris’s script. The output is in “server-cert.cer” and I can validate the contents as below:

ValidateSelfSigned
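If you prefer the command line, the same validation can be done with OpenSSL; a minimal sketch using the file names from this example:

# show subject, issuer and validity dates of the newly issued server certificate
openssl x509 -in server-cert.cer -noout -subject -issuer -dates
# confirm that the certificate chains back to the self-signed CA
openssl verify -CAfile ca.crt server-cert.cer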

Now I repeat the steps above; first importing the self-signed CA certificate into OTD as a trusted certificate, then importing my server certificate and finally updating my listener to use the new certificate.

SelfSigned

One important change, though, is that I can no longer hit the REST endpoint directly with my browser, since, once again, the “Unknown Issuer” exception prevents my browser from establishing a secure connection. Because the CA cert that signed my server certificate is not trusted by the browser, I need to manually import this certificate into the browser trust store before I can access the URL.

FFImportCA

Connecting to JCS from ICS

Within our Integration Cloud Service console, we’re going to create a new Connection to our REST end-point on JCS. The steps that we need to follow will depend on which of the two options above we’ve gone with. Let’s do the simpler one first.

1. Connecting when JCS is using a certificate from a 3rd-party CA

ICS ships with a set of pre-configured trusted CA certificates, as you can see here:

ICSTrustedCA

As long as the SSL certificate that you have installed in your JCS instance has been signed by one of the pre-configured trusted CA’s in this list, then you don’t need to do anything more in order to configure the HTTPS connection using the ICS REST Adapter.

ICS-Success

2. Connecting when JCS is using a certificate from a self-signed CA

I’ve now changed my OTD listener back to the certificate signed by the self-signed CA. Here’s what happens when I test the connection in ICS:

ICS-Fail

The error message is a rather familiar one, especially to those who are used to configuring Java environments to connect to un-trusted certificates:

Unable to test connection "JCSREST_ROTTOTEST". [Cause: CASDK-0003]: 
 -  CASDK-0003: Unable to parse the resource, https://jcs-lb.securityateam.org.uk/simple/users. Verify that URL is reachable, can be parsed and credentials if required are accurate
  -  sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
   -  PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    -  unable to find valid certification path to requested target

This is, in fact, exactly the same exception I get when using a simple Java test client to connect to that end-point:

Java-Fail
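The same trust failure can be diagnosed, and for a standalone Java client fixed, from the command line. The sketch below is illustrative; the truststore path and alias are assumptions that vary by JVM and environment:

# display the certificate chain presented by the JCS load balancer
openssl s_client -connect jcs-lb.securityateam.org.uk:443 -showcerts </dev/null

# for a standalone Java client, import the self-signed CA into the JVM truststore
keytool -importcert -trustcacerts -alias selfsigned-ca -file ca.crt \
        -keystore $JAVA_HOME/jre/lib/security/cacerts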

Fortunately, the fix is quite simple. All I need to do is to manually import the self-signed CA cert into ICS as a trusted issuer and I can then successfully connect to the REST endpoint.

ICSImportSS
ICSImportedSS

Once I perform the above step, I am able to successfully connect from ICS to JCS once more.


Oracle HCM Cloud – Bulk Integration Automation Using SOA Cloud Service


Introduction

Oracle Human Capital Management (HCM) Cloud provides a comprehensive set of tools, templates, and pre-packaged integration to cover various scenarios using modern and efficient technologies. One of the patterns is the batch integration to load and extract data to and from the HCM cloud. HCM provides the following bulk integration interfaces and tools:

HCM Data Loader (HDL)

HDL is a powerful tool for bulk-loading data from any source to Oracle Fusion HCM. It supports important business objects belonging to key Oracle Fusion HCM products, including Oracle Fusion Global Human Resources, Compensation, Absence Management, Performance Management, Profile Management, Global Payroll, Talent and Workforce Management. For detailed information on HDL, please refer to this.

HCM Extracts

HCM Extract is an outbound integration tool that lets you select HCM data elements, extract them from the HCM database, and archive them as XML. This archived raw XML data can be converted into a desired format and delivered to recipients over the supported delivery channels.

Oracle Fusion HCM provides the above tools with comprehensive user interfaces for initiating data uploads, monitoring upload progress, and reviewing errors, with real-time information provided for both the import and load stages of upload processing. Fusion HCM provides the tools, but additional orchestration is required, such as generating the FBL or HDL file, uploading these files to WebCenter Content, and initiating the FBL or HDL web services. This post describes how to design and automate these steps leveraging Oracle Service Oriented Architecture (SOA) Cloud Service deployed on Oracle’s cloud Platform As a Service (PaaS) infrastructure. For more information on SOA Cloud Service, please refer to this.

Oracle SOA is the industry’s most complete and unified application integration and SOA solution. It transforms complex application integration into agile and re-usable service-based components to speed time to market, respond faster to business requirements, and lower costs. SOA facilitates the development of enterprise applications as modular business web services that can be easily integrated and reused, creating a truly flexible, adaptable IT infrastructure. For more information on getting started with Oracle SOA, please refer to this. For developing SOA applications using SOA Suite, please refer to this.

These bulk integration interfaces and patterns are not applicable to Oracle Taleo.

Main Article

 

HCM Inbound Flow (HDL)

Oracle WebCenter Content (WCC) acts as the staging repository for files to be loaded and processed by HDL. WCC is part of the Fusion HCM infrastructure.

The loading process for FBL and HDL consists of the following steps:

  • Upload the data file to WCC/UCM using WCC GenericSoapPort web service
  • Invoke the “LoaderIntegrationService” or the “HCMDataLoader” to initiate the loading process.

However, the above steps assume the existence of an HDL file and do not provide a mechanism to generate an HDL file for the respective objects. In this post we will use a sample use case in which we receive the data file from the customer, transform the data to generate an HDL file, and then initiate the loading process.

The following diagram illustrates the typical orchestration of the end-to-end HDL process using SOA cloud service:

 

hcm_inbound_v1

HCM Outbound Flow (Extract)

The “Extract” process for HCM has the following steps:

  • An Extract report is generated in HCM either by user or through Enterprise Scheduler Service (ESS)
  • Report is stored in WCC under the hcm/dataloader/export account.

 

However, the report must then be delivered to its destination depending on the use cases. The following diagram illustrates the typical end-to-end orchestration after the Extract report is generated:

hcm_outbound_v1

 

For HCM bulk integration introduction including security, roles and privileges, please refer to my blog Fusion HCM Cloud – Bulk Integration Automation using Managed File Trasnfer (MFT) and Node.js. For introduction to WebCenter Content Integration services using SOA, please refer to my blog Fusion HCM Cloud Bulk Automation.

 

Sample Use Case

Assume that a customer periodically receives benefits data from their partner in a file with CSV (comma separated value) format. This data must be converted into HDL format for the "ElementEntry" object and the loading process initiated in Fusion HCM cloud.

This is a sample source data:

E138_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,23,Reason,Corrected all entry value,Date,2013-01-10
E139_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,33,Reason,Corrected one entry value,Date,2013-01-11

This is the HDL format of the ElementEntry object that needs to be generated based on the above sample file:

METADATA|ElementEntry|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|MultipleEntryCount|LegislativeDataGroupName|ElementName|EntryType|CreatorType
MERGE|ElementEntry|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|E|H
MERGE|ElementEntry|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|E|H
METADATA|ElementEntryValue|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|MultipleEntryCount|LegislativeDataGroupName|ElementName|InputValueName|ScreenEntryValue
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Amount|23
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Reason|Corrected all entry value
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Date|2013-01-10
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Amount|33
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Reason|Corrected one entry value
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Date|2013-01-11

SOA Cloud Service Design and Implementation

A canonical schema pattern has been implemented to design the end-to-end inbound bulk integration process – from the source data file to generating the HDL file and initiating the loading process in the HCM cloud. The XML schema of the HDL object "ElementEntry" is created. The source data is mapped to this HDL schema and SOA activities generate the HDL file.

Having a canonical pattern automates the generation of the HDL file, and it becomes a reusable asset for various interfaces. The developer or business user only needs to focus on mapping the source data to this canonical schema. All other activities, such as generating the HDL file, compressing and encrypting it, uploading it to WebCenter Content, and invoking the web services, need to be developed only once and then become reusable assets.

Please refer to Wikipedia for the definition of Canonical Schema Pattern

The design consists of the following steps:

1. Convert source data file from delimited format to XML

2. Generate Canonical Schema of ElementEntry HDL Object

3. Transform source XML data to HDL canonical schema

4. Generate and compress HDL file

5. Upload a file to WebCenter Content and invoke HDL web service
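For step 4 above, the generated HDL content is written out as a .dat file and packaged into a zip archive before it is uploaded to WebCenter Content. A minimal sketch of that packaging step, with purely illustrative file names:

# package the generated HDL file for upload (names are assumptions for illustration)
zip ElementEntry.zip ElementEntry.dat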

 

Please refer to SOA Cloud Service Develop and Deploy for introduction and creating SOA applications.

SOA Composite Design

This is a composite based on above implementation principles:

hdl_composite

Convert Source Data to XML

“GetEntryData” in the above composite is a File Adapter service. It is configured to use native format builder to convert CSV data to XML format. For more information on File Adapter, refer to this. For more information on Native Format Builder, refer to this.

The following provides detailed steps on how to use Native Format Builder in JDeveloper:

In Native Format Builder, select the delimited format type and use the source data as a sample to generate an XML schema. Please see the following diagrams:

FileAdapterConfig

nxsd1

nxsd2_v1 nxsd3_v1 nxsd4_v1 nxsd5_v1 nxsd6_v1 nxsd7_v1

Generate XML Schema of ElementEntry HDL Object

A similar approach is used to generate ElementEntry schema. It has two main objects: ElementEntry and ElementEntryValue.

ElementEntry Schema generated using Native Format Builder

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://TargetNamespace.com/GetEntryHdlData" targetNamespace="http://TargetNamespace.com/GetEntryHdlData" elementFormDefault="qualified" attributeFormDefault="unqualified" nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-8">
<xsd:element name="Root-Element">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Entry" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="METADATA" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementEntry" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveStartDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveEndDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="AssignmentNumber" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="MultipleEntryCount" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="LegislativeDataGroupName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EntryType" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="CreatorType" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:annotation>
<xsd:appinfo>NXSDSAMPLE=/ElementEntryAllSrc.dat</xsd:appinfo>
<xsd:appinfo>USEHEADER=false</xsd:appinfo>
</xsd:annotation>
</xsd:schema>

ElementEntryValue Schema generated using Native Format Builder

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://TargetNamespace.com/GetEntryValueHdlData" targetNamespace="http://TargetNamespace.com/GetEntryValueHdlData" elementFormDefault="qualified" attributeFormDefault="unqualified" nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-8">
<xsd:element name="Root-Element">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="EntryValue" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="METADATA" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementEntryValue" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveStartDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveEndDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="AssignmentNumber" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="MultipleEntryCount" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="LegislativeDataGroupName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="InputValueName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ScreenEntryValue" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:annotation>
<xsd:appinfo>NXSDSAMPLE=/ElementEntryAllSrc.dat</xsd:appinfo>
<xsd:appinfo>USEHEADER=false</xsd:appinfo>
</xsd:annotation>
</xsd:schema>

In Native Format Builder, change the "|" separator to "," in the sample file, and then change the terminatedBy value back to "|" for each element in the generated schema.

Transform Source XML Data to HDL Canonical Schema

Since we are using a canonical schema, all we need to do is map the source data appropriately, and Native Format Builder will convert each object into the HDL output file. The transformation could be complex depending on the source data format and the organization of data values. In our sample use case, each row has one ElementEntry object and 3 ElementEntryValue sub-objects.

The following provides the organization of the data elements in a single row of the source:

Entry_Desc_v1

The main ElementEntry attributes are mapped from each respective row, but the ElementEntryValue attributes are located at the end of each row. In this sample, this results in 3 ElementEntryValue entries per row. This can be achieved easily by splitting and transforming each row with different mappings as follows:

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – map pair of columns "1" from above diagram

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – map pair of columns "2" from above diagram

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – map pair of columns "3" from above diagram

 

Metadata Attribute

The most common use case is to use the "merge" action for creating and updating objects. In this use case it is hard-coded to "MERGE", but the action could be made dynamic if the source data row carries this information. The "delete" action removes the entire record and must not be used with a "merge" instruction for the same record, as HDL cannot guarantee in which order the instructions will be processed. It is highly recommended to correct the data rather than to delete and recreate it using the "delete" action. Deleted data cannot be recovered.

 

This is the sample schema developed in JDeveloper to split each row into 3 rows for ElementEntryValue object:

<xsl:template match="/">
<tns:Root-Element>
<xsl:for-each select="/ns0:Root-Element/ns0:Entry">
<tns:Entry>
<tns:METADATA>
<xsl:value-of select="'MERGE'"/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select="'ElementEntryValue'"/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select="ns0:C2"/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select="ns0:C3"/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select="ns0:C1"/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select="ns0:C4"/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select="ns0:C5"/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select="ns0:C6"/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select="ns0:C9"/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select="ns0:C10"/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
<xsl:for-each select="/ns0:Root-Element/ns0:Entry">
<tns:Entry>
<tns:METADATA>
<xsl:value-of select="'MERGE'"/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select="'ElementEntryValue'"/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select="ns0:C2"/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select="ns0:C3"/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select="ns0:C1"/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select="ns0:C4"/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select="ns0:C5"/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select="ns0:C6"/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select="ns0:C11"/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select="ns0:C12"/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
<xsl:for-each select="/ns0:Root-Element/ns0:Entry">
<tns:Entry>
<tns:METADATA>
<xsl:value-of select="'MERGE'"/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select="'ElementEntryValue'"/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select="ns0:C2"/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select="ns0:C3"/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select="ns0:C1"/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select="ns0:C4"/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select="ns0:C5"/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select="ns0:C6"/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select="ns0:C13"/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select="ns0:C14"/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
</tns:Root-Element>
</xsl:template>

BPEL Design – “ElementEntryPro…”

This is a BPEL component where all the major orchestration activities are defined. In this sample, all the activities after the transformation are reusable and can be moved to a separate composite. A separate composite may be developed for transformation and data enrichment only, which then invokes the reusable composite to complete the loading process.

 

hdl_bpel_v2

 

 

SOA Cloud Service Instance Flows

The following diagram shows an instance flow:

ElementEntry Composite Instance

instance1

BPEL Instance Flow

audit_1

Receive Input Activity – receives the delimited data, converted to XML format through Native Format Builder, using the File Adapter

audit_2

Transformation to Canonical ElementEntry data

Canonical_entry

Transformation to Canonical ElementEntryValue data

Canonical_entryvalue

Conclusion

This post demonstrates how to automate HCM inbound and outbound patterns using SOA Cloud Service. It shows how to convert the customer’s data to HDL format and then initiate the loading process. This process can also be replicated for other Fusion Applications pillars such as Oracle Enterprise Resource Planning (ERP).

Cloud Security: Seamless Federated SSO for PaaS and Fusion-based SaaS


Introduction

Oracle Fusion-based SaaS Cloud environments can be extended in many ways. While customization is the standard activity to set up a SaaS environment for your business needs, chances are that you want to extend your SaaS for more sophisticated use cases.

In general this is not a problem and Oracle Cloud offers a great number of possible PaaS components for this. However, user and login experience can be a challenge. Luckily, many Oracle Cloud PaaS offerings use a shared identity management environment to make the integration easier.

This article describes how the integration between Fusion-based SaaS and PaaS works in general and how easy the configuration can be done.

Background

At the moment, Oracle Fusion-based SaaS comes with its own identity management stack. This stack can be shared between Fusion-based SaaS offerings like Global Human Capital Management, Sales Cloud, Financials Cloud, etc.

On the other hand, many Oracle PaaS offerings use a shared identity management (SIM-protected PaaS) and can share it if they are located in the same data center and identity domain. If done right, integration of SIM-protected PaaS and Fusion-based SaaS for Federated SSO can be done quite easily.

Identity Domain vs Identity Management Stack

In Oracle Cloud environments the term identity is used for two different parts and can be quite confusing.

  • Identity Domain – Oracle Cloud environments are part of an Identity Domain that governs service administration, for example, start and restart of instances, user management, etc. The user management always applies to the service administration UI but may not apply to the managed environments.
  • Identity Management Stack – Fusion-based SaaS has its own Identity Management Stack (or IDM Stack) and is also part of an Identity Domain (for managing the service).

Federated Single Sign-On

As described in Cloud Security: Federated SSO for Fusion-based SaaS, Federated Single Sign-on is the major user authentication solution for Cloud components.

Among its advantages are a single source for user management, single location of authentication data and a chance for better data security compared to multiple and distinct silo-ed solutions.

Components

In general, we have two component groups we want to integrate:

  • Fusion-based SaaS Components – HCM Cloud, Sales Cloud, ERP Cloud, CRM Cloud, etc.
  • SIM-protected PaaS Components – Developer Cloud Service, Integration Cloud Service, Messaging Cloud Service, Process Cloud Service, etc.

Each component group should share the Identity Domain. For seamless integration both groups should be in the same Identity Domain.

Integration Scenarios

The integration between both component groups follows two patterns. The first pattern shows the integration of both component groups in general. The second pattern is an extension of the first, but allows the usage of a third-party Identity Provider solution. The inner workings for both patterns are the same.

Federated Single Sign-On

This scenario can be seen as a “standalone” or self-contained scenario. All users are maintained in the Fusion-based IDM stack and synchronized with the shared identity management stack. The SIM stack acts as the Federated SSO Service Provider and the Fusion IDM stack acts as the Identity Provider. Login of all users and for all components is handled by the Fusion IDM stack.

SaaS-SIM-1

Federated Single Sign-On with Third Party Identity Provider

If an existing third-party Identity Provider should be used, the above scenario can be extended as depicted below. The Fusion IDM stack will act as a Federation Proxy and redirect all authentication requests to the third-party Identity Provider.

SaaS-SIM-IdP-2

User and Role Synchronization

User and Role synchronization is the most challenging part of Federated SSO in the Cloud. Although a manageable part, it can be really challenging if the number of identity silos is too high. The lower the number of identity silos the better.

User and Role Synchronization between Fusion-based SaaS and SIM-protected PaaS is expected to be available in the near future.

Requirements and Setup

To get the seamless Federated SSO integration between SIM-protected PaaS and Fusion-based SaaS these requirements have to be fulfilled:

  • All Fusion-based SaaS offerings should be in the same Identity Domain and environment (i.e., sharing the same IDM stack)
  • All SIM-based PaaS offerings should be in the same Identity Domain and data center
  • Fusion-based SaaS and SIM-based PaaS should be in the same Identity Domain and data center

After all, these are just a few manageable requirements which must be mentioned during the ordering process. Once this is done, the integration between Fusion-based SaaS and SIM-protected PaaS will be done automatically.

Integration of a third-party Identity Provider is still an on-request, Service Request based task (see Cloud Security: Federated SSO for Fusion-based SaaS). When requesting this integration, it is strongly recommended to explicitly add the Federation SSO Proxy setup to the request!

Note: The seamless Federated SSO integration is a packaged deal and comes with a WebService level integration setting up the Identity Provider as the trusted SAML issuer, too. You can’t get the one without the other.


ICS Connectivity Agent Advanced Configuration


Oracle’s Integration Cloud Service (ICS) provides a feature that helps with the challenge of cloud-to-ground integration with resources behind a firewall. This feature is called the ICS Connectivity Agent (additional details about the Agent can be found under New Agent Simplifies Cloud to On-premises Integration). The design of the Connectivity Agent is to provide a safe, simple, and quick setup for connecting ICS to on-premise resources. In many cases this installation and configuration is an almost no-brainer activity. However, there are edge cases and network configurations that make this experience a bit more challenging.

We have encountered the following post-installation challenges with the ICS 16.3.5 Connectivity Agent:

1. Networks containing proxy server with SSL and/or Man In The Middle (MITM) proxy
2. On-premise resources requiring SSL
3. nonProxyHost required for on-premise resources
4. White list OMCS and upgrade URIs

It’s important to note that future releases of ICS may improve on these configuration challenges. However, some are not related to the product (e.g., network white list) and appropriate actions will need to be coordinated with the on-premise teams (e.g., network administrators).

Import Certificates

One of the more challenging activities with post-configuration of the ICS Connectivity Agent is updating the keystore with certificates that the agent needs to trust. Since the agent is a lightweight, single-server WebLogic installation, there are no web consoles available to help with the certificate import. However, if you investigate this topic on the internet you will eventually end up with details on using the Java keytool and WebLogic WLST to accomplish this task. Instead of doing all this research, I am including a set of scripts (bash and WLST) that can be used to expedite the process. The scripts consist of 4 files, where each file contains a header that provides details on how the script works and its role in the process. Once downloaded, please review these headers to make yourself familiar with what is required and how they work together.

The following is a step-by-step example on using these scripts:

1. Download the scripts archive on the machine where the Connectivity Agent is running
Scripts: importToAgent.tar
2. Extract the scripts archive into a directory. For example:
[oracle@icsagent importToAgent]$ tar xvf importToAgent.tar.gz
createKeystore.sh
importToAgentEnv.sh
importToAgent.sh
importToAgent.py
3. Update the importToAgentEnv.sh to reflect your agent environment
4. Create a subdirectory that will be used to hold all the certificates that will need to be imported to the agent keystore:
[oracle@icsagent importToAgent]$ mkdir certificates
5. Download or copy all certificates in the chain to the directory created in the previous step:
[oracle@icsagent importToAgent]$ ls -l certificates/
total 12
-rwxr-x---. 1 oracle oinstall 1900 Nov 1 14:55 intermediate-SymantecClass3SecureServerCA-G4.crt
-rwxr-x---. 1 oracle oinstall 1810 Nov 1 14:55 main-us.oracle.com.crt
-rwxr-x---. 1 oracle oinstall 1760 Nov 1 14:55 root-VeriSignClass3PublicPrimaryCertificationAuthority-G5.crt
NOTE: You can use your browser to export the certificates if you do not have them available elsewhere. Simply put the secured URL in the browser and then access the certificates from the “lock”:

AdvancedAgentConfig-002
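If a browser is not convenient, the certificate chain can usually also be captured from the command line with OpenSSL; a sketch, where host and port are placeholders for the secured endpoint:

# print every certificate the remote endpoint presents during the TLS handshake
openssl s_client -connect <secured-host>:443 -showcerts </dev/null

Each certificate block in the output can then be saved as its own .crt file in the certificates directory created in step 4.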

6. Execute the createKeystore.sh:
[oracle@icsagent importToAgent]$ bash createKeystore.sh -cd=./certificates -cp=*.crt
Certificates will be added to ./certificates/agentcerts.jks
Adding certificate intermediate-SymantecClass3SecureServerCA-G4.crt
Certificate was added to keystore

Adding certificate main-us.oracle.com.crt
Certificate was added to keystore

Adding certificate root-VeriSignClass3PublicPrimaryCertificationAuthority-G5.crt
Certificate already exists in system-wide CA keystore under alias
Do you still want to add it to your own keystore? [no]: yes
Certificate was added to keystore

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 3 entries

main-us, Nov 1, 2016, trustedCertEntry,
Certificate fingerprint (SHA1): 9D:61:69:38:4C:54:AC:44:5C:22:90:E1:8F:80:8F:85:43:9E:8D:7C
intermediate-symantecclass3secureserverca-g4, Nov 1, 2016, trustedCertEntry,
Certificate fingerprint (SHA1): FF:67:36:7C:5C:D4:DE:4A:E1:8B:CC:E1:D7:0F:DA:BD:7C:86:61:35
root-verisignclass3publicprimarycertificationauthority-g5, Nov 1, 2016, trustedCertEntry,
Certificate fingerprint (SHA1): 4E:B6:D5:78:49:9B:1C:CF:5F:58:1E:AD:56:BE:3D:9B:67:44:A5:E5

Keystore ready for connectivity agent import: ./certificates/agentcerts.jks

NOTE: This script has created a file called importToAgent.ini that contains details that will be used by the importToAgent.py WLST script. Here’s an example of what it looks like:

[oracle@icsagent importToAgent]$ cat importToAgent.ini
[ImportKeyStore]
appStripe: system
keystoreName: trust
keyAliases: intermediate-SymantecClass3SecureServerCA-G4,main-us,root-VeriSignClass3PublicPrimaryCertificationAuthority-G5
keyPasswords: changeit,changeit,changeit
keystorePassword: changeit
keystorePermission: true
keystoreType: JKS
keystoreFile: ./certificates/agentcerts.jks
7. Make sure your agent server is running and execute the importToAgent.sh:
[oracle@icsagent importToAgent]$ bash importToAgent.sh -au=weblogic -ap=welcome1 -ah=localhost -aport=7001

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Using the following for the importKeyStore:
WebLogic URI = t3://localhost:7001
WebLogic User = weblogic
WebLogic Password = welcome1
appStripe = system
keystoreName = trust
keyAliases = intermediate-SymantecClass3SecureServerCA-G4,main-us,root-VeriSignClass3PublicPrimaryCertificationAuthority-G5
keyPasswords = changeit,changeit,changeit
keystorePassword = changeit
keystorePermission = true
keystoreType = JKS
keystoreFile = ./certificates/agentcerts.jks

Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server "AdminServer" that belongs to domain "agent-domain".

Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.

Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root.
For more help, use help('serverRuntime')

Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root.
For more help, use help('domainRuntime')

Keystore imported. Check the logs if any entry was skipped.

At this point you will have imported the certificates into the keystore of the running Connectivity Agent. I always bounce the agent server to make sure it starts cleanly and everything is picked up fresh.

Update http.nonProxyHost

If your network contains a proxy server, you will want to make sure that any on-premise resource the agent will be connecting to is on the http.nonProxyHosts list.  This way the agent knows to not use the proxy when trying to connect to an on-premise endpoint:

AdvancedAgentConfig-003

To update this Java option, open the $AGENT_DOMAIN/bin/setDomainEnv.sh and search for nonProxyHosts. Then add the appropriate host names to the list. For example:

Before

export JAVA_PROPERTIES="${JAVA_PROPERTIES} -Dhttp.nonProxyHosts=localhost|127.0.0.1 -Dweblogic.security.SSL.ignoreHostnameVerification=true -Djavax.net.ssl.trustStoreType=kss -Djavax.net.ssl.trustStore=kss://system/trust"

After

export JAVA_PROPERTIES="${JAVA_PROPERTIES} -Dhttp.nonProxyHosts=localhost|127.0.0.1|*.oracle.com -Dweblogic.security.SSL.ignoreHostnameVerification=true -Djavax.net.ssl.trustStoreType=kss -Djavax.net.ssl.trustStore=kss://system/trust"

Once this update has been done, you will need to restart your agent server for the update to be picked up.

Add Agent URIs to Network White List

The Connectivity Agent contains two URIs that it will reach out to. The primary one is Oracle Message Cloud Service (OMCS), which is how ICS communicates to the on-premise agent. The other one is for things like agent upgrades. These two URIs must be added to the network white list or the agent will not be able to receive requests from ICS. The URIs are located in the following Connectivity Agent file:

$AGENT_DOMAIN/agent/config/CpiAgent.properties

The contents of this file will look something like the following (with the URIs circled):

AdvancedAgentConfig-001

Summary

Please follow the official on-line documentation for the ICS Connectivity Agent install. If you run into things like handshake errors when the agent starts or attempts to connect to an on-premise resource, the aforementioned will be a good starting point to resolve the issue. This blog most likely does not cover all edge cases, so if you encounter something outside of what is covered here I would like to hear about it.

Integrating Commerce Cloud using ICS and WebHooks


Introduction:

Oracle Commerce Cloud is a SaaS application and is a part of the comprehensive CX suite of applications. It is the most extensible, cloud-based ecommerce platform offering retailers the flexibility and agility needed to get to market faster and deliver desired user experiences across any device.

Oracle’s iPaaS solution is the most comprehensive cloud-based integration platform in the market today. Integration Cloud Service (ICS) gives customers an elevated user experience that makes complex integration simple to implement.

Commerce Cloud provides various webhooks for integration with other products. A webhook sends a JSON notification to URLs you specify each time an event occurs. External systems can implement the Oracle Commerce Cloud Service API to process the results of a webhook callback request. For example, you can configure the Order Submit webhook to send a notification to your order management system every time a shopper successfully submits an order.

In this article, we will explore how ICS can be used for such integrations. We will use the Abandoned Cart Web Hook which is triggered when a customer leaves the shopping cart idle for a specified period of time. We will use ICS to subscribe to this Web Hook.

ICS provides pre-defined adapters, an easy-to-use visual mechanism for transforming and mapping data, and a fan-out mechanism to send data to multiple endpoints. It also provides the ability to orchestrate and enrich the payload.

Main Article:

For the purpose of this example, we will create a task in Oracle Sales Cloud (OSC), when the Idle Cart Web Hook is triggered.

The high level steps for creating this integration are:

  1. Register an application in Commerce Cloud
  2. Create a connection to Commerce Cloud in ICS
  3. Create a connection to Sales Cloud in ICS
  4. Create an integration using the 2 newly created connections
  5. Activate the integration and register its endpoint with Abandoned Cart Web Hook

Now let us go over each of these steps in detail

 

Register an application in Commerce Cloud

Login to Admin UI of commerce cloud. Click on Settings

01_CCDashBoard


Click on Web APIs

02_CCSettings


Click on Registered Applications

03_CCWebAPIs


Click on Register Application

04_CCWebAPIsRegisteredApps


Provide a name for the application and click Save

05_CCNewApp


A new application is registered and a unique application id and key is created. Click on Click to reveal to view the application key

06_CCNewAppKey1


Copy the application key that is revealed. This will later be provided while configuring connection to Commerce Cloud in ICS

07_CCNewAppKey2


You can see the new application is displayed in the list of Registered Applications

08_CCWebAPIsRegisteredApps2


Create a connection to Commerce Cloud in ICS

From the ICS Dashboard, click Connections to get to the connections section

01_ICSDashboard


Click Create New Connection

02_Connections


Create Connection – Select Adapter page is displayed. This page lists all the available adapters

03_ICSCreateNewConn


Search for Oracle Commerce Cloud and click Select

04_ICSNewConnCC


Provide a connection name and click Create

05_ICSNewConnCCName


ICS displays the message that connection was created successfully. Click Configure Connectivity

06_ICSNewConnCCCreated


Provide the Connection base URL. It is of the format https://<site_hostname>:443/ccadmin/v1. Click OK

07_ICSNewConnCCURL


Click Configure Security

08_ICSNewConnCCConfigureSecurity


Provide the Security Token. This is the value we copied after registering the application in Commerce Cloud. Click OK

09_ICSNewConnCCOAuthCreds


The final step is to test the connection. Click Test

10_ICSNewConnCCTest


ICS displays the message, if connection test is successful. Click Save

11_ICSNewConnCCTestResult


Create a connection to Sales Cloud in ICS

For details about this step and optionally how to use Sales Cloud Events with ICS, review this article

Create an integration using the 2 newly created connections

From the ICS Dashboard, click Integrations to get to the integrations area

01_Home


Click Create New Integration

02_CreateIntegration


Under Basic Map My Data, click Select

03_Pattern


Provide a name for the integration and click Create

04_Name


Drag the newly created Commerce Cloud connection from the right, to the trigger area on the left

05_SourceConn


Provide a name for the endpoint and click Next

06_EP1


Here you can choose the various business objects that are exposed by the Commerce Cloud adapter. For the purpose of this integration, choose idleCart and click Next

07_IdleCartEvent


Review the endpoint summary page and click Done

08_EP1ConfigSummary


Similarly, drag and drop a Sales Cloud connection to the Invoke

09_TargetConn


Provide a name for the endpoint and click Next

10_EP2Name


Choose the ActivityService and the createActivity operation and click Next

11_CreateActivity


Review the summary and click Done

12_EP2Summary


Click the icon to create a map and click the “+” icon

This opens the mapping editor. You can create the mapping as desired. For the purpose of this article, a very simple mapping was created:

ActivityFunctionCode was assigned a fixed value of TASK. Subject was mapped to orderId from idleCart event.

22_ICSCreateIntegration


Add tracking fields to the integration and save the integration

25_ICSCreateIntegration


Activate the integration and register its endpoint with Abandoned Cart Web Hook

In the main integrations page, against the newly created integration, click Activate

26_ICSCreateIntegration


Optionally, check the box to enable tracing and click Yes

27_ICSCreateIntegration


ICS displays the message that the activation was successful. You can see the status as Active.

28_ICSCreateIntegration


Click the information icon for the newly activated integration. This displays the endpoint URL for this integration. Copy the URL. Remove the “/metadata” at the end of the URL. This URL will be provided in the Web Hook configuration of Commerce Cloud.

29_ICSCreateIntegration


In the Commerce Cloud admin UI, navigate to Settings -> Web APIs -> Webhook tab -> Event APIs -> Cart Idle – Production. Paste the URL and provide the ICS credentials for Basic Authorization

Webhook
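Before waiting for a real abandoned cart, you may want to confirm that the endpoint accepts authenticated requests. The following sketch is only a connectivity check, not the real webhook payload; the credentials and the JSON body are placeholders (orderId is simply the field mapped in the integration above):

# simulate a POST to the activated integration endpoint using Basic Authorization
curl -u <ics-user>:<ics-password> -H "Content-Type: application/json" \
     -d '{"orderId": "TEST-ORDER-1"}' \
     "<integration endpoint URL copied earlier, without /metadata>"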


By default, Abandoned cart event fires after 20 minutes. This and other settings can be modified. Navigate to Settings -> Extension Settings -> Abandoned Cart Settings. You can now configure the minutes until the webhook is fired. For testing, you can set it to a low value.

 

CCAbandonedCartSettings


This completes all the steps required for this integration. Now every time a customer adds items to a cart and leaves it idle for the specified time, this integration will create a task in OSC.

 

References / Further Reading:

Using Commerce Cloud Web Hooks

Using Event Handling Framework for Outbound Integration of Oracle Sales Cloud using Integration Cloud Service

Best Practices – Data movement between Oracle Storage Cloud Service and HDFS


Introduction

Oracle Storage Cloud Service should be the central place for persisting raw data produced by other PaaS services and also the entry point for data that is uploaded from the customer’s data center. Big Data Cloud Service (BDCS) supports data transfers between Oracle Storage Cloud Service and HDFS. Both Hadoop and Oracle provide various tools and Oracle-engineered solutions for the data movement. This document outlines these tools and describes the best practices to improve data transfer usability between Oracle Storage Cloud Service and HDFS.

Main Article

Architectural Overview

 

new_oss_architecture

Interfaces to Oracle Storage Cloud Service

 

Interface – Resource

odcp – Accessing Oracle Storage Cloud Service Using Oracle Distributed Copy

Distcp – Accessing Oracle Storage Cloud Service Using Hadoop Distcp

Upload CLI – Accessing Oracle Storage Cloud Service Using the Upload CLI Tool

Hadoop fs -cp – Accessing Oracle Storage Cloud Service Using the Hadoop File System shell copy

Oracle Storage Cloud Software Appliance – Accessing Oracle Storage Cloud Service Using Oracle Storage Cloud Software Appliance

Application Programming platform – Java Library: Accessing Oracle Storage Cloud Service Using Java Library; File Transfer Manager API: Accessing Oracle Storage Cloud Service Using File Transfer Manager API; REST API: Accessing Oracle Storage Cloud Service Using REST API

 

Oracle Distributed Copy (odcp)

Oracle Distributed Copy (odcp) is a tool for copying very large data files in a distributed environment between HDFS and an Oracle Storage Cloud Service.

  • How does it work?

odcp tool has two main components.

(a) odcp launcher script

(b) conductor application

The odcp launcher script is a bash script that serves as a launcher for the Spark application, providing a fully parallel transfer of files.

The Conductor application is an Apache Spark application that copies large files between HDFS and Oracle Storage Cloud Service.

For end users it is recommended to use the odcp launcher script. The launcher script simplifies the usage of the Conductor application by encapsulating the environment variable setup for Hadoop/Java, the spark-submit parameter setup, the invocation of the Spark application, and so on. Using the Conductor application directly is the ideal approach when performing data movement from within a Spark application.

blog3

odcp takes the given input file (source) and splits it into smaller file chunks. Each input chunk is then transferred by one executor over the network to the destination store.

basic-flow

When all chunks are successfully transferred, executors take output chunks and merge them into final output files.

flow

  • Examples

Oracle Storage Cloud Service is based on Swift, the open-source OpenStack Object Store. The data stored in Swift can be used as the direct input to a MapReduce job by simply using a "swift://<URL>" to declare the source of the data. In a Swift file system URL, the hostname part of the URL identifies the container and the service to work with; the path identifies the name of the object.

Swift syntax:

swift://<MyContainer.MyProvider>/<filename>

odcp launcher script

Copy file from HDFS to Oracle Storage Cloud Service

odcp hdfs:///user/oracle/data.raw swift://myContainer.myProvider/data.raw

Copy file from Oracle Storage Cloud Service to HDFS:

odcp swift://myContainer.myProvider/data.raw hdfs:///user/oracle/odcp-data.raw

Copy directory from HDFS to Oracle Storage Cloud Service:

odcp hdfs:///user/data/ swift://myContainer.myProvider/backup

In case the system has more than 3 nodes, transfer speed can be increased by specifying a higher number of executors. For 6 nodes, use the following command:

odcp --num-executors=6 hdfs:///user/oracle/data.raw swift://myContainer.myProvider/data.raw

 

Highlights of the odcp launcher script options:
--executor-cores: the number of executor cores. This specifies the thread count (which depends on the available vCPUs) and allows the copy to run in parallel based on that thread count. The default value is 30.
--num-executors: the number of executors. This is typically the same as the number of physical nodes/VMs. The default value is 3.

 

Conductor application

Usage: Conductor [options] <source URI...> <destination URI>
<source URI...> <destination URI>
source/destination file(s) URI, examples:
hdfs://[HOST[:PORT]]/<path>
swift://<container>.<provider>/<path>
file:///<path>
-i <value> | --fsSwiftImpl <value>
swift file system configuration. Default taken from etc/hadoop/core-site.xml (fs.swift.impl)
-u <value> | --swiftUsername <value>
swift username. Default taken from etc/hadoop/core-site.xml fs.swift.service.<PROVIDER>.username)
-p <value> | --swiftPassword <value>
swift password. Default taken from etc/hadoop/core-site.xml (fs.swift.service.<PROVIDER>.password)
-i <value> | --swiftIdentityDomain <value>
swift password. Default taken from etc/hadoop/core-site.xml (fs.swift.service.<PROVIDER>.tenant)
-a <value> | --swiftAuthUrl <value>
swift auth URL. Default taken from etc/hadoop/core-site.xml (fs.swift.service.<PROVIDER>.auth.url)
-P <value> | --swiftPublic <value>
indicates if all URLs are public - yes/no (default yes). Default taken from etc/hadoop/core-site.xml (fs.swift.service.<PROVIDER>.public)
-r <value> | --swiftRegion <value>
swift Keystone region
-b <value> | --blockSize <value>
destination file block size (default 268435456 B), NOTE: remainder after division of partSize by blockSize must be equal to zero
-s <value> | --partSize <value>
destination file part size (default 1073741824 B), NOTE: remainder after division of partSize by blockSize must be equal to zero
-e <value> | --srcPattern <value>
copies file when their names match given regular expression pattern, NOTE: ignored when used with --groupBy
-g <value> | --groupBy <value>
concatenate files when their names match given regular expression pattern
-n <value> | --groupName <value>
group name (use only with --groupBy), NOTE: slashes are not allowed
--help
display this help and exit

 

One can submit the Conductor application directly to a Spark deployment environment. Below is an example of how to submit it with spark-submit.

spark-submit
--conf spark.yarn.executor.memoryOverhead=600
--jars hadoop-openstack-spoc-2.7.2.jar,scopt_2.10-3.4.0.jar
--class oracle.paas.bdcs.conductor.Conductor
--master yarn
--deploy-mode client
--executor-cores <number of executor cores e.g. 5>
--executor-memory <memory size e.g. 40G>
--driver-memory <driver memory size e.g. 10G>
original-conductor-1.0-SNAPSHOT.jar
--swiftUsername <oracle username@oracle.com>
--swiftPassword <password>
--swiftIdentityDomain <storage ID assigned to this user>
--swiftAuthUrl https://<Storage cloud domain name e.g. storage.us2.oraclecloud.com:443>/auth/v2.0/tokens
--swiftPublic true
--fsSwiftImpl org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem
--blockSize <block size e.g. 536870912>
swift://<container.provider e.g. rstrejc.a424392>/someDirectory
swift://<container.provider e.g. rstrejc.a424392>/someFile
hdfs:///user/oracle/

  • Limitations

odcp consumes a lot of cluster resources. While running other Spark/MapReduce jobs in parallel with odcp, one needs to adjust the number of executors, the amount of memory available to the executors, or the number of executor cores using the --num-executors, --executor-memory and --executor-cores options for better performance.
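As a sketch of such tuning, the options can be passed directly to the launcher script (the values are illustrative and should be sized to the cluster and the concurrent workload):

odcp --num-executors=4 --executor-cores=10 --executor-memory=16G hdfs:///user/oracle/data.raw swift://myContainer.myProvider/data.raw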

 

Distcp

Distcp (distributed copy) is a Hadoop utility used for inter/intra-cluster copying of large amounts of data in parallel. The Distcp command submits a regular MapReduce job that performs a file-by-file copy.

  • How does it work?

Distcp involves two steps:

(a) Building the list of files to copy (known as the copy list)

(b) Running a MapReduce job to copy files, with the copy list as input

distcp

The MapReduce job that does the copying has only mappers—each mapper copies a subset of files in the copy list. By default, the copy list is a complete list of all files in the source directory parameters of Distcp.

 

  • Examples

 

Copying data from HDFS to Oracle Storage Cloud Service syntax:

hadoop distcp hdfs://<hadoop namenode>/<source filename> swift://<MyContainer.MyProvider>/<destination filename>

Allocation of JVM heap-size:   

export HADOOP_CLIENT_OPTS="-Xms<start heap memory size> -Xmx<max heap memory size>"

Setting timeout syntax:

hadoop distcp -Dmapred.task.timeout=<time in milliseconds> hdfs://<hadoop namenode>/<source filename> swift://<MyContainer.MyProvider>/<destination filename>

Hadoop getmerge syntax:

bin/hadoop fs -getmerge [nl] <source directory> <destination directory>/<output filename>

The Hadoop getmerge command takes a source directory and a destination file as input and concatenates the source files into the destination local file. The -nl option can be set to add a newline character at the end of each file.
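The same merge can also be done programmatically with the Hadoop Java API. The sketch below is a minimal, non-authoritative example assuming Hadoop 2.x, where FileUtil.copyMerge is still available; the paths are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class MergeExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path srcDir = new Path("hdfs:///user/oracle/someDirectory"); // directory containing the part files
        Path dstFile = new Path("hdfs:///user/oracle/merged.out");   // single concatenated output file
        FileSystem srcFs = srcDir.getFileSystem(conf);
        FileSystem dstFs = dstFile.getFileSystem(conf);
        // copyMerge concatenates every file under srcDir into dstFile;
        // the last argument is appended after each file, similar to getmerge -nl
        boolean ok = FileUtil.copyMerge(srcFs, srcDir, dstFs, dstFile, false, conf, "\n");
        System.out.println("Merge " + (ok ? "succeeded" : "failed"));
    }
}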

 

  • Limitations

For a large file copy, one has to make sure that the task has a termination strategy in case the task doesn't read an input, write an output, or update its status string. The option -Dmapred.task.timeout=<time in milliseconds> can be used to set the maximum timeout value. For a 1 TB file, use -Dmapred.task.timeout=60000000 (approximately 16 hours) with the Distcp command.

Distcp might run out of memory while copying very large files. To get around this, consider changing the -Xmx JVM heap-size parameter before executing the hadoop distcp command. This value must be a multiple of 1024.

To improve the transfer speed of a very large file, split the file at the source and copy the split files to the destination. Once the files are successfully transferred, Hadoop performs a merge operation at the destination end.

Upload CLI

 

  • How does it work?

The Upload CLI tool is a cross-platform, Java-based command line tool that you can use to efficiently upload files to Oracle Storage Cloud Service. The tool optimizes uploads through segmentation and parallelization to maximize network efficiency and reduce overall upload time. If the system is interrupted during a large file transfer, the Upload CLI tool maintains state and resumes from the point where the transfer was interrupted. The tool also retries automatically on failures.

  • Example:

Syntax of upload CLI:

java -jar uploadcli.jar -url REST_Endpoint_URL -user userName -container containerName file-or-files-or-directory

To upload a file named file.txt to a standard container myContainer in the domain myIdentityDomain as the user abc.xyz@oracle.com, run the following command:

java -jar uploadcli.jar -url https://foo.storage.oraclecloud.com/myIdentityDomain-myServiceName -user abc.xyz@oracle.com -container myContainer file.txt

When running the Upload CLI tool on a host that’s behind a proxy server, specify the host name and port of the proxy server by using the https.proxyHost and https.proxyPort Java parameters.

 

Syntax of upload CLI behind proxy server:

java -Dhttps.proxyHost=host -Dhttps.proxyPort=port -jar uploadcli.jar -url REST_Endpoint_URL -user userName -container containerName file-or-files-or-directory

  • Limitations

Upload CLI is a Java tool and will only run on hosts that satisfy the prerequisites of the uploadcli tool.

 

Hadoop fs -cp

 

  • How does it work?

Hadoop fs -cp is one of a family of Hadoop file system shell commands that can be run from the source operating system's command line interface. Hadoop fs -cp is not distributed across the cluster; the command transfers data byte by byte from the machine where it is issued.

  • Example

hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2
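The equivalent copy from HDFS to a Swift container can be sketched with the Hadoop Java API. This is a minimal, non-authoritative sketch: the fs.swift.* property names follow the standard hadoop-openstack driver conventions and the container/provider names reuse the placeholders from the odcp example above, so verify them against your cluster's core-site.xml.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class HdfsToSwiftCopy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Swift driver settings, normally taken from core-site.xml; values here are placeholders.
        // Depending on the driver build, a tenant/identity-domain property may also be required.
        conf.set("fs.swift.impl", "org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem");
        conf.set("fs.swift.service.a424392.auth.url", "https://storage.us2.oraclecloud.com:443/auth/v2.0/tokens");
        conf.set("fs.swift.service.a424392.username", "user@example.com");
        conf.set("fs.swift.service.a424392.password", "password");
        conf.setBoolean("fs.swift.service.a424392.public", true);

        Path src = new Path("hdfs:///user/hadoop/file1");
        Path dst = new Path("swift://rstrejc.a424392/file1");
        FileSystem srcFs = src.getFileSystem(conf);
        FileSystem dstFs = dst.getFileSystem(conf);
        // Like hadoop fs -cp, this is a single-threaded, byte-by-byte copy
        // performed by the machine running this program.
        FileUtil.copy(srcFs, src, dstFs, dst, false, conf);
    }
}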

 

  • Limitations

The byte-by-byte transfer takes a very long time to copy a large file from HDFS to Oracle Storage Cloud Service.

 

Oracle Storage Cloud Software Appliance

 

  • How does it work?

Oracle Storage Cloud Software Appliance is a product that facilitates easy, secure, and reliable data storage and retrieval from Oracle Storage Cloud Service. Businesses can use Oracle Cloud Storage without changing their data center applications and workflows. Applications that use a standard file-based network protocol such as NFS to store and retrieve data can use Oracle Storage Cloud Software Appliance as a bridge between the object storage used by Oracle Storage Cloud Service and standard file storage. Oracle Storage Cloud Software Appliance caches frequently retrieved data on the local host, minimizing the number of REST API calls to Oracle Storage Cloud Service and enabling low-latency, high-throughput file I/O.

The application host instance can mount a directory from the Oracle Storage Cloud Software Appliance, which acts as a cloud storage gateway. This enables the application host instance to access an Oracle Cloud Storage container as a standard NFS file system.
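For example, once an appliance share is mounted on the application host (the mount point below is a hypothetical path), the application writes to the cloud container with ordinary file I/O and without any REST calls in its own code:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ApplianceWriteExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical NFS mount point backed by an Oracle Storage Cloud Service container
        Path mountedDir = Paths.get("/mnt/oscsa/mycontainer");
        Path target = mountedDir.resolve("backup/archive-2016-08.csv");
        Files.createDirectories(target.getParent());
        // A plain file write; the appliance caches the data locally and
        // synchronizes it to the cloud container as an object.
        Files.write(target, "id,amount\n1,100\n".getBytes(StandardCharsets.UTF_8));
        System.out.println("Wrote " + Files.size(target) + " bytes to " + target);
    }
}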

 

Architecture

blog2

 

  • Limitations

The appliance is ideal for backup and archive use cases that require the replication of infrequently accessed data to cloud containers. Read-only and read-dominated content repositories are ideal targets. Once an Oracle Storage Cloud Service container is mapped to a filesystem in Oracle Storage Cloud Software Appliance, other data movement tools such as the REST API, odcp, Distcp, or the Java library can't be used for that container. Doing so would cause the data in the appliance to become inconsistent with the data in Oracle Storage Cloud Service.

 

Application Programming Platform

Oracle provides various Java library APIs to access Oracle Storage Cloud Service. The following table summarizes the interfaces one can use programmatically to access Oracle Storage Cloud Service.

Interface                  | Description
Java library               | Accessing Oracle Storage Cloud Service Using Java Library
File Transfer Manager API  | Accessing Oracle Storage Cloud Service Using File Transfer Manager API
REST API                   | Accessing Oracle Storage Cloud Service Using REST API


Java Library  

 

  • How does it work?

The Java library is useful for Java applications that prefer to use the Oracle Cloud Java API for Oracle Storage Cloud Service instead of the tools provided by Oracle and Hadoop. The Java library wraps the RESTful web service API; most of the major RESTful API features of Oracle Storage Cloud Service are available through it. The Java library is available via the separate Oracle Cloud Service Java SDK.

 

java library

  • Example

Sample Code snippet

package storageupload;

import oracle.cloud.storage.*;
import oracle.cloud.storage.model.*;
import oracle.cloud.storage.exception.*;
import java.io.*;
import java.util.*;
import java.net.*;

public class UploadingSegmentedObjects {
    public static void main(String[] args) {
        try {
            // Configure the connection to the Oracle Storage Cloud Service instance
            CloudStorageConfig myConfig = new CloudStorageConfig();
            myConfig.setServiceName("Storage-usoracleXXXXX")
                    .setUsername("xxxxxxxxx@yyyyyyyyy.com")
                    .setPassword("xxxxxxxxxxxxxxxxx".toCharArray())
                    .setServiceUrl("https://xxxxxx.yyyy.oraclecloud.com");
            CloudStorage myConnection = CloudStorageFactory.getStorage(myConfig);
            System.out.println("\nConnected!!\n");

            // Create a container if none exists yet
            if (myConnection.listContainers().isEmpty()) {
                myConnection.createContainer("myContainer");
            }

            // Store the same local file under three different object names
            FileInputStream fis = new FileInputStream("C:\\temp\\hello.txt");
            myConnection.storeObject("myContainer", "C:\\temp\\hello.txt", "text/plain", fis);
            fis = new FileInputStream("C:\\temp\\hello.txt");
            myConnection.storeObject("myContainer", "C:\\temp\\hello1.txt", "text/plain", fis);
            fis = new FileInputStream("C:\\temp\\hello.txt");
            myConnection.storeObject("myContainer", "C:\\temp\\hello2.txt", "text/plain", fis);

            // List the keys (object names) now present in the container
            List<Key> myList = myConnection.listObjects("myContainer", null);
            Iterator<Key> it = myList.iterator();
            while (it.hasNext()) {
                System.out.println(it.next().getKey().toString());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

 

  • Limitations

The Java API cannot create an Oracle Storage Cloud Service archive container. An appropriate JRE version is required for the Java library.

 

File Transfer Manager API

 

  • How does it Work?

The File Transfer Manager (FTM) API is a Java library that simplifies uploading to and downloading from Oracle Storage Cloud Service. It provides both synchronous and asynchronous APIs to transfer files, and a way to track operations when the asynchronous version is used. The library is available via the separate Oracle Cloud Service Java SDK.

 

  • Example

Uploading a Single File Sample Code snippet

FileTransferAuth auth = new FileTransferAuth(
        "email@oracle.com",                    // user name
        "xxxxxx",                              // password
        "yyyyyy",                              // service name
        "https://xxxxx.yyyyy.oraclecloud.com", // service URL
        "xxxxxx"                               // identity domain
);
FileTransferManager manager = null;
try {
    manager = FileTransferManager.getDefaultFileTransferManager(auth);
    String containerName = "mycontainer";
    String objectName = "foo.txt";
    File file = new File("/tmp/foo.txt");
    UploadConfig uploadConfig = new UploadConfig();
    uploadConfig.setOverwrite(true);
    uploadConfig.setStorageClass(CloudStorageClass.Standard);
    System.out.println("Uploading file " + file.getName() + " to container " + containerName);
    TransferResult uploadResult = manager.upload(uploadConfig, containerName, objectName, file);
    System.out.println("Upload completed successfully.");
    System.out.println("Upload result:" + uploadResult.toString());
} catch (ClientException ce) {
    System.out.println("Upload failed. " + ce.getMessage());
} finally {
    if (manager != null) {
        manager.shutdown();
    }
}

 

REST API

 

  • How does it work?

The REST API can be accessed from any application or programming platform that correctly and completely understands the Hypertext Transfer Protocol (HTTP). The REST API uses advanced facets of HTTP such as secure communication over HTTPS, HTTP headers, and specialized HTTP verbs (PUT, DELETE). cURL is one of the many applications that meet these requirements.

 

  • Example

cURL syntax:

curl -v -s -X PUT -H "X-Auth-Token: <Authorization Token ID>" "https://<Oracle Cloud Storage domain name>/v1/<storage ID associated to user account>/<container name>"
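The same style of authenticated PUT can be issued from Java with the standard HttpURLConnection class. The sketch below uploads a small object; the endpoint, token, container, and object names are placeholders.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RestPutExample {
    public static void main(String[] args) throws Exception {
        // Placeholder values: use the X-Storage-Url and X-Auth-Token returned by the auth request
        String endpoint = "https://foo.storage.oraclecloud.com/v1/Storage-myIdentityDomain/myContainer/myObject";
        String authToken = "<Authorization Token ID>";

        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("X-Auth-Token", authToken);
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write("hello object storage".getBytes(StandardCharsets.UTF_8));
        }
        // HTTP 201 Created indicates the object was stored successfully
        System.out.println("Response: " + conn.getResponseCode() + " " + conn.getResponseMessage());
        conn.disconnect();
    }
}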

 

Some Data Transfer Test results

The configuration used to measure performance and data transfer rates is as follows:

Test environment configuration:

- BDCS 16.2.5
- Hadoop Swift driver 2.7.2
- US2 production data center
- 3-node cluster running on BDA
- Each node has 256 GB memory / 30 vCPUs
- File size: 1 TB (terabyte)
- File contains all zeros

# | Interface | Source | Destination | Time | Comment
1 | odcp | HDFS | Oracle Storage Cloud Service | 54 minutes | Transfer rate: 2.47 GB/sec (1.11 TB/hour)
2 | hadoop Distcp | Oracle Storage Cloud Service | HDFS | Failed | Not enough memory (after 1 hour)
3 | hadoop Distcp | HDFS | Oracle Storage Cloud Service | Failed |
4 | hadoop Distcp | HDFS | Oracle Storage Cloud Service | 3 hours | 1 TB file split into 50 files of 10 GB each; each 10 GB file took 18 minutes (partition size 256 MB)
5 | Upload CLI | HDFS | Oracle Storage Cloud Service | 5 hours 55 minutes | Data was read from Big Data Cloud Service HDFS mounted using fuse_dfs
6 | hadoop fs -cp | HDFS | Oracle Storage Cloud Service | 11 hours 50 minutes 50 seconds | Parallelism 1; transfer rate: 250 Mb/sec

 

Summary

One can draw the following conclusions from the above analysis.

Data file size and data transfer time are the two main factors in deciding the appropriate interface for data movement between HDFS and Oracle Storage Cloud Service.

There is no additional overhead of data manipulation and processing when using the odcp interface.

Publishing business events from Supply Chain Cloud’s Order Management through Integration Cloud Service


Introduction

In Supply Chain Cloud (SCM) Order Management, as a sales order's state changes or it becomes ready for fulfillment, events can be generated for external systems. Integration Cloud Service offers Pub/Sub capabilities that can be used to reliably integrate SaaS applications. In this post, let's take a close look at these capabilities in order to capture Order Management events for fulfillment and other purposes. Instructions provided in this post are applicable to SCM Cloud R11 and ICS R16.4.1.

Main Article

SCM Cloud Order Management allows registering endpoints of external systems and assignment of these endpoints to various business events generated during order orchestration. For more information on business event features and order orchestration in general, refer to R11 document at this link. ICS is Oracle’s enterprise-grade iPaaS offering, with adapters for Oracle SaaS and other SaaS applications and native adapters that allow connectivity to all SaaS and on-premise applications. To learn more about ICS, refer to documentation at this link. Figure 1 provides an overview of the solution described in this post.

000

Figure 1 – Overview of the solution

Implementation of the solution requires the following high-level tasks.

  • Download WSDL for business events from SCM cloud.
  • Implement an ICS ‘Basic Publish to ICS’ integration with trigger defined using WSDL downloaded in previous step.
  • Optionally, implement one or more ICS ‘Basic Subscribe to ICS’ integrations for external systems that desire event notification.
  • Configure SCM Cloud to generate events to the ‘Basic Publish to ICS’ endpoint.
  • Verify generation of Business Events.

For the solution to work, network connectivity between SCM Cloud and ICS and ICS to External systems, including any on-premise systems, must be enabled. ICS agents can easily enable connectivity to on-premise systems.

Downloading WSDL for business events

Order Management provides two WSDL definitions for integration with external systems: one for fulfillment systems and another for other external systems that wish to receive business events. One example of using business events is the generation of invoices by an ERP system upon fulfillment of an order. For the solution described in this post, a Business Event Connector is implemented. To download the WSDLs, follow these steps.

  • Log into SCM Cloud instance.
  • Navigate to ‘Setup and maintenance’ page, by clicking the drop-down next to username on top right of the page.
  • In the search box of ‘Setup and maintenance’ page, type in ‘Manage External Interface Web Service Details’ and click on search button or hit enter.
  • Click on ‘Manage External Interface Web Service Details’ task in search results.
  • On ‘Manage External Interface Web Service Details’ page, click on ‘Download WSDL for external integration’.
  • Two download options are provided as shown in Figure 2.
  • Download ‘Business Event Connector’.

001

Figure 2 – Download Business Event connector WSDL.

Implementing an ICS ‘Basic Publish to ICS’ integration

ICS allows publishing of events through an ICS trigger endpoint. Events published to ICS can be forwarded to one or more registered subscribers. For this solution, the business event connector WSDL downloaded in the previous section is configured as a trigger connection for the ‘Basic Publish to ICS’ integration. These are the overall tasks to build the integration:

  • Create a connection and configure WSDL and security.
  • Create new integration using the previously created connection as trigger and ‘ICS Messaging Service’ as invoke.
  • Activate the Integration and test.

Follow these instructions to configure the integration:

  • Navigate to ‘Designer’ tab and click ‘Connections’ from menu on left.
  • Click on ‘New Connection’. Enter values for required fields.

002

  • Upload the WSDL file previously downloaded from SCM Cloud.

004

  • Configure security by selecting ‘Username Password Token’ as the security policy. Note that the username and password entered on this page are irrelevant for a trigger connection. Since a trigger connection is used to initiate an integration in ICS, an ICS username and password must be provided in the SCM configuration.

005

  • Save the connection and test. Connection is ready for use in integration.
  • Navigate to “Integrations” page. Click “New Integration” to create a new integration.
  • Select “Basic Publish to ICS” pattern for new integration.

006

  • On the integration editor, a “Publish to ICS” flow is displayed. On the left of the flow is the trigger, the entry point into the flow. Drag the connection created previously onto the trigger.

007

 

  • Configure the trigger. The steps are straightforward, as shown in following screenshots.

008

  • Configure SOAP Operation.

009

  • Click ‘Done’ on summary page.

010

  • Drag and drop ‘ICS Messaging Service’ to the right of the integration flow. No mappings are necessary for this integration pattern.
  • Add a business identifier for tracking and save the integration.

011

  • Add a field that could help uniquely identify the message.

012

  • Activate the integration, by clicking on slider button as shown.

013

  • Note the URL of the integration, by clicking on the info icon. This URL will be used by SCM Cloud as an external web service endpoint.

014

ICS integration to receive business events from SCM Cloud is ready for use.

Implementing an ICS ‘Subscribe to ICS’ integration

Subscribing to events published to ICS can be done in a few simple steps. Events can be sent to a target connection, for example a DB connection or a web service endpoint. Here are the steps to receive events in a web service.

  • Ensure that there is a “Basic Publish to ICS” integration activated and an Invoke connection to receive events is active.
  • Create a new integration in ICS and pick “Basic Subscribe to ICS” pattern. Enter a name and description for the integration.
  • ICS prompts to select one of available “Basic Publish to ICS” integrations. Select an integration and click on “Use”.

015

  • The integration editor shows a flow with “ICS Messaging Service” as the trigger on the left. Drag the Invoke connection to the right of the flow. The following screenshot shows how to define a REST connection for the invoke. ICS displays several screens to configure the connection; the steps depend on the type of connection that receives the events.

016

  • Complete request and response mappings.
  • Add a tracking field, save and activate the integration. It is now ready to receive events.

Configure SCM Cloud to generate business events

The final task is to configure SCM Cloud to trigger Business Events. Follow these instructions:

 

  • Log into SCM and navigate to Setup and Maintenance.
  • Search for “Manage External Interface Web Service Details”.
  • Click on “Manage External Interface Web Service Details”.

SCM-config-001

  • Add an entry for the external interface web service. Use the endpoint of the “Basic Publish to ICS” integration. Enter ICS credentials as the username and password.

SCM-config-002

  • Search for “Manage Business Event Trigger Points” and click on result.
  • Let's select “Hold” as the trigger for the business event.
  • Click the “Active” checkbox next to “Hold”.
  • Click on the hold and add a connector under “Associated Connectors”.
  • Under “Associated Connectors”, “Actions”, select “Add Row”.
  • Select the “SCM_BusinessEvent” external web service added in previous steps.

SCM-config-004

  • Save the configuration and close.
  • SCM Cloud is now configured to send business events.

Verify generation of Business Events

The solution is ready for testing. SCM Cloud and the “Basic Publish to ICS” integration are sufficient to test the solution. If an ICS subscription flow is implemented, ensure that the event has been received in the target system as well.

 

  • Navigate to “Order Management” work area in SCM Cloud.

Test001

  • Select a sales order and apply hold.

Test002

  • Log into ICS and navigate to “Monitoring” and then to “Tracking” page.
  • Verify that the event has been received under “Tracking”.

Test003

ICS has received a SOAP message from Order Management similar to this one:

<Body xmlns="http://schemas.xmlsoap.org/soap/envelope/">
    <results xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/DooDecompositionOrderStatusUpdateComposite" xmlns:ns4="http://xmlns.oracle.com/apps/scm/doo/decomposition/DooDecompositionOrderStatusUpdateComposite">
        <ns4:OrderHeader>
            <ns4:EventCode>HOLD</ns4:EventCode>
            <ns4:SourceOrderSystem>OPS</ns4:SourceOrderSystem>
            <ns4:SourceOrderId>300000011154333</ns4:SourceOrderId>
            <ns4:SourceOrderNumber>39050</ns4:SourceOrderNumber>
            <ns4:OrchestrationOrderNumber>39050</ns4:OrchestrationOrderNumber>
            <ns4:OrchestrationOrderId>300000011154333</ns4:OrchestrationOrderId>
            <ns4:CustomerId xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/>
            <ns4:OrderLine>
                <ns4:OrchestrationOrderLineId>300000011154334</ns4:OrchestrationOrderLineId>
                <ns4:OrchestrationOrderLineNumber>1</ns4:OrchestrationOrderLineNumber>
                <ns4:SourceOrderLineId>300000011154334</ns4:SourceOrderLineId>
                <ns4:SourceOrderLineNumber>1</ns4:SourceOrderLineNumber>
                <ns4:OrderFulfillmentLine>
                    <ns4:SourceOrderScheduleId>1</ns4:SourceOrderScheduleId>
                    <ns4:FulfillmentOrderLineId>300000011154335</ns4:FulfillmentOrderLineId>
                    <ns4:FulfillmentOrderLineNumber>1</ns4:FulfillmentOrderLineNumber>
                    <ns4:HoldCode>TD_OM_HOLD</ns4:HoldCode>
                    <ns4:HoldComments>Mani test hold </ns4:HoldComments>
                    <ns4:ItemId>300000001590006</ns4:ItemId>
                    <ns4:InventoryOrganizationId>300000001548399</ns4:InventoryOrganizationId>
                </ns4:OrderFulfillmentLine>
            </ns4:OrderLine>
        </ns4:OrderHeader>
    </results>
</Body>
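A subscriber that receives this payload could pull out the key fields with standard DOM parsing. The sketch below is a minimal example; it assumes the <Body> payload shown above has been saved to a local file (the path is a placeholder), and the namespace URI is taken from the sample.

import java.nio.file.Files;
import java.nio.file.Paths;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ParseOrderEvent {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        // Placeholder path holding the SOAP <Body> payload shown above
        Document doc = dbf.newDocumentBuilder()
                .parse(Files.newInputStream(Paths.get("/tmp/order-event.xml")));

        String ns = "http://xmlns.oracle.com/apps/scm/doo/decomposition/DooDecompositionOrderStatusUpdateComposite";
        String eventCode = doc.getElementsByTagNameNS(ns, "EventCode").item(0).getTextContent();
        String orderNumber = doc.getElementsByTagNameNS(ns, "SourceOrderNumber").item(0).getTextContent();
        System.out.println("Received event " + eventCode + " for order " + orderNumber);
    }
}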

Summary

This post explained how to publish Order Management events out of Supply Chain Management Cloud and use ICS publish and subscribe features to capture and propagate those events. This approach is suitable for R11 of SCM Cloud and ICS R16.4.1. Subsequent releases of these products might offer equivalent or better event-publishing capabilities out-of-box. Refer to product documentation for later versions before implementing a solution based on this post.

Uploading a file to Oracle storage cloud service using REST API


Introduction

This is the second part of a two-part article which demonstrates how to upload data in near-real time from an on-premise Oracle database to Oracle Storage Cloud Service.

In the previous article of this series, we demonstrated Oracle GoldenGate functionality to write to a flat file using Apache Flume File Roll Sink. If you would like to read the first part in this article series please visit Oracle GoldenGate : Apply to Apache Flume File Roll Sink

In this article we will demonstrate using the cURL command which will upload the flat file to Oracle Storage Cloud Service.

We used the Oracle Big Data Lite Virtual Machine as the test bed for this article. The VM image is available for download on the Oracle Technology Network website.

Main Article

There are various tools available to access Oracle Storage Cloud Service. According to Best Practices – Data movement between Oracle Storage Cloud Service and HDFS, the cURL REST interface is appropriate for this requirement.

cURL REST Interface

REST API

REST API is used to manage containers and objects in the Oracle Storage Cloud Service instance. Anyone can access the REST API from any application or programming platform that understands the Hypertext Transfer Protocol (HTTP) and has Internet connectivity.

cURL is one of the tools used to access the REST interface. cURL is an open source tool used for transferring data which supports various protocols including HTTP and HTTPS. cURL is typically available by default on most UNIX-like hosts. For information about downloading and installing cURL, see Quick Start.

Oracle Storage Cloud Service ( OSCS )

Oracle Storage Cloud Service enables applications to store and manage contents in the cloud. Stored objects can be retrieved directly by external clients or by applications running within Oracle Cloud (For example: Big Data Preparation Cloud Service).

A container is a storage compartment that provides a way to organize the data stored in Oracle Storage Cloud Service. Containers are similar to directories, but with a key distinction: unlike directories, containers cannot be nested.

Prerequisites

First, we need access to the Oracle Storage Cloud Service and information about the Oracle Cloud user name, password, and identity domain.

credentials

Requesting an Authentication Token

Oracle Storage Cloud Service requires authentication for any operation against the service instance. Authentication is performed by using an authentication token. Authentication tokens are requested from the service by authenticating user credentials with the service. All provisioned authentication tokens are temporary and will expire in 30 minutes. We will include a current authentication token with every request to Oracle Storage Cloud Service.

Request an authentication token by running the following cURL command:

curl -v -s -X GET -H 'X-Storage-User: <my identity domain>:<Oracle Cloud user name>' -H 'X-Storage-Pass: <Oracle Cloud user password>' https://<myIdentityDomain>.storage.oraclecloud.com/auth/v1.0

We ran the above cURL command. The following is the output of this command, with certain key lines highlighted. Note that if the request includes the correct credentials, it returns the HTTP/1.1 200 OK response.

 

OSCS_Auth_token

 

From the output of the command we just ran, note the following:

– The value of the X-Storage-Url header.

This value is the REST endpoint URL of the service. This URL value will be used in the next step to create the container.

– The value of the X-Auth-Token header.

This value is the authentication token, which will be used in the next step to create the container. Note that the authentication token expires after 30 minutes; after it expires, you should request a fresh token.
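For clients that prefer plain Java over cURL, a minimal sketch of the same token request is shown below; the identity domain, username, and password are placeholders.

import java.net.HttpURLConnection;
import java.net.URL;

public class AuthTokenExample {
    public static void main(String[] args) throws Exception {
        String identityDomain = "myIdentityDomain";
        URL authUrl = new URL("https://" + identityDomain + ".storage.oraclecloud.com/auth/v1.0");

        HttpURLConnection conn = (HttpURLConnection) authUrl.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("X-Storage-User", identityDomain + ":myuser@example.com");
        conn.setRequestProperty("X-Storage-Pass", "myPassword");

        // HTTP 200 OK indicates the credentials were accepted
        System.out.println("Status: " + conn.getResponseCode());
        // These two headers are needed for all subsequent requests
        System.out.println("X-Storage-Url: " + conn.getHeaderField("X-Storage-Url"));
        System.out.println("X-Auth-Token: " + conn.getHeaderField("X-Auth-Token"));
        conn.disconnect();
    }
}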

Creating A Container

Run the following cURL command to create a new container:

curl -v -s -X PUT -H "X-Auth-Token: <Authentication Token ID>" https://storage.oraclecloud.com/v1/Storage-myIdentityDomain/myFirstContainer

– Replace the value of the X-Auth-Token header with the authentication token that you obtained earlier.
– Change https://storage.oraclecloud.com/v1/Storage-myIdentityDomain to the X-Storage-Url header value that you noted while getting an authentication token.
– And change myFirstContainer to the name of the container that you want to create.

Verifying that A Container is created

 Run the following cURL command:

curl -v -s -X GET -H "X-Auth-Token: <Authentication Token ID>" https://storage.oraclecloud.com/v1/Storage-myIdentityDomain/myFirstContainer

If the request is completed successfully, it returns the HTTP/1.1 204 No Content response. This response indicates that there are no objects yet in the new container.

In this exercise, we are not creating a new container; we will use an existing container to upload the file, so we don't need to verify container creation.

Uploading an Object

Once Oracle GoldenGate completes writing the records to a file in the /u01/ogg-bd/flumeOut directory, the cURL program reads the file present in that directory. It then uploads the file to create an object in the container myFirstContainer. Any user with the Service Administrator role or a role that is specified in the X-Container-Write ACL of the container can create an object.

We ran the following cURL command:

curl -v -X PUT -H "X-Auth-Token: <Authentication Token ID>" -T myfile https://<MyIdentityDomain>.storage.oraclecloud.com/v1/Storage-myIdentityDomain/myFirstContainer/myObject

When running this command we…
– Replaced the value of the X-Auth-Token header with the authentication token that we obtained earlier.
– Changed https://<MyIdentityDomain>.storage.oraclecloud.com/v1/Storage-myIdentityDomain to the X-Storage-Url header value that we noted while getting an authentication token.
– Changed myFirstContainer to the name of the container to which we want to upload.
– Changed myfile to the full path and name of the file that we want to upload.
– Changed myObject to the name of the object that we want to create in the container.

If the request is completed successfully, it returns the HTTP/1.1 201 Created response, as shown in the following output. We verified the full transfer by comparing “Content-Length” value.

 

Upload_to_OSCS

 

We also verified the proper transfer of the file to Oracle Storage Cloud Service using Big Data Preparation Cloud Service.

BDPCS_Source

Summary

In this article we demonstrated the functionality of the REST API, which uploads the data from the on-premise Big Data Lite VM to Oracle Storage Cloud Service. Combining both articles, we demonstrated moving data in near real-time from an on-premise Oracle database to Oracle Storage Cloud Service using Oracle GoldenGate and the REST API.


Bulk import of sales transactions into Oracle Sales Cloud Incentive Compensation using Integration Cloud Service


Introduction

The Sales Cloud Incentive Compensation application provides an API to import sales transactions in bulk. These could be sales transactions exported out of an ERP system. Integration Cloud Service (ICS) offers extensive data transformation and secure file transfer capabilities that can be used to orchestrate, administer and monitor file transfer jobs. In this post, let's look at an ICS implementation to transform and load sales transactions into Incentive Compensation. Instructions provided in this post are applicable to Sales Cloud Incentive Compensation R11 and ICS R16.4.1 or higher.

Main Article

Figure 1 provides an overview of the solution described in this post. A text file contains sales transactions, in CSV format, exported out of ERP Cloud. ICS imports the file from a file server using SFTP, transforms the data to a format suitable for Incentive Compensation and submits an import job to Sales Cloud. The data transfer is over encrypted connections end-to-end. ICS is Oracle’s enterprise-grade iPaaS offering, with adapters for Oracle SaaS and other SaaS applications and native adapters that allow connectivity to most cloud and on-premise applications. To learn more about ICS, refer to documentation at this link.

Figure1

Figure 1 – Overview of the solution

Implementation of the solution requires the following high-level tasks.

For the solution to work, ICS should be able to connect with Sales Cloud and the file server. ICS agents can easily enable connectivity if one of these systems is located on-premise, behind a firewall.

Configuring a file server to host ERP export file and enable SFTP

A file server is an optional component of the solution. If the source ERP system that produces the CSV file allows Secure FTP access, ICS can connect to it directly. Otherwise, a file server can host the files exported from the ERP system. One way to quickly achieve this is to provision a compute node on Oracle Public Cloud and enable SFTP access to a staging folder with read/write access for the ERP system and ICS.

Defining data mapping for file-based data import service

File-based data import service requires that each import job specify a data mapping. This data mapping helps the import service assign the fields in the input file to fields in the Incentive Compensation Transaction object. There are two ways to define such a mapping.

  • Import mapping from a Spreadsheet definition
  • Define a new import by picking and matching fields on UI

Here are the steps to complete import mapping:

  • Navigate to “Setup and Maintenance”.

Figure2

  • Search for “Define File Import” task list.

Figure3

  • Click on “Manage File Import Mappings” task from list.

Figure4

  • On the next page, there are options to look up an existing mapping or create a new one for a specified object type. The two options, import from file and create a new mapping, are highlighted.

Figure5

  • If you have an Excel mapping definition, then click on “Import Mapping”, provide the information and click “OK”.

Figure6

  • Otherwise, create a new mapping by clicking on “Actions” -> “Create”.

Figure7

  • The next page allows field-by-field mapping between the CSV file’s fields and fields under “Incentive Compensation Transactions”.

Figure8

The new mapping is now ready for use.

Identifying Endpoints

Importing sales transactions requires a file import web service and another, optional web service to collect transactions.

  • Invoke file-based data import and export service with transformed and encoded file content.
  • Invoke ‘Manage Process Submission’ service with a date range for transactions.

File-based data import and export service can be used to import data into and export data out of all applications on Sales Cloud. For this solution we'll use the “submitImportActivity” operation. The WSDL is typically accessible at this URL for Sales Cloud R11.

https://<Sales Cloud CRM host name>:<CRM port>/mktImport/ImportPublicService

The next task can be performed by logging into the Incentive Compensation application or by invoking a web service. The ‘Manage Process Submission’ service is specific to the Incentive Compensation application. The file-based import processes the input and loads the records into staging tables. The ‘submitCollectionJob’ operation of the ‘Manage Process Submission’ service initiates the processing of the staged records into Incentive Compensation. This service is typically accessible at the URL below. Note that this action can also be performed in the Incentive Compensation UI, as described in the final testing section of this post.

https://<IC host name>:<IC port number>/publicIncentiveCompensationManageProcessService/ManageProcessSubmissionService

Implementing an ICS Orchestration

An ICS orchestration glues the other components together in a flow. ICS orchestrations provide flexible ways to be invoked, such as scheduled triggers or an API interface. Orchestrations can perform a variety of tasks and implement complex integration logic. For the solution described in this post, ICS needs to perform the following tasks:

  • Connect to file server and import files that matches specified filename pattern.
  • Parse through file contents and for each record, transform the record to the format required by Incentive Compensation.
  • Convert the transformed file contents to Base64 format and store in a string variable.
  • Invoke the file-based data import web service with the Base64-encoded data. Note that this service triggers the import process but does not wait for its completion.
  • Optionally, the orchestration could invoke the ‘Manage Process Submission’ service after a delay to ensure that the file-based import has completed in Sales Cloud.

For the sake of brevity, only the important parts of the orchestration are addressed in detail here. Refer to ICS documentation for more information on building orchestrations.

 

FTP adapter configuration

FTP adapters could be used with ‘Basic Map my data’ or Orchestration patterns. To create a new FTP connection, navigate to “Connections” tab, click on “New Connection” and choose FTP as type of connection.

Under “Configure Connection” page, set “SFTP” drop down to “Yes”. FTP adapter allows login through SSL certificate or username and password.

Figure9

In the “Configure Security” page, provide credentials, such as a username and password or the password for an SSL certificate. The FTP adapter also supports PGP encryption of content.

Figure10

Transforming the source records to destination format

Source data from the ERP could be in a different format than the format required by the target system. ICS provides a sophisticated mapping editor to map fields of the source record to the target record. Mapping can be as easy as dragging and dropping fields from source to target, or can be implemented with complex logic using the XML stylesheet language (XSLT). Here is a snapshot of the mapping used for transformation, primarily to convert a date string from one format to another.

Figure15

Mapping for SOURCE_EVENT_DATE requires a transformation, which is done using transformation editor, as shown.

Figure16
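The effect of that mapping can be illustrated outside of ICS with plain Java. The source and target patterns below are assumptions based on the sample file (M/d/yyyy) and the date format used elsewhere in this post (yyyy-MM-dd); the actual target format is defined by the import mapping.

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DateFormatConversion {
    public static void main(String[] args) {
        DateTimeFormatter source = DateTimeFormatter.ofPattern("M/d/yyyy");   // e.g. 1/15/2016, as in the sample CSV
        DateTimeFormatter target = DateTimeFormatter.ofPattern("yyyy-MM-dd"); // assumed target format

        String sourceEventDate = "1/15/2016";
        String converted = LocalDate.parse(sourceEventDate, source).format(target);
        System.out.println(sourceEventDate + " -> " + converted); // prints 1/15/2016 -> 2016-01-15
    }
}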

Converting file content to a Base64-encoded string

File-based data import service requires the content of the CSV file to be Base64-encoded. This encoding can be done using a simple XML schema in the FTP invoke task of the orchestration. Here is the content of the schema.

<schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/opaque/" xmlns="http://www.w3.org/2001/XMLSchema">
<element name="opaqueElement" type="base64Binary"/>
</schema>
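Independent of the FTP adapter, the encoding step itself is straightforward; a minimal Java sketch is shown below (the file path is a placeholder).

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class EncodeCsvExample {
    public static void main(String[] args) throws Exception {
        // Placeholder path to the CSV file exported from the ERP system
        byte[] csvBytes = Files.readAllBytes(Paths.get("/tmp/ic_transactions.csv"));
        String encoded = Base64.getEncoder().encodeToString(csvBytes);
        // The encoded string is what gets mapped into the FileContent element
        // of the submitImportActivity request shown later in this post.
        System.out.println(encoded.substring(0, Math.min(60, encoded.length())) + "...");
    }
}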

To configure the FTP invoke, drag and drop the FTP connection configured previously.
Figure11

Select operations settings as shown.
Figure12

Choose options to select an existing schema.

Figure13

Pick the schema file containing the schema.

Figure14

The FTP invoke is ready to get a file via SFTP and return the contents to the orchestration as a Base64-encoded string. Map the content to a field in the SOAP message to be sent to Incentive Compensation.

Testing the solution

To test the solution, place a CSV-formatted file in the staging folder on the file server. Here is sample content from the source file.

SOURCE_TRX_NUMBER,SOURCE_EVENT_DATE,CREDIT_DATE,ROLLUP_DATE,TRANSACTION_AMT_SOURCE_CURR,SOURCE_CURRENCY_CODE,TRANSACTION_TYPE,PROCESS_CODE,BUSINESS_UNIT_NAME,SOURCE_BUSINESS_UNIT_NAME,POSTAL_CODE,ATTRIBUTE21_PRODUCT_SOLD,QUANTITY,DISCOUNT_PERCENTAGE,MARGIN_PERCENTAGE,SALES_CHANNEL,COUNTRY
TRX-SC1-000001,1/15/2016,1/15/2016,1/15/2016,1625.06,USD,INVOICE,CCREC,US1 Business Unit,US1 Business Unit,90071,SKU1,8,42,14,DIRECT,US
TRX-SC1-000002,1/15/2016,1/15/2016,1/15/2016,1451.35,USD,INVOICE,CCREC,US1 Business Unit,US1 Business Unit,90071,SKU2,15,24,13,DIRECT,US
TRX-SC1-000003,1/15/2016,1/15/2016,1/15/2016,3033.83,USD,INVOICE,CCREC,US1 Business Unit,US1 Business Unit,90071,SKU3,13,48,2,DIRECT,US

After ICS fetches this file and transforms content, it invokes file-based data import service, with the payload shown below.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/oracle/apps/marketing/commonMarketing/mktImport/model/types/" xmlns:mod="http://xmlns.oracle.com/oracle/apps/marketing/commonMarketing/mktImport/model/">
 <soapenv:Header/>
 <soapenv:Body>
 <typ:submitImportActivity>
 <typ:importJobSubmitParam>
 <mod:JobDescription>Gartner demo import</mod:JobDescription>
 <mod:HeaderRowIncluded>Y</mod:HeaderRowIncluded>
 <mod:FileEcodingMode>UTF-8</mod:FileEcodingMode>
 <mod:MappingNumber>300000130635953</mod:MappingNumber>
 <mod:ImportMode>CREATE_RECORD</mod:ImportMode>
 <mod:FileContent>U09VUkNFX1.....JUkVDVCxVUw==</mod:FileContent>
 <mod:FileFormat>COMMA_DELIMITER</mod:FileFormat>
 </typ:importJobSubmitParam>
 </typ:submitImportActivity>
 </soapenv:Body>
</soapenv:Envelope>


At this point, the import job has been submitted to Sales Cloud. The status of the file import job can be tracked in Sales Cloud, under ‘Setup and Maintenance’, by opening “Manage File Import Activities”. As shown below, several Incentive Compensation file imports have been submitted, with status ‘Base table upload in progress’.

Figure17

Here is a more detailed view of one job, opened by clicking on the status column of the job. This job has imported records into a staging table.

Figure18

To complete the job and see transactions in Incentive Compensation, follow one of these two methods.

  • Navigate to “Incentive Compensation” -> “Credits and Earnings” and click on “Collect Transactions” to import data
  • OR, invoke the ‘Manage Process Submission’ service with a payload similar to the sample snippet below.
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/incentiveCompensation/cn/processes/manageProcess/manageProcessSubmissionService/types/">
   <soapenv:Header/>
   <soapenv:Body>
      <typ:submitCollectionJob>
         <typ:scenarioName>CN_IMPORT_TRANSACTIONS</typ:scenarioName>
         <typ:scenarioVersion>001</typ:scenarioVersion>
         <typ:sourceOrgName>US1 Business Unit</typ:sourceOrgName>
         <typ:startDate>2016-01-01</typ:startDate>
         <typ:endDate>2016-01-31</typ:endDate>
         <typ:transactionType>Invoice</typ:transactionType>
      </typ:submitCollectionJob>
   </soapenv:Body>
</soapenv:Envelope>

Finally, verify that transactions are visible under Incentive Compensation, by navigating to “Incentive Compensation” -> “Credits and Earnings”, from home page and by clicking on “Manage Transactions”.

Figure19

Summary

This post explained a solution to import transactions into Incentive Compensation using web services provided by Sales Cloud and Incentive Compensation application. It also explained several features of Integration Cloud Service utilized to orchestrate the import. The solution discussed in this post is suitable for Sales Cloud R11 and ICS R16.4.1. Subsequent releases of these products might offer equivalent or better capabilities out-of-box. Refer to product documentation for later versions before implementing a solution based on this post.

 

 

Integrating Big Data Preparation (BDP) Cloud Service with Business Intelligence Cloud Service (BICS)


Introduction

This article presents an overview of how to integrate Big Data Preparation Cloud Service (BDP) with Business Intelligence Cloud Service (BICS).  BDP is a big data cloud service designed for customers interested in cleansing, enriching, and transforming their structured and unstructured business data.  BICS is a business intelligence cloud service designed for customers interested in gaining insights into their data with interactive visualizations, data model designs, reports and dashboards.  BDP and BICS are both cloud services under Oracle Platform as a Service (PaaS).

Users can upload data into BICS using various tools and technologies such as Data Sync, Oracle Application Express, PL/SQL, the BICS REST APIs, and Oracle Data Integrator, among others.  The BICS REST APIs allow users to programmatically load large volumes of data from both on-premise and cloud data sources into the cloud database service connected to BICS.

In BDP, users can define BICS as a target and publish the results of a BDP transform script into the cloud database service connected to BICS.  BDP uses the BICS REST APIs to accomplish the integration with BICS.  BDP users do not need to write REST API programs or learn how to use the BICS REST APIs; the only requirement is to create a BDP connection to access the cloud database service of BICS.  BDP transform scripts can be executed as BDP policies, and the results can be published directly into BICS.

The next sections of this article demonstrate how to create, execute, and publish the results of a BDP transform script into BICS.

 

Integrating Big Data Preparation (BDP) Cloud Service with Business Intelligence Cloud Service (BICS)

 

Figure 1, below, illustrates the BDP main dashboard, which includes a list of metrics such as the number of executed jobs, total configured sources, and the number of rows processed and transformed by the BDP cloud service.  A Quick Start option is also available for easy-access when creating sources, transform scripts, and uploading data into BDP.

 

Figure 1 - BDP Dashboard & Overview Page

Figure 1 – BDP Dashboard & Overview Page

The next sections of this article discuss the following concepts:

 

  • How to create a connection in BDP to access BICS.  In BDP, a connection is known as a BDP Source, and the BDP Source can be used as either a source or target connection. In this article, a BDP Source will be created and used as a target connection to publish into BICS the data results of running a BDP policy.
  • How to create a BDP Transform Script that uses two source datasets.  In BDP, this is known as blending two source datasets or files.
  • How to create and execute a BDP Policy.  BDP policies are used in BDP to configure the executions of BDP transform scripts.  In this article, a BDP policy will be created to run a BDP transform script and publish the data results into BICS.
  • How to view and use the published BDP dataset in the BICS data model and the BICS Visual Analyzer.

The first step to integrate BDP with BICS is to create a BDP Source connection.  Use the Quick Start menu to create a new BDP Source, and select CREATE SOURCE.  Figure 2, below, illustrates how to create this BDP Source connection.  Enter the name of the BDP Source, and select Oracle BICS as the connection type.

 

Figure 2 - Creating a BDP Source - Oracle BICS

Figure 2 – Creating a BDP Source – Oracle BICS

Enter the Service URL, Username, Password, and Domain of the BICS cloud service as shown on Figure 3, below.  The Service URL is the Service Instance URL of the BICS cloud service.

 

Figure 3 - Creating a BDP Source - Oracle BICS Definition

Figure 3 – Creating a BDP Source – Oracle BICS Definition

 

Test the new BDP Source using the Test option as shown on Figure 3, above.  Save the new BDP Source once the test connection is successful.

The next step is to create a BDP transform script.  Use the Quick Start menu to create a new BDP transform script, and select CREATE TRANSFORM.  Figure 4, below, illustrates an example of a new transform script called A_TEAM_CUSTOMERS.

 

Figure 4 - Creating a BDP Transform - Customer Accounts

Figure 4 – Creating a BDP Transform – Customer Accounts

On Figure 4, above, the BDP Source for this transform script is called A_TEAM_STORAGE.  This BDP Source has been previously defined by a BDP user.  The source type for this BDP Source is Oracle Cloud Storage.  A structured XLS file called ATeamCustomerAccounts.xls has been previously imported into this BDP Source.  This XLS file is used as the source dataset for this new BDP transform script.

 

 

Once the new BDP transform script is defined, it is then submitted to the BDP engine for data ingestion, data profiling, data de-duplication, and detection of sensitive data.  The BDP engine displays a notification on screen when the BDP transform script is ready for transformations.

Figure 5, below, shows the transform script called A_TEAM_CUSTOMERS after the BDP ingestion process.  A series of transformations have been added by the BDP user.  These transformations are illustrated in the Transform Script section as follows:

 

  • Columns Col_0001, Col_0005, and date_02 have been renamed – respectively – to cust_num, middle_initial, and exp_dt.
  • Column City has been enriched with a new data element (column) called Population.
  • The email domain has been extracted from column email.
  • Columns Col_0013, Col_0014, and Col_0019 have been removed from the transform script.
  • Columns us_phone and exp_dt have been reformatted to (999) 999-9999 and MM-dd-yyyy, respectively.
  • Columns credit_card and us_ssn (social security number) have been obfuscated to the first 12 and 5 digits, respectively.  BDP has detected that these two columns contain sensitive data.

 

Figure 5 - Creating a BDP Transform - Transform Script

Figure 5 – Creating a BDP Transform – Transform Script

The BDP section called Recommendations, on Figure 5, above, can be used to enrich the transform script with additional data elements – a feature that is part of the BDP cloud service.  For instance, the transformation script can be enriched with new columns such as country_name, capital, continent, and square_km, among others.

The Column Profile section, on Figure 5, above, provides a set of metrics for each of the columns found in the source file.  In this example, the column called first_name has been profiled as follows:

 

  • A total of 15,000 rows were found in the source file.
  • A total of 2,101 distinct values, 14.01% of total rows, were found in this column.
  • A total of 12,899 duplicate names, 85.99% of total rows, were found in this column.
  • A total of 10 distinct patterns were found in this column.
  • The type for this column is TEXT.
  • The bubble graph illustrates the most common data values for this column: Mich, Robert, Mary, James, and John are the most common first names.

In BDP, users can join the existing dataset of a transform script with one additional file; this feature is known as Blending.  To blend the existing dataset of a transform script with another file, select the Blend option located on the Transform Script page.  Figure 6, below, shows how to add a file.

 

Figure 6 - Creating a BDP Transform - Adding File to Blend

Figure 6 – Creating a BDP Transform – Adding File to Blend

In this example, on Figure 6, above, a json file called ATeamCustomerTransactionsLog.json will be blended with an existing dataset of a transform script.  This json file contains customer transaction logs, which will be used to enrich and add additional data columns to the BDP transform script.

Once the additional file is added to the BDP transform script, the BDP engine analyzes the new file, and recommends a set of blending conditions.  Figure 7, below, shows the recommended blending conditions or blending keys for these two datasets.

 

Figure 7 - Creating a BDP Transform - Blending Configuration

Figure 7 – Creating a BDP Transform – Blending Configuration

In this example, on Figure 7, above, the BDP engine has recommended using the cust_num column, which was found on both datasets, as the blending condition.  BDP has an underlying discovery engine that provides blending key recommendations based on data profiling.  When blending two datasets, BDP users can choose one of three types of output options:

 

  • Rows matching both datasets – All rows on both datasets must match the blending condition.  Those rows that do not match the blending condition – on either dataset – will be removed from the blended dataset.
  • Left Join – All rows from the first dataset will be included on the blended dataset even if the rows from the first dataset do not meet the blending condition.
  • Right Join – All rows from the second dataset will be included in the blended dataset even if the rows from the second dataset do not meet the blending condition.

Once the blending configuration is ready for submission, the BDP engine performs the blending operation.  A message will be displayed on screen when the blending operation is complete and the transform script is ready for additional modifications.  The BDP transform script will show a combined set of columns from both the first dataset and the second dataset.  BDP users will be able to perform additional transformations or accept additional recommendations on this blending BDP transform script.

In order to execute a BDP transform script, BDP users must create a BDP Policy.  Figure 8, below, shows the configuration of a BDP Policy.

 

Figure 8 - Creating a BDP Policy - Policy Details

Figure 8 – Creating a BDP Policy – Policy Details

When configuring a BDP Policy, as shown on Figure 8, above, BDP users must specify the following parameters:

 

  • Name of the BDP Policy.
  • Name of the BDP transform script.
  • If the transform script is a blending script, two source datasets are required: Source 1 and Source 2.
  • Name of the Target output.  In this example, the target output is BICS.  This is the BDP BICS Source created on a previous section of this article.
  • The scheduling information such as Time, Start Date, and End Date are required parameters as well.

The BDP Policy can be executed by selecting the Run option from the Policies screen, as shown on Figure 9, below:

 

Figure 9 - Creating a BDP Policy - Running Policy

Figure 9 – Creating a BDP Policy – Running Policy

Once the BDP Policy is submitted for execution, BDP users can monitor the progress of its execution using the BDP Job Details screen.  Figure 10, below, shows an example.

 

Figure 10 - Running a BDP Policy - Job Details

Figure 10 – Running a BDP Policy – Job Details

The Job Details screen, on Figure 10, above, shows the job Id and policy name:  4798122, and A_TEAM_CUSTOMERS, respectively.  The status of the execution of this policy is succeeded – the policy has been executed successfully.  The Metrics section shows a total of 15K rows – this is the number of rows that met the blending condition.  A total of 15K rows were transformed, and no errors were found during the execution of the policy.  Additional execution metrics can be found under sections:  Ingest Metrics, Prepare Metrics, and Publish Status.

When BDP executes a policy that uses BICS as a target, the BICS RESTful APIs are invoked, and the result-set of the BDP policy gets published into BICS.  BDP uses the name of the BDP policy to create a table in the database that is connected to BICS.

Figure 11, below, illustrates the name of the table, A_TEAM_CUSTOMERS, created by BDP during the execution of the BDP policy.

 

Figure 11 - Integrating BICS with BDP - Inspecting the BDP Data

Figure 11 – Integrating BICS with BDP – Inspecting the BDP Data

The newly published table can be seen on the BICS data model module, as shown on Figure 11, above.  In this example, on Figure 11, above, some of the dataset columns are illustrated:

 

  • The credit card number (CREDIT_CARD) and the social security (US_SSN) have been obfuscated.
  • The US phone (US_PHONE) and the credit card expiration date (EXP_DT) have been reformatted.

In BICS, this new dataset can be used to create warehouse facts and dimensions.  Furthermore, BICS users can expand the use of this dataset to other BICS features such as BICS Visual Analyzer.  Figure 12, below, shows an example of how this dataset, A_TEAM_CUSTOMERS, is used in Visual Analyzer:

 

Figure 12 - Integrating BICS with BDP - Creating a Project in BICS Visual Analyzer

Figure 12 – Integrating BICS with BDP – Creating a Project in BICS Visual Analyzer

In this example, on Figure 12, above, a set of metrics, the A-Team Metrics, have been created on BICS Visual Analyzer.  A new tile chart, Revenue Amount By State, has been created as well.  This tile chart uses an aggregated value, REVENUE_AMT, to sum revenue by state.  The source of the REVENUE_AMT column comes from the blended dataset – a source column from the blend file, ATeamCustomerTransactionsLog.json, the customer transaction log file.  The source of the state column comes from the blended dataset as well – a source column from the first dataset, ATeamCustomerAccounts.xls – the customer accounts file.

 

Conclusion

 

BDP users can publish the data produced by BDP policies directly into BICS without writing additional programs or having to learn BICS RESTful APIs.  In BICS, the data results of an executed BDP policy can be modeled as facts and dimensions.  BICS users can then create dashboards and reports with data that has been transformed by BDP.

 

For more ODI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for ODI.

 

ODI Related Articles

Oracle Big Data Preparation (BDP) Cloud Service Introduction – Video

Oracle Big Data Preparation (BDP) Cloud Service Quick Introduction – Video

Oracle BI Cloud Service REST APIs

Loading Data in Oracle Database Cloud Service

Extracting Data from BICS / Apex via RESTful Webservices

Integrating Oracle Data Integrator (ODI) On-Premise with Cloud Services

 

 

Round Trip On-Premise Integration (Part 1) – ICS to EBS


One of the big challenges with adopting Cloud Services Architecture is how to integrate the on-premise applications when the applications are behind the firewall. A very common scenario that falls within this pattern is cloud integration with Oracle E-Business Suite (EBS). To address this cloud-to-ground pattern without complex firewall configurations, DMZs, etc., Oracle offers a feature with the Integration Cloud Service (ICS) called Connectivity Agent (additional details about the Agent can be found under New Agent Simplifies Cloud to On-premises Integration). Couple this feature with the EBS Cloud Adapter in ICS and now we have a viable option for doing ICS on-premise integration with EBS. The purpose of this A-Team blog is to detail the prerequisites for using the EBS Cloud Adapter and walk through a working ICS integration to EBS via the Connectivity Agent where ICS is calling EBS (EBS is the target application). The blog is also meant to be an additional resource for the Oracle documentation for Using Oracle E-Business Suite Adapter.

The technologies at work for this integration include ICS (inbound REST adapter, outbound EBS Cloud Adapter), Oracle Messaging Cloud Service (OMCS), the ICS Connectivity Agent (on-premise), and Oracle EBS R12.  The integration is a synchronous (request/response) call to EBS where a new employee is created via the EBS HR_EMPLOYEE_API. The flow consists of a REST call to ICS with a JSON payload containing the employee details.  These details are then transformed in ICS from JSON to XML for the EBS Cloud Adapter. The EBS adapter sends the request to the on-premise connectivity agent via OMCS. The agent then makes the call to EBS, and the results are passed back to ICS via OMCS. The EBS response is transformed to JSON and returned to the invoking client. The following is a high-level view of the integration:

ICSEBSCloudAdapter-Overview

For a video summary of this blog, see Oracle Integration Cloud Service to Oracle E-Business Suite Round Trip Integration
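Before walking through the configuration, it may help to picture the invoking client. The sketch below posts a hypothetical employee JSON payload to the ICS integration endpoint with basic authentication; the endpoint URL, credentials, and field names are all placeholders and not the actual payload contract of the integration.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class InvokeIcsIntegration {
    public static void main(String[] args) throws Exception {
        // Placeholder ICS endpoint and credentials
        String endpoint = "https://myics.example.oraclecloud.com/integration/flowapi/rest/CREATE_EMPLOYEE/v01/employees";
        String credentials = Base64.getEncoder()
                .encodeToString("icsUser:icsPassword".getBytes(StandardCharsets.UTF_8));
        // Hypothetical employee fields; the real payload is defined by the REST trigger configuration
        String payload = "{\"firstName\":\"Jane\",\"lastName\":\"Doe\",\"hireDate\":\"2016-08-01\"}";

        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}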

Prerequisites

1. Oracle E-Business Suite 12.1.3* or higher.
2. EBS Configured for the EBS Cloud Adapter per the on-line document: Setting Up Oracle E-Business Suite Adapter from Integration Cloud Service.
a. ISG is configured for the EBS R12 Environment.
b. EBS REST services are configured in ISG.
c. Required REST services are deployed in EBS.
d. Required user privileges granted for the deployed REST services in EBS.
3. Install the on-premise Connectivity Agent (see Integration Cloud Service (ICS) On-Premise Agent Installation).

* For EBS 11 integrations, see another A-Team Blog E-Business Suite Integration with Integration Cloud Service and DB Adapter.

Create Connections

1. Inbound Endpoint Configuration.
a. Start the connection configuration by clicking on Create New Connection in the ICS console:
ICSEBSCloudAdapter-Connections_1-001
b. For this blog, we will be using the REST connection for the inbound endpoint. Locate and Select the REST Adapter in the Create Connection – Select Adapter dialog:
ICSEBSCloudAdapter-Connections_1-002
c. Provide a Connection Name in the New Connection – Information dialog:
ICSEBSCloudAdapter-Connections_1-003
d. The shell of the REST Connection has now been created. The first set of properties that needs to be configured is the Connection Properties. Click on the Configure Connectivity button and select REST API Base URL for the Connection Type. For the Connection URL, provide the ICS POD host since this is an incoming connection for the POD. A simple way to get the URL is to copy it from the browser location of the ICS console being used to configure the connection:
ICSEBSCloudAdapter-Connections_1-004
e. The last set of properties that need to be configured are the Credentials. Click on the Configure Credentials button and select Basic Authentication for the Security Policy. The Username and Password for the basic authentication will be a user configured on the ICS POD:
ICSEBSCloudAdapter-Connections_1-005
f. Now that we have all the properties configured, we can test the connection. This is done by clicking on the Test icon at the top of the window. If everything is configured correctly, the message The connection test was successful! is displayed:
ICSEBSCloudAdapter-Connections_1-006
2. EBS Endpoint Connection
a. Create another connection, but this time select Oracle E-Business Suite from the Create Connection – Select Adapter dialog:
ICSEBSCloudAdapter-Connections_2-001
b. Provide a Connection Name in the New Connection – Information dialog:
ICSEBSCloudAdapter-Connections_2-002
c. Click on the Configure Connectivity button. For the EBS Cloud Adapter there is only one property, the Connection URL. This URL is the hostname and port where the EBS metadata provider has been deployed. This metadata is provided by Oracle’s E-Business Suite Integrated SOA Gateway (ISG), and the setup/configuration of ISG can be found under the Prerequisites for this blog (item #2). The best way to see if the metadata provider has been deployed is to access the WADL using a URL like the following: http://ebs.example.com:8000/webservices/rest/provider?WADL where ebs.example.com is the hostname of your EBS metadata provider machine. The URL should return something like the following:
<?xml version = '1.0' encoding = 'UTF-8'?>
<application name="EbsMetadataProvider" targetNamespace="http://xmlns.oracle.com/apps/fnd/soaprovider/pojo/ebsmetadataprovider/" xmlns:tns="http://xmlns.oracle.com/apps/fnd/soaprovider/pojo/ebsmetadataprovider/" xmlns="http://wadl.dev.java.net/2009/02" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:tns1="http://xmlns.oracle.com/apps/fnd/rest/provider/getinterfaces/" xmlns:tns2="http://xmlns.oracle.com/apps/fnd/rest/provider/getmethods/" xmlns:tns3="http://xmlns.oracle.com/apps/fnd/rest/provider/getproductfamilies/" xmlns:tns4="http://xmlns.oracle.com/apps/fnd/rest/provider/isactive/">
   <grammars>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=getinterfaces_post.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=getmethods_post.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=getproductfamilies_post.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=isactive_post.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=getinterfaces_get.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=getmethods_get.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=getproductfamilies_get.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
      <include href="http://ebs.example.com:8000/webservices/rest/provider/?XSD=isactive_get.xsd" xmlns="http://www.w3.org/2001/XMLSchema"/>
   </grammars>
   <resources base="http://ebs.example.com:8000/webservices/rest/provider/">
      <resource path="getInterfaces/{product}/">
         <param name="product" style="template" required="true" type="xsd:string"/>
         <method id="getInterfaces" name="GET">
            <request>
               <param name="ctx_responsibility" type="xsd:string" style="query" required="false"/>
               <param name="ctx_respapplication" type="xsd:string" style="query" required="false"/>
               <param name="ctx_securitygroup" type="xsd:string" style="query" required="false"/>
               <param name="ctx_nlslanguage" type="xsd:string" style="query" required="false"/>
               <param name="ctx_language" type="xsd:string" style="query" required="false"/>
               <param name="ctx_orgid" type="xsd:int" style="query" required="false"/>
               <param name="scopeFilter" type="xsd:string" style="query" required="true"/>
               <param name="classFilter" type="xsd:string" style="query" required="true"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns1:getInterfaces_Output"/>
               <representation mediaType="application/json" type="tns1:getInterfaces_Output"/>
            </response>
         </method>
      </resource>
      <resource path="getInterfaces/">
         <method id="getInterfaces" name="POST">
            <request>
               <representation mediaType="application/xml" type="tns1:getInterfaces_Input"/>
               <representation mediaType="application/json" type="tns1:getInterfaces_Input"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns1:getInterfaces_Output"/>
               <representation mediaType="application/json" type="tns1:getInterfaces_Output"/>
            </response>
         </method>
      </resource>
      <resource path="getMethods/{api}/">
         <param name="api" style="template" required="true" type="xsd:string"/>
         <method id="getMethods" name="GET">
            <request>
               <param name="ctx_responsibility" type="xsd:string" style="query" required="false"/>
               <param name="ctx_respapplication" type="xsd:string" style="query" required="false"/>
               <param name="ctx_securitygroup" type="xsd:string" style="query" required="false"/>
               <param name="ctx_nlslanguage" type="xsd:string" style="query" required="false"/>
               <param name="ctx_language" type="xsd:string" style="query" required="false"/>
               <param name="ctx_orgid" type="xsd:int" style="query" required="false"/>
               <param name="scopeFilter" type="xsd:string" style="query" required="true"/>
               <param name="classFilter" type="xsd:string" style="query" required="true"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns2:getMethods_Output"/>
               <representation mediaType="application/json" type="tns2:getMethods_Output"/>
            </response>
         </method>
      </resource>
      <resource path="getMethods/">
         <method id="getMethods" name="POST">
            <request>
               <representation mediaType="application/xml" type="tns2:getMethods_Input"/>
               <representation mediaType="application/json" type="tns2:getMethods_Input"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns2:getMethods_Output"/>
               <representation mediaType="application/json" type="tns2:getMethods_Output"/>
            </response>
         </method>
      </resource>
      <resource path="getProductFamilies/">
         <method id="getProductFamilies" name="GET">
            <request>
               <param name="ctx_responsibility" type="xsd:string" style="query" required="false"/>
               <param name="ctx_respapplication" type="xsd:string" style="query" required="false"/>
               <param name="ctx_securitygroup" type="xsd:string" style="query" required="false"/>
               <param name="ctx_nlslanguage" type="xsd:string" style="query" required="false"/>
               <param name="ctx_language" type="xsd:string" style="query" required="false"/>
               <param name="ctx_orgid" type="xsd:int" style="query" required="false"/>
               <param name="scopeFilter" type="xsd:string" style="query" required="true"/>
               <param name="classFilter" type="xsd:string" style="query" required="true"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns3:getProductFamilies_Output"/>
               <representation mediaType="application/json" type="tns3:getProductFamilies_Output"/>
            </response>
         </method>
      </resource>
      <resource path="getProductFamilies/">
         <method id="getProductFamilies" name="POST">
            <request>
               <representation mediaType="application/xml" type="tns3:getProductFamilies_Input"/>
               <representation mediaType="application/json" type="tns3:getProductFamilies_Input"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns3:getProductFamilies_Output"/>
               <representation mediaType="application/json" type="tns3:getProductFamilies_Output"/>
            </response>
         </method>
      </resource>
      <resource path="isActive/">
         <method id="isActive" name="GET">
            <request>
               <param name="ctx_responsibility" type="xsd:string" style="query" required="false"/>
               <param name="ctx_respapplication" type="xsd:string" style="query" required="false"/>
               <param name="ctx_securitygroup" type="xsd:string" style="query" required="false"/>
               <param name="ctx_nlslanguage" type="xsd:string" style="query" required="false"/>
               <param name="ctx_language" type="xsd:string" style="query" required="false"/>
               <param name="ctx_orgid" type="xsd:int" style="query" required="false"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns4:isActive_Output"/>
               <representation mediaType="application/json" type="tns4:isActive_Output"/>
            </response>
         </method>
      </resource>
      <resource path="isActive/">
         <method id="isActive" name="POST">
            <request>
               <representation mediaType="application/xml" type="tns4:isActive_Input"/>
               <representation mediaType="application/json" type="tns4:isActive_Input"/>
            </request>
            <response>
               <representation mediaType="application/xml" type="tns4:isActive_Output"/>
               <representation mediaType="application/json" type="tns4:isActive_Output"/>
            </response>
         </method>
      </resource>
   </resources>
</application>

 

If you don’t get something like the above XML, here are some general troubleshooting steps:
1. Log in to the EBS console and navigate to Integrated SOA Gateway –> Integration Repository.
2. Click on the “Search” button on the right.
3. Enter “oracle.apps.fnd.rep.ws.service.EbsMetadataProvider” in the “Internal Name” field and click “Go”. (If this doesn’t list anything, you are missing a patch on the EBS instance. Please follow Note 1311068.1.)
4. Click on “Metadata Provider”.
5. Click on the “REST Web Service” tab.
6. Enter “provider” as is in the “Service Alias” field and click the “Deploy” button.
7. Navigate to the “Grants” tab and give grants on all methods.
If the WADL shows that the metadata provider is deployed and ready, the Connection URL is simply the host name and port where the metadata provider is deployed. For example, http://ebs.example.com:8000
ICSEBSCloudAdapter-Connections_2-003
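
If you prefer to check the metadata provider outside a browser, a short script can confirm the WADL responds. This is a minimal sketch only, assuming the example host ebs.example.com:8000 used above and placeholder EBS credentials; it is not part of the adapter configuration itself.

# Minimal sketch: confirm the EBS metadata provider WADL is reachable.
# The host, port, and credentials below are placeholders -- replace with your own.
import requests

EBS_HOST = "http://ebs.example.com:8000"          # hypothetical EBS host from the example above
WADL_URL = EBS_HOST + "/webservices/rest/provider?WADL"

# Basic authentication with an EBS user that has been granted access to the REST services
response = requests.get(WADL_URL, auth=("EBS_USER", "EBS_PASSWORD"), timeout=30)

print("HTTP status:", response.status_code)
# A healthy metadata provider returns the WADL XML shown above
if response.ok and "EbsMetadataProvider" in response.text:
    print("Metadata provider appears to be deployed and ready.")
else:
    print("Unexpected response -- check the ISG deployment and grants.")
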
d. The next set of properties that need to be configured are the Credentials. Click on the Configure Credentials button and select Basic Authentication for the Security Policy. The Username and Password for the basic authentication will be a user configured on the on-premise EBS environment granted privileges to access the EBS REST services:
ICSEBSCloudAdapter-Connections_2-004
NOTE: The Property Value for Username in the screen shot above shows the EBS sysadmin user. This will most likely “not” be the user that has grants on the EBS REST service. If you use the sysadmin user here and your integration (created later) “fails at runtime” with a “Responsibility is not assigned to user” error from EBS, either the grants on the EBS REST service are not created or a different EBS user needs to be specified for this connection. Here is an example error you might get:
<ISGServiceFault>
    <Code>ISG_USER_RESP_MISMATCH</Code>
    <Message>Responsibility is not assigned to user</Message>
    <Resolution>Please assign the responsibility to the user.</Resolution>
    <ServiceDetails>
        <ServiceName>HREmployeeAPISrvc</ServiceName>
        <OperationName>CREATE_EMPLOYEE</OperationName>
        <InstanceId>0</InstanceId>
    </ServiceDetails>
</ISGServiceFault>
e. Finally, we need to associate this connection with the on-premise Connectivity Agent that was configured as a Prerequisite. To do this, click on the Configure Agents button and select the agent group that contains the running on-premise Connectivity Agent:
ICSEBSCloudAdapter-Connections_2-005
f. Now that we have all the properties configured, we can test the connection. This is done by clicking on the Test icon at the top of the window. If everything is configured correctly, the message The connection test was successful! is displayed:
ICSEBSCloudAdapter-Connections_2-006
3. We are now ready to construct our cloud-to-ground integration using ICS and the connections that were just created.

Create Integration

1. Create New Integration.
a. Navigate to the Integrations page of the Designer section. Then click on Create New Integration:
ICSEBSCloudAdapter-CreateIntegration_1-001
b. In the Create Integration – Select a Pattern dialog, locate the Map My Data and select it:
ICSEBSCloudAdapter-CreateIntegration_1-002
c. Give the new integration a name and click on Create:
ICSEBSCloudAdapter-CreateIntegration_1-003
2. Configure Inbound Endpoint.
a. The first thing we will do is to create our inbound endpoint (entry point to the ICS integration). In the Integration page that opened from the previous step, locate the Connections section and find the REST connection configured earlier. Drag-and-drop that connection onto the inbound (left-hand side) of the integration labeled “Drag and Drop a Trigger”:
ICSEBSCloudAdapter-CreateIntegration_2-001
b. Since the focus of this blog is on the EBS Adapter, we will not go into the details of setting up this endpoint. The important detail for this integration is that the REST service defines both the request and the response in JSON format:

Example Request:

{
  "CREATE_EMPLOYEE_Input": {
    "RESTHeader": {
      "Responsibility": "US_SHRMS_MANAGER",
      "RespApplication": "PER",
      "SecurityGroup": "STANDARD",
      "NLSLanguage": "AMERICAN",
      "Org_Id": "204"
    },
    "InputParameters": {
      "HireDate": "2016-01-01T09:00:00",
      "BusinessGroupID": "202",
      "LastName": "Sled",
      "Sex": "M",
      "Comments": "Create From ICS Integration",
      "DateOfBirth": "1991-07-03T09:00:00",
      "EMailAddress": "bob.sled@example.com",
      "FirstName": "Robert",
      "Nickname": "Bob",
      "MaritalStatus": "S",
      "MiddleName": "Rocket",
      "Nationality": "AM",
      "SocialSSN": "555-44-3333",
      "RegisteredDisabled": "N",
      "CountryOfBirth": "US",
      "RegionOfBirth": "Montana",
      "TownOfBirth": "Missoula"
    }
  }
}

Example Response:

{
  "CreateEmployeeResponse": {
    "EmployeeNumber": 2402,
    "PersonID": 32871,
    "AssignmentID": 34095,
    "ObjectVersionNumber": 2,
    "AsgObjectVersionNumber": 1,
    "EffectiveStartDate": "2016-01-01T00:00:00.000-05:00",
    "EffectiveEndDate": "4712-12-31T00:00:00.000-05:00",
    "FullName": "Sled, Robert Rocket (Bob)",
    "CommentID": 1304,
    "AssignmentSequence": null,
    "AssignmentNumber": 2402,
    "NameCombinationWarning": 0,
    "AssignPayrollWarning": 0,
    "OrigHireWarning": 0
  }
}
ICSEBSCloudAdapter-CreateIntegration_2-002
3. Configure Outbound Endpoint.
a. Now we will configure the endpoint to EBS. In the Integration page, locate the Connections section and find the E-Business Suite adapter connection configured earlier. Drag-and-drop that connection onto the outbound (right-hand side) of the integration labeled “Drag and Drop an Invoke”:
ICSEBSCloudAdapter-CreateIntegration_3-001
b. The Configure Oracle E-Business Suite Adapter Endpoint configuration window should now be open. Provide a meaningful name for the endpoint and press Next >. If the window hangs or errors out, check to make sure the connectivity agent is running and ready. This endpoint is dependent on the communication between ICS and EBS via the connectivity agent.
ICSEBSCloudAdapter-CreateIntegration_3-002
c. At this point, the adapter has populated the Web Services section of the wizard with Product Family and Product metadata from EBS. For this example, the Product Family will be Human Resources Suite and the Product will be Human Resources. Once those are selected, the window will be populated with API details.
ICSEBSCloudAdapter-CreateIntegration_3-003
d. Next to the API label is a text entry field that can be used to search the list of APIs. This demo uses the HR_EMPLOYEE_API, which can be found by typing Employee in the text field and selecting Employee from the list:
ICSEBSCloudAdapter-CreateIntegration_3-004
e. The next section of the configuration wizard is the Operations. This will contain a list of “all” operations for the API including operations that have not yet been deployed in the EBS Integration Repository. If you select an operation and see a warning message indicating that the operation has not been deployed, you must go to the EBS console and deploy that operation in the Integration Repository and provide the appropriate grants.
ICSEBSCloudAdapter-CreateIntegration_3-005
f. This demo will use the CREATE_EMPLOYEE method of the HR_EMPLOYEE_API. Notice that there is no warning when this method is selected:
ICSEBSCloudAdapter-CreateIntegration_3-006
g. The Summary section of the configuration wizard shows all the details from the previous steps. Click on Done to complete the endpoint configuration.
ICSEBSCloudAdapter-CreateIntegration_3-007
h. Check point – the ICS integration should look something like the following:
ICSEBSCloudAdapter-CreateIntegration_3-008
4. Request/Response Mappings.
a. The mappings for this example are very straightforward in that the JSON was derived from the EBS input/output parameters, so the relationships are fairly intuitive. Also, the number of data elements has been minimized to simplify the mapping process. It is also a good idea to provide a Fault mapping:

Request Mapping:

ICSEBSCloudAdapter-CreateIntegration_4-001

Response Mapping:

ICSEBSCloudAdapter-CreateIntegration_4-002

Fault Mapping:

ICSEBSCloudAdapter-CreateIntegration_4-003
5. Set Tracking.
a. The final step to getting the ICS Integration to 100% is to Add Tracking. This is done by clicking on the Tracking icon at the top right-hand side of the Integration window.
ICSEBSCloudAdapter-CreateIntegration_5-001
b. In the Business Identifiers For Tracking window, drag-and-drop fields that will be used for tracking purposes. These fields show up in the ICS console in the Monitoring section for the integration.
ICSEBSCloudAdapter-CreateIntegration_5-002
c. There can be up to 3 fields used for the tracking, but only one is considered the Primary.
ICSEBSCloudAdapter-CreateIntegration_5-003
6. Save (100%).
a. Once the Tracking is configured, the integration should now be at 100% and ready for activation. This is a good time to Save all the work that has been done thus far.
ICSEBSCloudAdapter-CreateIntegration_6-001

Test Integration

1. Make sure the integration is activated, then open the endpoint URL, which is located by clicking on the Information (“i”) icon.
ICSEBSCloudAdapter-Test-001
2. Review the details of this page since it contains everything needed for the REST client that will be used for testing the integration.
ICSEBSCloudAdapter-Test-002
3. Open a REST test client and provide all the necessary details from the endpoint URL. The important details from the page include:
Base URL: https://[ICS POD Host Name]/integration/flowapi/rest/HR_CREATE_EMPLOYEE/v01
REST Suffix: /hr/employee/create
URL For Test Client: https://[ICS POD Host Name]/integration/flowapi/rest/HR_CREATE_EMPLOYEE/v01/hr/employee/create
REST Method: POST
Content-Type: application/json
JSON Payload:
{
  "CREATE_EMPLOYEE_Input": {
    "RESTHeader": {
      "Responsibility": "US_SHRMS_MANAGER",
      "RespApplication": "PER",
      "SecurityGroup": "STANDARD",
      "NLSLanguage": "AMERICAN",
      "Org_Id": "204"
    },
    "InputParameters": {
      "HireDate": "2016-01-01T09:00:00",
      "BusinessGroupID": "202",
      "LastName": "Demo",
      "Sex": "M",
      "Comments": "Create From ICS Integration",
      "DateOfBirth": "1991-07-03T09:00:00",
      "EMailAddress": "joe.demo@example.com",
      "FirstName": "Joseph",
      "Nickname": "Demo",
      "MaritalStatus": "S",
      "MiddleName": "EBS",
      "Nationality": "AM",
      "SocialSSN": "444-33-2222",
      "RegisteredDisabled": "N",
      "CountryOfBirth": "US",
      "RegionOfBirth": "Montana",
      "TownOfBirth": "Missoula"
    }
  }
}
The last piece that is needed for the REST test client is authentication information. Add Basic Authentication to the header with a user name and password for an authorized “ICS” user. The user that will be part of the on-premise EBS operation is specified in the EBS connection that was configured in ICS earlier. The following shows what all this information looks like using the Firefox RESTClient add-on:
ICSEBSCloudAdapter-Test-003
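
If a browser-based REST client is not available, the same test can be driven from a short script. This is a minimal sketch, assuming a placeholder ICS POD host and ICS user; the URL suffix and payload are the ones listed above (the payload is abbreviated here).

# Minimal sketch: invoke the activated ICS integration with the JSON payload above.
# The ICS host and credentials are placeholders -- replace with your own ICS POD and user.
import json
import requests

ICS_HOST = "https://your-ics-pod.example.com"   # placeholder for [ICS POD Host Name]
URL = ICS_HOST + "/integration/flowapi/rest/HR_CREATE_EMPLOYEE/v01/hr/employee/create"

payload = {
    "CREATE_EMPLOYEE_Input": {
        "RESTHeader": {
            "Responsibility": "US_SHRMS_MANAGER",
            "RespApplication": "PER",
            "SecurityGroup": "STANDARD",
            "NLSLanguage": "AMERICAN",
            "Org_Id": "204"
        },
        "InputParameters": {
            "HireDate": "2016-01-01T09:00:00",
            "BusinessGroupID": "202",
            "LastName": "Demo",
            "FirstName": "Joseph",
            "Sex": "M"
            # ... remaining fields as shown in the JSON payload above
        }
    }
}

# Basic authentication with an authorized ICS user (not the EBS user)
response = requests.post(
    URL,
    json=payload,
    auth=("ICS_USER", "ICS_PASSWORD"),
    headers={"Content-Type": "application/json"},
    timeout=60,
)
print(response.status_code)
print(json.dumps(response.json(), indent=2))   # expect a CreateEmployeeResponse structure
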
4. Before we test the integration, we can log in to the EBS console as the HRMS user. Then, navigating to Maintaining Employees, we can search for our user Joseph Demo by his last name. Notice that nothing comes up for the search:
ICSEBSCloudAdapter-Test-004
5. Now we send the POST from the RESTClient and review the response:
ICSEBSCloudAdapter-Test-005
6. We can compare what was returned from EBS to ICS in the EBS application. Here are the search results for the employee Joseph Demo:
ICSEBSCloudAdapter-Test-006
7. Here are the details for Joseph Demo:
ICSEBSCloudAdapter-Test-007
8. Now we return to the ICS console and navigate to the Tracking page of the Monitoring section. The integration instance shows up with the primary tracking field of Last Name: Demo.
ICSEBSCloudAdapter-Test-008
9. Finally, by clicking on the tracking field for the instance, we can view the details:
ICSEBSCloudAdapter-Test-009

Hopefully this walkthrough of how to do an ICS integration to an on-premise EBS environment has been useful. I am looking forward to any comments and/or feedback you may have. Also, keep an eye out for the “Part 2” A-Team Blog that will detail EBS business events surfacing in ICS to complete the ICS/EBS on-premise round trip integration scenarios.

Eloqua ICS Integration


Introduction

Oracle Eloqua, part of Oracle’s Marketing Cloud suite of products, is a cloud based B2B marketing platform that helps automate the lead generation and nurture process. It enables the marketer to plan and execute marketing campaigns while delivering a personalized customer experience to prospects.

In this blog I will describe how to integrate Eloqua with other SaaS applications using Oracle’s iPaaS platform, the Integration Cloud Service (ICS).
ICS provides an intuitive web-based integration designer for point-and-click integration between applications and a rich monitoring dashboard that provides real-time insight into the transactions, all of it running on a standards-based, mature runtime platform on Oracle Cloud. ICS boasts a large library of SaaS, application, and technology adapters that add to its versatility.

One such adapter is the Eloqua adapter, which allows synchronizing accounts, contacts and custom objects with other applications. The Eloqua Adapter can be used in two ways in ICS:

  • As the target of an integration, where external data is sent to Eloqua,
  • Or as the source of an integration, where contacts (or other objects) flowing through a campaign or program canvas in Eloqua are sent out to any external application.

This blog provides a detailed functional as well as technical introduction to the Eloqua Adapter’s capabilities.
The blog is organized as follows:

  a. Eloqua Adapter Concepts
  b. Installing the ICS App in Eloqua
  c. Creating Eloqua connection
  d. Designing the Inbound->Eloqua flows
  e. Designing the Eloqua->Outbound flows

This blog assumes that the reader has basic familiarity with ICS as well as Eloqua.

a. Eloqua Adapter concepts

In this section we’ll go over the technical underpinnings of the ICS Eloqua adapter.

An ICS Adapter is also referred to as an ICS Connector; the two terms mean the same thing.

The Eloqua adapter can be used in ICS integrations both to trigger the integration and as a target (Invoke) within an integration.

When used as a target:

  • The adapter can be used to create/update Account, Contact and custom objects defined within Eloqua.
  • Under the hood the adapter uses the Eloqua Bulk 2.0 APIs to import data into Eloqua. More on this later.

When used as a trigger:

  • The Eloqua Adapter allows instantiating an ICS integration when a campaign or program canvas runs within Eloqua.
  • The adapter must be used in conjunction with a corresponding ‘ICS App’ installed within Eloqua.

    Installing the ICS App is mandatory for triggering ICS integrations. The next section describes the installation.

    The marketer in Eloqua uses this app as a step in his campaign, and the app in turn invokes the ICS endpoint at runtime. The image below shows a sample ICS App in use in a campaign canvas within Eloqua:

  • Screen Shot 01-19-17 at 10.19 AM

  • The Eloqua ICS App resides within the Eloqua AppCloud, and complements the ICS Eloqua Adapter such that contacts and other objects flow out from the campaign, into the ICS App and eventually to the ICS integration. The image below describes this.
  • Screen Shot 01-19-17 at 12.35 PM

b. Installing the ICS App in Eloqua

As explained above, installing the ICS App in Eloqua is mandatory for the Eloqua->Outbound scenarios.

The app is available in Oracle Marketplace, and the installation is straightforward:

  • Open the ICS App on Oracle Marketplace at https://cloud.oracle.com/marketplace/app/AppICS
  • Click ‘Get App’. Accept the terms and conditions in the popup. Click ‘Next’. This will redirect you to your Eloqua login page. Sign in, and click ‘Accept and Install’
  • Screen Shot 11-30-16 at 06.22 PM

  • The next page takes you to the ICS configuration, where you need to provide the ICS URL, username and password. Click ‘Save’.
  • Screen-Shot-11-30-16-at-06.23-PM

  • Click ‘Sign In’ on the next page, thus providing the app access to Eloqua on your behalf (OAuth2).
  • Screen Shot 11-30-16 at 06.24 PM

  • Click ‘Accept’ on the next page.
  • The ICS App is now installed and ready to use as an ‘Action’ in Eloqua Canvas.

Now we will look at creating Eloqua connections and integrations in ICS.

c. Creating Eloqua connection in ICS

  1. Log on to the ICS home page. Click on ‘Create Connections’, then ‘New Connection’, and choose ‘Eloqua’.
  2. Name the connection appropriately.
  3. Screen Shot 01-17-17 at 10.15 PM

  4. The Connection Role can be:
    • a. Trigger, used in integrations where the connection is only used to trigger the integration.
    • b. Invoke, used in integrations where the connection is only used as target.
    • c. Or Trigger and Invoke, which can be used either way.
  5. Click ‘Create’. Click on the ‘Configure Security’ button, and enter the Eloqua Company name, username and password. Then click on ‘Test’.
  6. At this point ICS authenticates with Eloqua using the credentials provided above. The authentication process depends on the connection role:

  • a. If the ‘Invoke’ role is used, ICS performs HTTP Basic Authentication against https://login.eloqua.com using the base64-encoded “<company>\<username>:<password>” string. This process is described in more detail here (a minimal sketch of the check follows this list).
  • b. If the ‘Trigger’ or ‘Trigger and Invoke’ role is used, then along with the above test ICS also reaches out to the Eloqua AppCloud and checks whether the Eloqua ICS App has been installed. If it is not installed, the connection test will fail.
  • Once the connection test is successful, save the connection.
  • Now that the connection has been defined, we can use the Eloqua adapter in an ICS integration to sync data. Let’s take a look at designing the Inbound->Eloqua use cases, i.e., where Eloqua is the target application.
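
    As referenced above, the Basic Authentication string is just the company, username, and password combined and base64-encoded. The sketch below is illustrative only, with placeholder credentials; ICS performs the equivalent check internally when you test the connection, and the /id endpoint is assumed here as a simple way to verify the credentials.

# Illustrative sketch of the Basic Authentication check against Eloqua.
# Company, username, and password are placeholders.
import base64
import requests

company, username, password = "MyCompany", "integration.user", "secret"

# Eloqua expects base64("<company>\<username>:<password>") in the Authorization header
token = base64.b64encode(f"{company}\\{username}:{password}".encode()).decode()

response = requests.get(
    "https://login.eloqua.com/id",
    headers={"Authorization": "Basic " + token},
    timeout=30,
)
print(response.status_code)   # 200 with account details if the credentials are valid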

    d. Designing the Inbound->Eloqua flows

    The Eloqua adapter for inbound->Eloqua flows only relies on the Bulk 2.0 APIs and doesn’t need the ICS App to be installed in Eloqua.
    Below are the steps to configure the adapter.

    Design time:

    • Create an ICS integration, and drag the Eloqua connection on the target or as an invoke activity in an orchestration.
    • Name your endpoint and click Next.
    • On the operations page, you can choose the Eloqua business object that needs to be created/updated, as well as fields within the object. You can choose the field to be uniquely matched on, etc.
    • Screen Shot 01-19-17 at 03.06 PM

    • You can also set the Auto-Sync time interval so that the Eloqua data inserted into the staging area is periodically synced to the actual Eloqua tables.
    • Finish the wizard, complete the rest of the integration, and then activate it.

    At runtime, since we know that under the hood the Bulk Import APIs are being used, the following specific events happen:

    • Depending on the business object and the fields chosen, an import definition is created by POSTing to the “/bulk/2.0/<object>/imports/” Eloqua endpoint.
    • This returns a unique URI in the response, which is used to POST the actual data to Eloqua. Thus, as data gets processed through the ICS integration, it reaches the Eloqua Invoke activity, which internally uses the URI returned above to POST the data to Eloqua. The data is now in the Eloqua staging area, ready to be synced into Eloqua.
    • Now, depending on the ‘Auto-Sync’ interval defined in design-time, periodically the ‘/bulk/2.0/syncs’ endpoint is invoked which moves the data from the staging area to Eloqua database tables.

    The Bulk API steps above are described in more detail here.
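
    Purely as an illustration of those three steps, the sketch below drives the same Bulk 2.0 sequence directly: create an import definition, post records to the returned URI, then trigger a sync. The base URL, field statements, and credentials are placeholder assumptions; the adapter performs these calls for you, so this only shows the shape of the API.

# Illustrative sketch of the Eloqua Bulk 2.0 import sequence the adapter relies on.
# BASE_URL, credentials, and field statements are placeholders.
import requests

BASE_URL = "https://secure.p01.eloqua.com/api/bulk/2.0"   # your pod's bulk API base URL
AUTH = ("MyCompany\\integration.user", "secret")          # company\user, password

# 1. Create an import definition for the chosen object and fields
import_def = {
    "name": "Sample contact import",
    "fields": {"EmailAddress": "{{Contact.Field(C_EmailAddress)}}"},
    "identifierFieldName": "EmailAddress",
}
definition = requests.post(BASE_URL + "/contacts/imports", json=import_def, auth=AUTH).json()
import_uri = definition["uri"]                            # e.g. /contacts/imports/<id>

# 2. Post the actual records to the import's staging area
data = [{"EmailAddress": "jane.doe@example.com"}]
requests.post(BASE_URL + import_uri + "/data", json=data, auth=AUTH)

# 3. Trigger a sync to move the staged data into Eloqua (what the Auto-Sync interval does)
sync = requests.post(BASE_URL + "/syncs", json={"syncedInstanceUri": import_uri}, auth=AUTH).json()
print(sync.get("status"))                                 # e.g. 'pending', later 'success'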

    e. Designing the Eloqua->Outbound flows

    Design time :

    • Create an ICS integration, and drag the Eloqua connection as the source of the integration.
    • Select the business object, select the fields, followed by selecting the response fields.
    • Finish the wizard. Complete the integration and activate it.

    When the integration is activated, ICS makes a callout to the Eloqua ICS App, registering the integration name, its ICS endpoint, and request and response fields chosen above.

    At this point, back in the Eloqua UI, the marketer can configure the ICS App in her campaign by choosing among the activated ICS integrations and configuring them appropriately. For example, the screenshot below shows the ICS App’s ‘cloud action’ configuration screen from a sample Eloqua campaign, after an integration called ‘eloqua_blog’ with the Eloqua Adapter as source is activated:
    Screen Shot 01-19-17 at 03.41 PM

    The Marketer now runs her campaign. Contacts start flowing through various campaign steps, including the ICS App step, at which point the ICS App gets invoked, which in turn invokes the configured ICS integration.

    Integrating Sales Cloud and Service Cloud using ICS – troubleshooting issues with security configuration


    Introduction

    This blog talks about a few “gotchas” when integrating Oracle Sales Cloud (OSC) and Oracle Service Cloud (OSvC) using Oracle’s iPaaS platform, the Integration Cloud Service (ICS).
    The idea is to have a ready reckoner for some common issues, so that customers can hit the ground running when integrating OSvC and OSC using ICS.

     

    ICS Integrations for OSC OSvC

    Pre-built ICS integrations are available from Oracle for certain objects and can be downloaded from My Oracle Support. Contact Oracle Support to download the pre-built integrations and the documentation that comes along with them.

    The pre-built integration provides out-of-the-box standard integration for the following:

    •     Integrate Account and Contacts Objects from Sales Cloud to Service Cloud

    OSC_SVC_integrations

    •    Integrate Organization and Contact objects from Service Cloud to Sales Cloud

    SVC_OSC_integrations
    The pre-built integration is built using ICS and provides a few standard field mappings. It can serve as a template and users can update any custom field mappings as needed.
    The ICS pre-built integrations also serve as a reference for building other custom integrations between OSC and OSvC using ICS. ICS integrations can be built to integrate more objects, such as the Partner and Opportunity objects from OSC. Similarly, flows can be created to integrate the Asset and Incident objects from OSvC. Refer to the Sales Cloud Adapter documentation and OSvC Adapter documentation here for capabilities that can be used to build custom integrations.

     

    ICS Credential in Sales Cloud

    One issue users may face after following the steps in the pre-built integrations document and activating the ICS integrations is that the Account and Contact subscriptions do not flow from OSC to ICS.
    This is usually due to issues with creating the ICS credentials in OSC.
    Note that a csfKey entry in the Sales Cloud infrastructure stores the ICS credentials used by Sales Cloud. This key is used to connect to ICS and invoke the subscription-based integrations at runtime.

    Refer to this excellent blog post from my colleague Naveen Nahata, which gives simple steps to create the csfKey. The SOA Composer page where the csfKey and values are updated is shown below.

    001_CSF_Key

    Note that OSC ‘R12’ and ‘R11’ customers can now self-create the csfKey on the SOA Composer app using the steps from Naveen’s blog above.
    R10 customers, however, should create a support SR for the csfKey creation. Refer to the steps mentioned in the implementation guide document within the R10 pre-built integration download package.

    Invalid Field Errors in OSvC

    Further, when testing the integration of Contact or Account from OSC to OSvC, the ICS instances may go into a failed state, as shown below.

    OSC_SVC_ACCT_Created_Error_2
    Tracking the failed instance further may show an error message as seen below:

     

    ErrorMessage
    If the OSC_SVC_ACCOUNT_CREATED integration is ‘TRACE ENABLED’, then the Activity Stream/Diagnostic log file can be downloaded from ICS to further inspect the message payloads flowing in the integration instance.
    Searching the logs for request/response message payloads using the failed ICS instance ID may reveal that the issue is not really at the createOriginalSystemReference stage of the flow, but at the BatchResponse stage from Service Cloud.

     Error:  Invalid Field While processing Organization->ExternalReference(string)

    The response payload from OSvC will look as below

    <nstrgmpr:Create>
       <nstrgmpr:RequestErrorFault xmlns:nstrgmpr="urn:messages.ws.rightnow.com/v1_3">
          <n1:exceptionCode xmlns:nstrgmpr="http://xmlns.oracle.com/cloud/adapter/rightnow/OrganizationCreate_REQUEST/types">INVALID_FIELD</n1:exceptionCode>
          <n1:exceptionMessage xmlns:nstrgmpr="http://xmlns.oracle.com/cloud/adapter/rightnow/OrganizationCreate_REQUEST/types">Invalid Field While processing Organization-&gt;ExternalReference(string).</n1:exceptionMessage>
       </nstrgmpr:RequestErrorFault>
    </nstrgmpr:Create>

    Solution:

    Ensure that the credentials specified in the EVENT_NOTIFICATION_MAPI_USERNAME and EVENT_NOTIFICATION_MAPI_PASWD settings in OSvC do not refer to a ‘real’ OSvC user. OSvC user credentials may not have the rights to update External Reference fields. It is important that a dummy username/password is created in the EVENT_NOTIFICATION_MAPI_* fields in OSvC. And remember to use this credential when configuring the OSvC connection in ICS.

    ICS Credential in Service Cloud

    Another crucial part of the OSvC configuration is setting the credentials to use for outgoing requests from OSvC to ICS. This is done by setting the EVENT_NOTIFICATION_SUBSCRIBER_USERNAME and EVENT_NOTIFICATION_SUBSCRIBER_PASSWD parameters in OSvC. This credential is used by OSvC to connect and execute ICS integrations and must point to a ‘real’ user on ICS. This user should have the “Integration Cloud Service Runtime Role” granted to it.

    References:

    Using Event Handling Framework for Outbound Integration of Oracle Sales Cloud using Integration Cloud Service
    Service Cloud
    Sales Cloud

     

    Understanding the Enterprise Scheduler Service in ICS


    Introduction

     

    In many enterprise integration scenarios there is a requirement to initiate tasks at scheduled times or at user-defined intervals. The Oracle Integration Cloud Service (ICS) provides scheduling functionality via the Oracle Enterprise Scheduler to satisfy these types of requirements.  The Oracle Enterprise Scheduler Service (ESS) is primarily a Java EE application that provides time-based and schedule-based callbacks to other applications to run their jobs. Applications define jobs in Oracle ESS and specify when those jobs need to be executed; Oracle ESS then gives the application a callback at the scheduled time or when a particular event arrives. Oracle ESS does not execute the jobs itself; it generates a callback to the application, and the application actually executes the job request. This implies that the Oracle Enterprise Scheduler Service is not aware of the details of the job request; all the job request details are owned and managed by the application.

     

    What follows is a discussion of how ICS utilizes the ESS feature.  The document covers how the ESS threads are allocated and the internal preparation completed for file processing.

     

    Quick ICS Overview

     

    The Integration Cloud Service deployment topology consists of one cluster.  The cluster has two managed servers along with one administration server.  This bit of information is relevant to the discussion of how the Enterprise Scheduler Service works and how it is used by applications like an ICS flow that runs in a clustered HA environment.

    A common use case for leveraging ESS is to set up a schedule to poll for files on an FTP server at regular intervals.  When files are found and selected for processing, ESS does some internal scheduling of these files to ensure the managed servers are not overloaded.  Understanding how this file processing works and how throttling might be applied automatically is valuable information as you take advantage of this ICS feature.

    An integration can be scheduled using the ICS scheduling user interface (UI). The UI provides a basic and an advanced option.  The basic option provides UI controls to schedule when to execute the integration.

    schedulingBasic

     

     

    The advanced option allows one to enter an iCal expression for the scheduling of the integration.

    schedulingView

     

     

    The ESS allows two jobs to be executed at a time per JVM.  This equates to a maximum of four files being processed concurrently in a two-instance ICS cluster.  So how does ICS process these files, especially if multiple integrations could pick up twenty-five files at a time?

    As previously stated, there are two asynchronous worker resources per managed server. These asynchronous worker resources are known as an ICSFlowJob or AsyncBatchWorkerJob.   At the scheduled time, the ESS reserves one of the asynchronous worker resources, if one is available.  The initial asynchronous worker is the ICSFlowJob.  This is what we call the parent job.

    It is important to digress at this point to make mention of the database that backs the ICS product. The ICS product has a backing store of an Oracle database.  This database hosts the metadata for the ICS integrations, BPEL instances that are created during the execution of orchestration flows, and the AsyncBatchWorker metadata.  There is no need for the customer to maintain this database – no purging, tuning, or sizing.  ICS will only keep three days of BPEL instances in the database.  The purging is automatically handled by the ICS infrastructure.

    The ICSFlowJob invokes the static ScheduledProcessFlow BPEL. This process does the file listing, creates batches with one file per batch, and submits AsyncBatchWorker jobs to process the files. The AsyncBatchWorker jobs are stored within a database table.  These worker jobs will eventually be picked up by one of the two threads available to execute on the JVM. The graphic below demonstrates the parent and subprocess flows that have successfully completed.

    emcc

     

    Each scheduled integration will have at most ten batch workers (AsyncWorkerJob) created and stored within the database table.  The batch workers will have one or more batches assigned.  A batch is equivalent to one file. After the batch workers are submitted, the asynchronous worker resource, held by the ICSFlowJob, is released so it can be used by other requests.

    Scenario One

    1. One integration that is scheduled to run every 10 minutes
    2. Ten files are placed on the FTP server location all with the same timestamp

    At the scheduled time, the ICSFlowJob request is assigned one of the four available threads (from two JVMs) to begin the process of file listing and assigning the batches.  In the database there will be ten rows stored, since there are ten files.  Each row will reference a file for processing.  These batches will be processed at a maximum of four at a time.  Recall that there are only two threads per JVM for processing batches.

    At the conclusion of processing all of the AsyncWorkerJob subrequests one of the batch processing threads notifies the parent request, ICSFlowJob, that all of the subrequests have completed.

     

    Scenario Two

    1. Two integrations are scheduled to run every 10 minutes
    2. There are 25 files, per integration, at each integration’s specified FTP server location

    This scenario will behave just as in scenario one; however, since each integration has more than ten files to process, the subrequests, AsyncWorkerJob, must each process more than one file.  Each integration will assign and schedule the file processing as follows:

    5 AsyncWorkerJob subrequests will process 2 files each

    5 AsyncWorkerJob subrequests will process 3 files each

    At the conclusion of the assignment and scheduling of the AsyncWorkerJob subrequests  there will be 20 rows in the database; 10 rows per integration.
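
    A small sketch of this assignment logic is shown below. It is illustrative only (the actual ICS internals are not exposed) and simply shows how 25 files end up spread across at most ten batch workers, matching the 2-files/3-files split described above.

# Illustrative only: distribute N files across at most 10 batch workers,
# mirroring the 25-file example above (5 workers get 3 files, 5 get 2 files).
def assign_batches(files, max_workers=10):
    workers = [[] for _ in range(min(max_workers, len(files)))]
    for i, f in enumerate(files):            # simple round-robin assignment
        workers[i % len(workers)].append(f)
    return workers

batches = assign_batches([f"file_{n}.csv" for n in range(1, 26)])
print([len(b) for b in batches])             # [3, 3, 3, 3, 3, 2, 2, 2, 2, 2]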

    The execution of the AsyncWorkerJobs is on a first-come, first-served basis.  Therefore, the 20 subrequests will more than likely be interleaved with each other, and the processing of all of the files will take longer than if the integrations had not been kicked off at the same time.  The number of scheduler threads to process the batch integrations does not change.  There will only be a maximum of two scheduler threads per JVM.

     

    Summary

    The ESS scheduler provides useful features for having batch processes kicked off at scheduled intervals.  These intervals are user defined providing great flexibility as to when to execute these batch jobs.  However, care must be taken to prevent batch jobs from consuming all of the system resources, especially when there are real-time integrations being run through this same system.

     

    The Integration Cloud Service has a built in feature to prevent batch jobs from overwhelming the service.  This is done by only allowing two scheduler threads to process files at a time per JVM.   This may mean that some batch integrations take longer; however, it prevents the system from being overwhelmed and negatively impacting other ICS flows not related to batch file processing with the ESS.

     

    As we have discussed in this article, the use case here is all about polling for files that ICS needs to process at specified times or intervals; however, the ESS may also be used to trigger integrations exposed as REST and SOAP-based web services.  When using ESS to initiate file processing, the system is limited to two scheduler threads per JVM.

     

    The polling approach may not always be the best approach, since the delivery of the file may not be on a regularly scheduled cycle.  When the delivery of the file, from the source, is not on a regular schedule then it is probably better to implement a push model.   In a coming blog, I will demonstrate how to implement the push model.  With the push model the system is no longer under the constraints of the two scheduler threads per JVM.

     

    To learn more about the Enterprise Scheduler Service, one should reference the Oracle documentation.

     

    ICS Connectivity Agent – Update Credentials


    When installing the Connectivity Agent, there are several mandatory command-line arguments that include a valid ICS username (-u=[username]) and password (-p=[password]). These arguments are used to verify connectivity with ICS during installation and are also stored (agent credentials store) for later use by the agent server. The purpose of storing them is to allow the running agent to make a heartbeat call to ICS. This heartbeat is used to provide status in the ICS console regarding the state of the Connectivity Agent. This blog will detail some situations/behaviors relating to the heartbeat that cause confusion when the ICS console contradicts observations on the Connectivity Agent machine.

    Confusing Behaviors/Observations

    The following is a real-world series of events that occurred for an ICS subscriber. Their agent had been started and running for quite a while. The ICS console was used to monitor the health of the agent (i.e., the green icon which indicates the agent is running). Then out of the blue, the console suddenly showed the agent was down (i.e., the red icon):

    AgentCredUpdate-01

    The obvious next step was to check the agent machine to make sure the agent was running. Looking through the standard out that was being captured showed that the agent was in fact still running:

    AgentCredUpdate-02

    Further investigation showed that the agent server logs did not indicate any problems. In an attempt to resolve this strange scenario, the agent server was bounced … but it failed to start with the following:

    AgentCredUpdate-03

    Although the -u and -p command-line parameters contained the correct credentials, the startAgent.sh indicated an error code of 401 (i.e., unauthorized). This error was very perplexing since the agent had been started earlier with the same command-line arguments. After leaving the agent server down for a while, another start was kicked off to demonstrate the 401 problem. Interestingly enough, this time the agent started successfully and went to a running state. However, the ICS console was still showing that the agent was down with no indication of problems on the Connectivity Agent machine. Another attempt was made to bounce the agent server and it again failed to start with a 401.

    At this point, the diagnostic logs were downloaded from the ICS console to see if there was any indication of problems on the ICS side. When analyzing the AdminServer-diagnostic.log, it showed many HTTP authentication/authorization failure messages:

    AgentCredUpdate-04

    At this point it was determined that the password for the ICS user associated with the Connectivity Agent had been changed without notifying the person responsible for managing the agent server. The series of odd behaviors was all tied to the heartbeat. When the ICS user password was changed, the running agent still had the old password. It was the repeated heartbeat calls with invalid credentials that caused the user account to be locked out in ICS. When a user account is locked, it is not accessible for approximately 30 minutes.

    This account locking scenario explained why the agent server could be started successfully and then fail with the 401 within a short period of time. When the account was not locked, the startAgent.sh script would successfully call ICS using the credentials from the command-line. Then the server would start and use the incorrect credentials from the credentials store for the heartbeat, thus locking the user account which caused the problem to repeat itself.

    The Fix

    To fix this issue, a WLST script (updateicscredentials.py) has been provided that will update the Connectivity Agent credentials store. The details on running it can be found in the comments at the top of the script:

    AgentCredUpdate-05

    When executing this script, it is important to make sure the agent server is running. Once the script is done you should see something like the following:

    AgentCredUpdate-06

    At this point, stop the agent server and wait 30 minutes to allow the user account to be unlocked before restarting the server. Everything should now be back to normal:

    AgentCredUpdate-07

    Possible Options For Less Than 30 Minute Waiting Period

    Although I have not yet had an opportunity to test the following out, in theory it should work. To avoid the 30 minute lockout period on ICS due to the Connectivity Agent heartbeat:

    1. Change the credentials on the Connectivity Agent server.
    2. Shutdown the Connectivity Agent server.
    3. Access the Oracle CLOUD My Services console and Reset Password / Unlock Account with the password just used for the agent:

    AgentCredUpdate-08

    4. Verify that the user can login to the ICS console (i.e., that the account is unlocked).
    5. Start the Connectivity Agent and allow the server to get to running state.
    6. Verify that “all is green” in the ICS console.

    Using Oracle Managed File Transfer (MFT) to Push Files to ICS for Processing


    Introduction

    In a previous article I discussed using the Enterprise Scheduler Service (ESS) to poll for and read files from MFT on a scheduled basis.  In that article we discussed how to process many files that have been posted to the SFTP server.  At the end of that article I mentioned the use of the push pattern for file processing.

    This article will cover how to implement that push pattern with Managed File Transfer (MFT) and the Integration Cloud Service (ICS).  We’ll walk through the configuration of MFT, creating the connections in ICS, and developing the integration in ICS.

    The following figure is a high-level diagram of this file-based integration using MFT, ICS, and an Oracle SaaS application.

    mft2ics

     

    Create the Integration Cloud Service Flow

    This integration will be a basic integration with an orchestrated flow.  The purpose is to demonstrate how the integration is invoked and the processing of the message as it enters the ICS application.  For this implementation we only need to create two endpoints.  The first is a SOAP connection that MFT will invoke, and the second connection will be to the MFT to write the file to an output directory.

    The flow could include other endpoints but for this discussion additional endpoints will not add any benefits to understanding the push model.

    Create the Connections

    The first thing to do is to create the connections to the endpoints required for the integration.  For this integration we will create two required connections.

     

    1. SOAP connection: This connection is what will be used by MFT to trigger the integration as soon as the file arrives in the specified directory within MFT (this will be covered in the MFT section of this article).
    2. SFTP connection: This connection will be used to write the file to an output directory within the FTP server.  This second connection is only to demonstrate the flow, the processing of the file, and then writing the file to an endpoint.  This endpoint could have been any endpoint, used to invoke another operation.  For instance, we could have used the input file to invoke a REST, SOAP, or one of many other endpoints.

    Let’s define the SOAP connection.

    SOAP_Identifier

    Figure 1

    Identifier: Provide a name for the connection

    Adapter: When selecting the adapter type choose the SOAP Adapter

    Connection Role: There are three choices for the connection role: Trigger, Invoke, and Trigger and Invoke.  We will use a role of Trigger, since MFT will be triggering our integration.

    SOAPConnectionProperties

    Figure 2

    Figure 2 shows the properties that define the endpoint.  The WSDL URL may be added by specifying the actual WSDL as shown above, or the WSDL can be consumed by specifying the host:port/uri/?WSDL.

    In this connection the WSDL was retrieved from the MFT embedded server.  This can be found at $MW_HOME/mft/integration/wsdl/MFTSOAService.wsdl.

    The suppression of the time stamp is specified as true, since the policy being used at MFT does not require the time stamp to be passed.

    Security Policy

    basic_authentication_security

     

     

    Figure 3

    For this scenario we will be using the Basic Authentication token policy.  The policy specified on this connection needs to match the policy that is specified for the MFT SOAP invocation.

    The second connection, as mentioned previously, is only for the purpose of demonstrating an end-to-end flow and is not essential to the push pattern itself.  It is a connection back to the MFT server.

    MFT_FTP_Identifier

    Figure 4

    Identifier: Provide a unique name for the connection

    Adapter: When selecting the adapter type choose the FTP Adapter

    Connection Role: For this connection we will specify “Trigger and Invoke”.

    Connection Properties

    MFT_FTP_Connection_Properties

    Figure 5

    FTP Server Host Address:  The IP address of the FTP server.

    FTP Server Port: The listening port of the FTP Server

    SFTP Connection:  Specify “Yes”, since the invocation will be over sFTP

    FTP Server Time Zone: The time zone where the FTP server is located.

    Security Policy

    MFT_FTP_Security

    Figure 6

    Security Policy:  FTP Server Access Policy

    User Name:  The name of the user that has been created in the MFT environment.

    Password: The password for the specified user.

    As a side note, it is recommended to use the host key for sFTP connectivity; it is not important for the purpose of this demonstration.  To better understand the host key implementation, refer to this blog.

    Create the Integration

    Now that the connections have been created we can begin to create the integration flow.  When the flow is triggered by the MFT SOAP request the file will be passed by reference.  The file contents are not passed, but rather a reference to the file is passed in the SOAP request.  When the integration is triggered the first step is to capture the size of the file.  The file size is used to determine the path to traverse through the flow.  A file size of greater than one megabyte is the determining factor.

    integration

     

    Figure 7

The selected path is determined by the incoming file size.  When MFT passes the file reference it also passes the size of the file, and we use this size to decide which path to take.  Why do we want to do this?

    If the file is of significant size then reading the entire file into memory could cause an out-of-memory condition.  Keep in mind that memory requirements are not just about reading the file but also the XML objects that are created and the supporting objects needed to complete any required transformations.

The ICS product provides a feature to prevent an OOM condition when reading large files.  The top path shown in Figure 7 demonstrates how to handle the processing of large files.  When processing a file of significant size, it is best to first download the file to ICS (an option provided by the FTP adapter when configuring the flow) and then process it using a "stage" action.  The stage action can chunk the large file and read it across multiple threads.  This article will not provide an in-depth discussion of the stage action; to better understand it, refer to the Oracle ICS documentation.
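To make the memory argument concrete, the sketch below shows the general idea of chunked reading in Java.  It is not ICS's internal implementation of the stage action, just an illustration of why processing a file in fixed-size chunks keeps memory usage bounded regardless of the file size (the file path is hypothetical):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ChunkedReadExample {
    public static void main(String[] args) throws IOException {
        byte[] buffer = new byte[64 * 1024]; // only 64 KB held in memory at any time
        long total = 0;
        // Hypothetical local file path standing in for the downloaded payload.
        try (InputStream in = Files.newInputStream(Paths.get("/tmp/large-input.csv"))) {
            int bytesRead;
            while ((bytesRead = in.read(buffer)) != -1) {
                // Process the current chunk here (transform, write out, etc.);
                // only the buffer is ever held in memory, never the whole file.
                total += bytesRead;
            }
        }
        System.out.println("Processed " + total + " bytes in 64 KB chunks");
    }
}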

The "otherwise" path in the execution flow above is taken when the file size is less than the configured maximum.  For the scenario in this blog, I set the maximum size to one megabyte.

The use case being demonstrated passes the file by reference, so in order to read or download the file we must obtain the reference location from MFT.  The incoming request provides this location, and we pass it, along with the target filename, to the read or download operation.  This is done with the XSLT mapping shown in Figure 8.

    FileReferenceMapping

    Figure 8

    The result mapping is shown in Figure 9.

    MappingPage

    Figure 9

     

The mapping of the fields is provided below.

Headers.SOAPHeaders.MFTHeader.TargetFilename -> DownloadFileToICS.DownloadRequest.filename

substring-before(substring-after(InboundSOAPRequestDocument.Body.MFTServiceInput.FTPReference.URL, '7522'), InboundSOAPRequestDocument.Headers.SOAPHeaders.MFTHeader.TargetFilename) -> DownloadFileToICS.DownloadRequest.directory

    Since the scenario is doing a pass-by-reference, MFT will pass the location of the file as something similar to the following: sftp://<hostname>:7522/payloads/ref/172/52/<filename>.  The location being passed is not the location of the directory where the file was placed by the source system.  Since the reference directory is determined by MFT, the name of the directory must be derived as demonstrated by the XSLT mapping shown above.
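To make the derivation concrete, here is a minimal sketch of the same string logic outside of XSLT, using a hypothetical host and file name (at runtime these values come from the MFT request):

public class ReferenceDirectoryExample {
    public static void main(String[] args) {
        // Hypothetical reference URL and target file name passed by MFT.
        String url = "sftp://mfthost.example.com:7522/payloads/ref/172/52/order.csv";
        String targetFilename = "order.csv";

        // Equivalent of substring-after(URL, '7522'): everything after the port.
        String afterPort = url.substring(url.indexOf("7522") + "7522".length());

        // Equivalent of substring-before(..., TargetFilename): the directory portion.
        String directory = afterPort.substring(0, afterPort.indexOf(targetFilename));

        System.out.println(directory); // prints /payloads/ref/172/52/
    }
}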

    As previously stated, this is a basic scenario intended to demonstrate the push process.  The integration flow may be as simple or complex as necessary to satisfy your specific use case.

    Configuring MFT

    Now that the integration has been completed it is time to implement the MFT transfer and configure the SOAP request for the callout.  We will first configure the MFT Source.

    Create the Source

The source specifies the location of the incoming file.  For our scenario the directory we place our file in will be /users/zern/in.  The directory location is your choice, but it must be relative to the embedded FTP server and the MFT user must have permission to read from that directory.  Figure 10 shows the configuration for the MFT Source.

    MFT_Source

    Figure 10

    As soon as the file is placed in the directory an “event” is triggered for the MFT target to perform the specified action.

    Create the Target

    The MFT target specifies the endpoint of the service to invoke.  In figure 11, the URL has been specified to the ICS integration that was implemented above.

    MFT_Target_Location

     

    Figure 11

The next step is to specify the security policy.  This policy must match what was specified on the connection defined in the ICS platform.  We are specifying the username_token_over_ssl_policy, as seen in Figure 12.

    MFT_Target_Policy

     

    Figure 12

Besides specifying the security policy, we must also choose to ignore the timestamp in the response.  Since a username_token policy is used, the request must also carry credentials, which are retrieved from the keystore by providing the csf-key value.

    Create the Transfer

The last step in this process is to bring the source and target together, which is the transfer.  It is within the transfer configuration that we specify the delivery preferences.  In this example we set the "Delivery Method" to "Reference" and the "Reference Type" to "sFTP".

     

    MFT_Transfer_Overview

    Figure 13

    Putting it all together

    1. A “*.csv” file is dropped at the source location, /users/zern/in.
    2. MFT invokes the ICS integration via a SOAP request.
    3. The integration is triggered.
4. The integration determines the size of the incoming file and, based on that size, the path of execution.
    5. The file is either downloaded to ICS or read into memory.  This is determined by the path of execution.
    6. The file is transformed and then written back to the output directory specified by the FTP write operation.
    7. The integration is completed.

    Push versus Polling

    There is no right or wrong when choosing either a push or poll pattern.  Each pattern has its benefits.  I’ve listed a couple of points to consider for each pattern.

    Push Pattern

    1. The file gets processed as soon as it arrives in the input directory.
2. You need to create two connections: one SOAP connection and one FTP connection.
    3. Normally used to process one file at a time.
    4. Files can arrive at any time, and there is no need to set up a schedule.

    Polling Pattern

    1. You must create a schedule to consume the file(s).  The polling schedule can be at either specific intervals or at a given time.
    2. You only create one connection for the file consumption.
    3. Many files can be placed in the input directory and the scheduler will make sure each file is consumed by the integration flow.
4. File processing can be delayed by up to the maximum polling interval.

    Summary

Oracle offers many SaaS applications, such as Fusion ERP, and several of these SaaS solutions provide file-based interfaces that require the input files to be in a specific format for each interface.  Integration Cloud Service is an integration gateway that can enrich and/or transform these files and then pass them along, either directly to an application or to an intermediate storage location like UCM, where the file is staged as input to SaaS applications like Fusion ERP HCM.

    With potentially many source systems interacting with Oracle SaaS applications it is beneficial to provide a set of common patterns to enable successful integrations.  The Integration Cloud Service offers a wide range of features, functionality, and flexibility and is instrumental in assisting with the implementation of these common patterns.

     

    Connecting ICS and Apache Kafka via REST Proxy API


    Introduction

Apache Kafka (Kafka for short) is a proven and well-known technology for a variety of reasons. First, it is very scalable and can handle hundreds of thousands of messages per second without the need for expensive hardware, and with close to zero fine-tuning, as you can read here. Another reason is its client API capabilities. Kafka allows connections from different platforms by providing a number of client APIs that make it easy for developers to connect to and transact with Kafka. Being easy to connect to is a major requirement for open-source projects.

In a nutshell, Kafka client APIs are divided into three categories:

* Native Clients: This is the preferred way to develop client applications that must connect to Kafka. These APIs allow high-performance connectivity and expose most of the features found in Kafka's clustering protocol. When using this API, developers are responsible for writing code to handle aspects like fault tolerance, offset management, etc. An example of this is the Oracle Service Bus Transport for Kafka, which has been built using the native clients and can be found here (a minimal producer sketch follows this list).

* Connect API: An SDK that allows the creation of reusable clients, which run on top of a pre-built connector infrastructure that takes care of details such as fault tolerance, execution runtime and offset management. The Oracle GoldenGate adapter has been built on top of this SDK, as you can read here.

* REST Proxy API: For all those applications that for some reason can use neither the native clients nor the Connect API, there is the option of connecting to Kafka using the REST Proxy API. This is an open-source project maintained by Confluent, the company behind Kafka, that allows REST-based calls against Kafka to perform transactions and administrative tasks. You can read more about this project here.
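For reference, a minimal native-client producer sketch in Java is shown below. It assumes the kafka-clients library is on the classpath and uses a hypothetical broker address and topic name; it is only meant to illustrate the native-client category, not this blog's integration approach:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NativeProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1.example.com:9092"); // hypothetical broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Send a single key/value record to a hypothetical topic named "orders".
            producer.send(new ProducerRecord<>("orders", "12345", "{\"message\":\"Hello World\"}"));
        }
    }
}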

    The objective of this blog is to detail how Kafka’s REST Proxy API can be used to allow connectivity from Oracle ICS (Integration Cloud Service). By leveraging the native REST adapter from ICS, it is possible to implement integration scenarios in which messages can be sent to Kafka. This blog is going to show the technical details about the REST Proxy API infrastructure and how to implement a use case on top of it.

    Use_Case_Diagram

    Figure 1: Use case where a request is made using SOAP and ICS delivers it to Kafka.

The use case is about leveraging ICS transformation capabilities to allow applications limited to the SOAP protocol to send messages to Kafka. There may be applications out there that have no REST support and can only interact with SOAP-based endpoints. In this pattern, ICS is used to adapt and transform the message so it can be properly delivered to Kafka. SOAP is just an example; it could be any other protocol/technology supported by ICS. In addition, any Oracle SaaS application that has built-in connectivity with ICS can also benefit from this pattern.

    Getting Started with the REST Proxy API

As mentioned before, the REST Proxy API is an open-source project maintained by Confluent. Its source code can be found on GitHub, here. Be aware that the REST Proxy API is not part of any Kafka deployment by default: if you download and install a community version of Kafka, the bits for the REST Proxy API will not be there. You need to explicitly build the project and integrate it with your Kafka installation. This can be a little tricky since the REST Proxy API project depends on other projects such as commons, rest-utils and the schema-registry.

Luckily, the Confluent folks provide an open-source version of their product that has everything pre-integrated, including the REST Proxy API and the other dependencies. This distribution is called Confluent Open Source and can be downloaded here. It is strongly recommended to start with this distribution, so you can be sure you will not face errors that might result from bad compilation/building/packaging. Oracle's own distribution of Kafka, called Event Hub Cloud Service, could be used as well.

Once you have a Kafka installation that includes the REST Proxy API, you are good to go. Everything was built to work out-of-the-box through easy-to-use scripts. The only thing you have to keep in mind is the service dependencies. In a typical Kafka deployment, the brokers depend on the Zookeeper service, which has to be continuously up and running. Zookeeper is required to keep metadata about the brokers, partitions and topics in a highly available fashion. Zookeeper's default port is 2181.

The services from the REST Proxy API also depend on Zookeeper. To have a REST Proxy API deployment, you need a service called the REST Server, which depends on Zookeeper. The REST Server also depends on another service called the Schema Registry, which in turn depends on Zookeeper as well. Figure 2 summarizes the dependency relationship between the services.

    Services_Depedencies

    Figure 2: Dependency relationship between the REST Proxy API services.

Although it may look that way, none of these services needs to become a SPOF (Single Point of Failure) or SPOB (Single Point of Bottleneck) in Kafka's architecture. All of them were designed from scratch to be idempotent and stateless. Therefore, you can have multiple copies of each service running behind a load balancer to meet your performance and availability goals. In order to start a Kafka deployment with the REST Proxy API, you need to execute the scripts shown in listing 1, in that order.

    /bin/zookeeper-server-start /etc/kafka/zookeeper.properties &

    /bin/kafka-server-start /etc/kafka/server.properties &

    /bin/schema-registry-start /etc/schema-registry/schema-registry.properties &

    /bin/kafka-rest-start /etc/kafka-rest/kafka-rest.properties &

    Listing 1: Starting a Kafka deployment with the REST Proxy API.

As you can see in listing 1, every script references a properties configuration file. These files are used to customize the behavior of a given service. Most properties in these files have been preset to suit a variety of workloads, so unless you are trying to fine-tune a given service, you most likely won't need to change them.

There is one exception, though. For most production environments you will run these services on different boxes for high-availability purposes. However, if you choose to run them on the same box, you may need to adjust some ports to avoid conflicts. That can easily be accomplished by editing the respective properties file and adjusting the corresponding property. If you are unsure about which property to change, consult the configuration properties documentation here.

    Setting Up a Public Load Balancer

This section may be considered optional depending on the situation. In order for ICS to connect to the REST Proxy API, it needs network access to the endpoints exposed by the REST Server. This is because ICS runs on the OPC (Oracle Public Cloud) and can only access endpoints that are publicly available on the internet (or endpoints exposed through the connectivity agent). Therefore, you may need to set up a load balancer in front of your REST Servers to allow this connection. This should be considered a best practice anyway: without it, you would need to set up firewall rules allowing public internet access to the boxes that host your REST Servers. Moreover, running without a load balancer would make it difficult to transparently change your infrastructure if you need to scale your REST Servers up or down. This blog shows how to set up OTD (Oracle Traffic Director) in front of the REST Servers, but any other load balancer that supports TCP/HTTP would suit the need.

In OTD, the first step is to create a server pool that contains all the exposed REST Server endpoints. In the setup built for this blog, I had a REST Server running on port 6666. Figure 3 shows an example of a server pool named rest-proxy-pool.

    OTD_Config_1

    Figure 3: Creating a server pool that references the REST Server services.

The second step is to create a route under your virtual server configuration that forwards any request matching a certain pattern to the server pool created above. In the REST Proxy API, any request that performs a transaction (producing or consuming messages) goes through a URI pattern that starts with /topics/*. Therefore, create a route that uses this pattern, as shown in figure 4.

    OTD_Config_2

    Figure 4: Creating a route to forward requests to the server pool.

Finally, you need to make sure that a functional HTTP listener is associated with your virtual server. This HTTP listener will be used by ICS when it sends messages out. In the setup built for this blog, I used an HTTP listener on port 8080 for non-SSL requests. Figure 5 depicts this.

    OTD_Config_3

    Figure 5: HTTP listener created to allow external communication.

Before moving on to the following sections, it is a good idea to validate the setup built so far, since there are a lot of moving parts that can fail. The best way to validate it is to send a message to a topic using the REST Proxy API and check whether that message is received by Kafka's console consumer. Start a new console consumer instance to listen for messages sent to the topic orders, as shown in listing 2.

/bin/kafka-console-consumer --bootstrap-server <BROKER_ADDRESS>:9092 --topic orders

    Listing 2: Starting a new console consumer that listens for messages.

    Then, send a message out using the REST Proxy API exposed by your load balancer. Remember that the request should pass through the HTTP listener configured on OTD. Listing 3 shows a cURL example that sends a simple message to the topic using the infrastructure built so far.

curl -X POST -H "Content-Type: application/vnd.kafka.json.v1+json" --data '{"records":[{"key":"12345", "value":{"message":"Hello World"}}]}' "http://<OTD_ADDRESS>:8080/topics/orders"

    Listing 3: HTTP POST to send a message to the topic using the REST Proxy API.

If everything was set up correctly, you should see the JSON payload appear in the output of the console consumer started in listing 2. There are some interesting things to note about the example shown in listing 3. Firstly, you may have noticed that the payload has a strictly defined structure. It is a JSON document with a single root element called "records", whose value is an array of key/value entries. This means you can send multiple records at once in a single request to maximize throughput, since you avoid performing multiple network calls.

Secondly, the "key" field is not mandatory; a record containing only a value will work as well. However, it is highly recommended to set a key every time you send a message. The key controls how records are grouped into partitions (records with the same key land in the same partition), which preserves per-key ordering and helps spread load evenly across the cluster.

Thirdly, you may also have noticed the content type header used in the cURL command. Instead of the plain application/json that most applications would use, we used application/vnd.kafka.json.v1+json. This is a requirement for the REST Proxy API to work, so keep it in mind while developing flows in ICS.
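If you would rather validate the endpoint from code than from cURL, the same request can be issued with plain Java. This is a minimal sketch, assuming the same hypothetical OTD address and orders topic used above; note that it sends the same vnd.kafka.json.v1+json content type:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RestProxyPostExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical load balancer address; replace with your OTD listener host and port.
        URL url = new URL("http://otd.example.com:8080/topics/orders");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/vnd.kafka.json.v1+json");

        // Same records envelope used in the cURL example.
        String payload = "{\"records\":[{\"key\":\"12345\",\"value\":{\"message\":\"Hello World\"}}]}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // A 200 response indicates the REST Proxy accepted the records.
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}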

    Message Design for REST Proxy API

Now it is time to start thinking about how we are going to map the SOAP messages sent to ICS into the JSON payload that needs to be sent to the REST Proxy API. This exercise is important because once you start using ICS to build the flow, it will ask for payload samples and message schemas that you may not have on hand. Therefore, this section focuses on generating these artifacts.

    Let’s start by designing the SOAP messages. In this use case we are going to have ICS receiving order confirmation requests. Each order confirmation request will contain the details of an order made by a certain customer. Listing 4 shows an example of this SOAP message.

<soapenv:Envelope xmlns:blog="http://cloud.oracle.com/paas/ics/blogs"
   xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <blog:confirmOrder>
         <order>
            <orderId>PO000897</orderId>
            <custId>C00803</custId>
            <dateTime>2017-02-09:11:06:35</dateTime>
            <amount>89.90</amount>
         </order>
      </blog:confirmOrder>
   </soapenv:Body>
</soapenv:Envelope>

    Listing 4: SOAP message containing the order confirmation request.

In order to build the SOAP message shown in listing 4, it is necessary to have the corresponding message schemas, typically found in a WSDL document. You can download the WSDL used for this blog here. It will be needed when we set up the connection in ICS later.

    The message that we really want to send to Kafka is in the JSON format. It has essentially all the fields shown on listing 4, except for the “orderId” field. Listing 5 shows the JSON message we need to send.

{
   "custId":"C00803",
   "dateTime":"2017-02-09:11:06:35",
   "amount":89.90
}

    Listing 5: JSON message containing the order confirmation request.

The "orderId" field was omitted on purpose: we are going to use it as the key of the record that will be sent to Kafka. This design provides a way to track orders by their identifiers. If you recall the JSON payload shown in listing 3, you will see that the JSON message shown in listing 5 becomes the "value" field. Listing 6 shows the concrete payload that needs to be built so the REST Proxy API can properly process it.

{
   "records":[
      {
         "key":"PO000897",
         "value": {
            "custId":"C00803",
            "dateTime":"2017-02-09:11:06:35",
            "amount":89.90
         }
      }
   ]
}

    Listing 6: JSON payload used to process messages using the REST Proxy API.

Keep in mind that although the REST Proxy API receives the payload shown in listing 6, what the topic consumers effectively receive is only the record containing the key and the value. When a consumer reads the "value" field of the record, it has access to the actual payload containing the order confirmation request. Figure 6 shows the mapping that needs to be implemented in ICS.

    Abstract_Source_Target_Mapping

    Figure 6: Message mapping to be implemented in ICS.

    Developing the Integration Flow

Now that we have gone through the configuration necessary to establish communication with the REST Proxy API, we can start developing the integration flow in ICS. Let's start with the configuration of the connections.

Create a SOAP-based connection as shown in figure 7. Since this connection will be used for inbound requests, you can skip the security configuration. Go ahead and attach the WSDL that contains the schemas to this newly created connection.

    Creating_SOAP_Conn_2

    Figure 7: SOAP-based connection used for inbound processing.

Next, create a REST-based connection as shown in figure 8. This is the connection that will be used to send messages out to Kafka. Therefore, make sure the "REST API Base URL" field contains the correct endpoint, which should point to your load balancer. Also make sure to append the /topics resource after the port.

    Creating_REST_Conn_2

    Figure 8: REST-based connection used for outbound processing.

With the inbound and outbound connections properly created, go ahead and create a new integration. For this use case we are going to use Basic Map Data as the integration style/pattern, although you could also leverage the outbound connection to the REST Proxy API in orchestration-based integrations.

    Creating_Flow_1

    Figure 9: Using Basic Map Data as the integration style for the use case.

Name the integration OrderService and provide a description, as shown in figure 10. Once the integration flow is created, drag the SOAP connection to the source area of the flow. That will trigger the SOAP endpoint creation wizard. Go through the wizard pages until you reach the last one, accepting all values suggested by default. Then, click the "Done" button to finish.

    Creating_Flow_2

    Figure 10: Setting up the details for the newly created integration.

ICS will create the source mapping according to the information gathered from the wizard, along with the information from the WSDL attached to the connection, as shown in figure 11. At this point we can drag the REST connection to the target area of the flow, which triggers the REST endpoint creation wizard.

    Creating_Flow_6

    Figure 11: Integration flow with the inbound mapping built.

Unlike the SOAP endpoint creation wizard, the REST endpoint creation wizard requires some changes to the options it shows. The first is setting the Kafka topic name in the "Relative Source URI" field. This is important because ICS uses this information to build the final URI that is sent to the REST Proxy API, so make sure to set the appropriate topic name. For this use case, we are using a topic named orders, as shown in figure 12. Also, select the "Configure Request Payload" option before clicking next.

    Creating_Flow_7

    Figure 12: Setting up details about the REST endpoint behavior.

On the next page, you will need to associate the schema that will be used to parse the request payload. Select "JSON Sample" and upload a JSON sample file that contains a payload like the one shown in listing 6. Make sure the JSON sample has at least two entries in the array section. ICS validates whether the samples provided have enough information to generate the internal schemas; if a JSON sample contains an array construct, ICS asks for at least two values within the array to make sure it is dealing with a list of values rather than a single value. You can grab a copy of a valid JSON sample for this use case here.
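For reference, a valid sample with two array entries might look like the snippet below; the second entry's values are purely illustrative:

{
   "records":[
      {
         "key":"PO000897",
         "value": {
            "custId":"C00803",
            "dateTime":"2017-02-09:11:06:35",
            "amount":89.90
         }
      },
      {
         "key":"PO000898",
         "value": {
            "custId":"C00804",
            "dateTime":"2017-02-09:11:07:10",
            "amount":45.00
         }
      }
   ]
}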

    Creating_Flow_8

    Figure 13: Setting up details about schemas and media types.

In the "Type of Payload" section, make sure to select the "Other Media Type" option to allow the usage of custom media types. Then, set application/vnd.kafka.json.v1+json as the value, as shown in figure 13. Click next and review the options. If everything looks like figure 14, click the "Done" button to finish the wizard.

    Creating_Flow_9

    Figure 14: Summary page of the REST endpoint creation wizard.

ICS will bring up the request and response mappings and expect you to set them up. Go ahead and create both. For the request mapping, simply associate the fields as shown in figure 15. Remember that this mapping should mimic what was shown earlier in figure 6, including the use of the "orderId" field as the record key.

    Creating_Flow_11

    Figure 15: Request mapping configuration.

The response mapping is much simpler: the only thing you have to do is associate the "orderId" field with the "confirmationId" field. The idea is to give the caller a way to know whether the transaction was successful. Returning the same order identifier value that was provided accomplishes this, because if any failure happens during message transmission, the REST Proxy API propagates a fault back to ICS, which in turn catches the fault and propagates it back to the caller. Figure 16 shows the response mapping.

    Creating_Flow_12

    Figure 16: Response mapping configuration.

Now set up some tracking fields (for this use case, the "orderId" field is a good choice) and finish the integration flow as shown in figure 17. You are now ready to activate and test the integration to check the end-to-end behavior of the use case.

    Creating_Flow_13

    Figure 17: Integration flow 100% complete in ICS.

You can download a copy of this use case here. Once the integration is active, you can validate that it is working correctly by starting a console consumer as shown in listing 2. Then, open your favorite SOAP client utility and import the WSDL from the integration. You can easily access the integration's WSDL in the UI by clicking the information icon of the integration, as shown in figure 18.

    Creating_Flow_14

    Figure 18: Retrieving the integration’s WSDL from the UI.

Once the WSDL is imported into your SOAP client utility, send a request payload like the one shown in listing 4 to validate the integration. If everything was set up correctly, you should see the JSON payload appear in the output of the console consumer started in listing 2.

    Conclusion

This blog has shown in detail how to configure ICS to send messages to Kafka. Since ICS has no built-in adapter for Kafka, we used the REST Proxy API project, which is part of the Kafka ecosystem.

    Integrating with Taleo Enterprise Edition using Integration Cloud Service (ICS)


Introduction

Oracle Taleo provides talent management functions as Software as a Service (SaaS). Taleo often needs to be integrated with other human resource systems. In this post, let's look at a few integration patterns for Taleo and implement a recommended pattern using Integration Cloud Service (ICS), a cloud-based integration platform (iPaaS).

    Main Article

Oracle Taleo is offered in Enterprise and Business editions.  Both are SaaS applications that often need to be integrated with other enterprise systems, on-premise or in the cloud. Here are the integration capabilities of the Taleo editions:

    • Taleo Business Edition offers integration via SOAP and REST interfaces.
    • Taleo Enterprise Edition offers integration via SOAP services and Taleo Connect Client (TCC).

Integrating with Taleo Business Edition can be achieved with SOAP or REST adapters in ICS, using a simple "Basic Map Data" pattern. Integrating with Taleo Enterprise Edition, however, deserves a closer look and consideration of alternative patterns. Taleo Enterprise provides three ways to integrate, each with its own merits.

Integration using Taleo Connect Client (TCC) is recommended for bulk integration. We'll also address SOAP integration for the sake of completeness. To jump to a sub-section directly, click one of the links below.


    Taleo SOAP web services
    Taleo Connect Client (TCC)
    Integrating Taleo with EBS using ICS and TCC
    Launching TCC client through a SOAP interface


    Taleo SOAP web services

Taleo SOAP web services provide synchronous integration; web service calls update the system immediately. However, there are restrictive metered limits on the number of invocations and the number of records per invocation, in order to minimize the impact on the live application. These limits might require several web service invocations to finish a job that other alternatives could complete in a single execution.  Figure 1 shows a logical view of such an integration using ICS.

    Figure1

Figure 1

    ICS integration could be implemented using “Basic Map Data” for each distinct flow or using “Orchestration” for more complex use cases.


    Taleo Connect Client (TCC)

As stated previously, TCC provides the best way to integrate with Taleo Enterprise. TCC has a design editor to author export and import definitions and run configurations. It can also be run from the command line to execute import or export jobs. A link to another post introducing TCC is provided in the References section.

    Figure3

    Figure 3

Figure 3 shows a logical view of a solution using TCC and ICS. In this case, ICS orchestrates the flow by interacting with HCM and Taleo. TCC is launched remotely through a SOAP service. TCC, the SOAP launcher service and a staging file system are deployed to an IaaS compute node running Linux.


    Integrating Taleo with EBS using ICS and TCC

Let's look at a solution to integrate Taleo and the EBS Human Resources module, using ICS as the central point for scheduling and orchestration. This solution is suitable for ongoing scheduled updates involving a few hundred records per run. Figure 4 represents the solution.

    Figure4

    Figure 4

TCC is deployed to a host accessible from ICS. The same host runs a Java EE container, such as WebLogic or Tomcat. The launcher web service deployed to the container launches the TCC client upon a request from ICS. The TCC client, depending on the type of job, either writes a file to a staging folder or reads a file from that folder.  The staging folder could be local or on a shared file system, accessible to ICS via SFTP.  Here are the steps performed by the ICS orchestration.

    • Invoke launcher service to run a TCC export configuration. Wait for completion of the export.
    • Initiate SFTP connection to retrieve the export file.
• Loop through the contents of the file. For each row, transform the data and invoke the EBS REST adapter to add the record. Stage the response from EBS locally.
    • Write the staged responses from EBS to a file and transfer it via SFTP to a folder accessible to TCC.
    • Invoke launcher to run a TCC import configuration. Wait for completion of the import.
• At this point, bi-directional integration between Taleo and EBS is complete.

This solution demonstrates the capability of ICS to seamlessly integrate SaaS applications and on-premise systems. ICS triggers the job and orchestrates the export and import activities in a single flow. When the orchestration completes, both Taleo and EBS are updated. Without ICS, the solution would consist of a disjointed set of jobs that could be managed by different teams and might require lengthy triage to resolve issues.


    Launching TCC client through a SOAP interface

Taleo Connect Client can be run from the command line to execute a configuration that exports or imports data. A cron job or Enterprise Scheduling Service (ESS) could launch the client. However, enabling the client to be launched through a service allows a more cohesive flow in the integration tier and eliminates redundant scheduled jobs.

Here is sample Java code to launch a command-line program. It launches the TCC client, waits for completion and captures the command output; a small ReadStream helper thread drains the process output and error streams. Note that the code should be tailored to specific needs, given suitable error handling, and tested for function and performance.

package com.test.demo;
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;

public class tccClient {

    // Helper thread that drains a process stream so the child process does not block on full buffers.
    static class ReadStream extends Thread {
        private final String name;
        private final InputStream is;
        ReadStream(String name, InputStream is) {
            this.name = name;
            this.is = is;
        }
        public void run() {
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(is))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println("[" + name + "] " + line);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public boolean runTCCJob(String strJobLocation) {
        Process p = null;
        try {
            System.out.println("Launching Taleo client. Path:" + strJobLocation);
            String cmd = "/home/runuser/tcc/scripts/client.sh " + strJobLocation;
            p = Runtime.getRuntime().exec(cmd);
            // Read both the output and error streams while waiting for the job to finish.
            ReadStream s1 = new ReadStream("stdin", p.getInputStream());
            ReadStream s2 = new ReadStream("stderr", p.getErrorStream());
            s1.start();
            s2.start();
            p.waitFor();
            return true;
        } catch (Exception e) {
            // Log and notify as appropriate.
            e.printStackTrace();
            return false;
        } finally {
            if (p != null) {
                p.destroy();
            }
        }
    }
}

Here is a sample launcher service using JAX-WS and SOAP.

package com.oracle.demo;
import javax.jws.WebService;
import javax.jws.WebMethod;
import javax.jws.WebParam;
import com.test.demo.tccClient;

@WebService(serviceName = "tccJobService")
public class tccJobService {

    @WebMethod(operationName = "runTCCJob")
    public String runTCCJob(@WebParam(name = "JobPath") String JobPath) {
        try {
            // Delegate to the TCC launcher class and return the outcome as a string.
            return String.valueOf(new tccClient().runTCCJob(JobPath));
        } catch (Exception ex) {
            ex.printStackTrace();
            return ex.getMessage();
        }
    }
}

Finally, this is a SOAP request that could be sent from an ICS orchestration to launch the TCC client.

    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:demo="http://demo.oracle.com/">
       <soapenv:Header/>
       <soapenv:Body>
          <demo:runTCCJob>
             <!--Optional:-->
             <JobPath>/home/runuser/tcc/exportdef/TCC-Candidate-export_cfg.xml</JobPath>
          </demo:runTCCJob>
       </soapenv:Body>
    </soapenv:Envelope>

    Summary

This post addressed alternative patterns to integrate with Taleo Enterprise Edition, along with the pros and cons of each pattern. It explained a demo solution based on the recommended pattern using TCC and provided code snippets and steps to launch the TCC client via a web service. At the time of this post's publication, ICS does not offer a Taleo-specific adapter. A link to the current list of supported adapters is provided in the References section.

     

    References

• Getting started with Taleo Connect Client (TCC) – ATeam Chronicles

    • Taleo Business Edition REST API guide

    • Latest documentation for Integration Cloud Service

    • Currently available ICS adapters

    Fusion Applications WebCenter Content Integration – Automating File Import/Export

    Introduction Oracle WebCenter Content, a component of Fusion Middleware, is a strategic solution for the management, security, and distribution of unstructured content such as documents, spreadsheets, presentations, and video. Oracle Fusion Applications leverages Oracle WebCenter Content to store all marketing collateral as well as all attachments. Import flow also uses it to stage the CSV files […]

    Fusion HCM Cloud Bulk Integration Automation

    Introduction Fusion HCM Cloud provides a comprehensive set of tools, templates, and pre-packaged integration to cover various scenarios using modern and efficient technologies. One of the patterns is the bulk integration to load and extract data to/from cloud. The inbound tool is the File Based data loader (FBL) evolving into HCM Data Loaders (HDL). HDL […]