An outbound integration job is a job that exports data from ENOVIA/3DExperience into another system (or systems) typically through some ESB (Enterprise Service Bus).
This kind of job is typically started by some event inside ENOVIA/3DExperience, for example a lifecycle promote event or a new object revision event. It may also be started on demand or via a scheduled event.
You can trigger a job manually from the ENOVIA/3DExperience MQL client, as this example shows:
execute program TIFTrigger -method newJob 1.2.3.4 tvc:jobcfg/NAME_OF_CONFIG.xml;
Here, 1.2.3.4 is an ENOVIA/3DExperience object identifier. For configurations in the default namespace, you may omit the tvc:jobcfg/ prefix.
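For example, for a configuration stored in the default namespace the prefix can be dropped:
execute program TIFTrigger -method newJob 1.2.3.4 NAME_OF_CONFIG.xml;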
The jobs invoked in this manner are executed asynchronously, which is the normal usage pattern. If you need to invoke jobs in a synchronous way, please read this chapter.
Read also this document for more information.
An integration job that extracts data from the ENOVIA/3DExperience database and sends it to some destination is called a "Transfer Data Job".
To define this kind of job, you create a job definition/configuration that specifies how to extract the data (i.e. the payload) as well as the destinations to which the data is transferred.
Transfer data jobs support job events for handling errors etc. Read more in the "Job Event" chapter.
Before digging into the details of the configuration format for such a job, we will start by showing an example of what such a configuration can look like.
<Job>
<TransferData>
<Payload>tvc:payload/PartReleased.xml</Payload>
<Destinations>
<SysOut/>
<File id="fileDest1"/>
<Http id="httpDest1"/>
<RabbitMQ id="rabbitMQ1">
<RetryAttempts>10</RetryAttempts>
<RetryDelay>5000,10000,20000,30000,60000</RetryDelay>
<Header name="type" value="${job.rpe.TYPE}" type="string"/>
<HeaderProvider>name of class implementing com.technia.tif.enovia.job.destination.HeaderProvider</HeaderProvider>
</RabbitMQ>
</Destinations>
</TransferData>
<Events>
<Error>
<SendMail>
<TO>...</TO>
<CC>...</CC>
<Subject>...</Subject>
<Message>
message...
${STACK_TRACE}
</Message>
</SendMail>
</Error>
</Events>
</Job>
The root tag in the definition is always <Job>. The first child element of the Job element defines the kind of job; in this case we will use the <TransferData> element to define the integration job.
Below the TransferData element, the following child elements are available:
Element | Description
---|---
<Payload> | Defines the name of the payload definition that defines how the payload is created.
<Destinations> | Defines the destinations to which the payload will be transferred.
<TransactionType> | Defines the ENOVIA/3DExperience transaction type to be used during the job.
Each of these elements is described in more detail in the sub-pages.
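For orientation, a minimal sketch combining all three child elements (the payload name and destination id reuse values from the example above; the placement of <TransactionType> relative to the other children is illustrative):
<Job>
  <TransferData>
    <Payload>tvc:payload/PartReleased.xml</Payload>
    <TransactionType>read</TransactionType>
    <Destinations>
      <File id="fileDest1"/>
    </Destinations>
  </TransferData>
</Job>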
The Payload element points out the configuration that defines how to generate the payload (i.e. the data) from ENOVIA/3DExperience.
The value of the Payload element refers either to an XML configuration of type "payload" OR to a Java class. See below for the two alternatives to define this.
Referencing a Payload Definition
Refer to a payload configuration that contains the rules for how to generate the payload.
<Job>
<TransferData>
<Payload>tvc:payload:namespace/ReleasedPart.xml</Payload>
The referenced Payload configuration is the file cfg/namespace/payload/ReleasedPart.xml.
See page Payload Definition for details on how to define the payload definition.
Referencing a Custom Payload Generator
Refer to a payload generator implemented in Java.
<Job>
<TransferData>
<Payload>java:com.acme.integration.MyPayloadGenerator</Payload>(1)
1 | Note that we use the "java:" prefix. The class must implement the interface com.technia.tif.enovia.payload.Payload |
The Destinations element within the <TransferData> parent element contains the target destination (or destinations) to which the generated payload will be transferred.
The connection-specific details for each destination are specified in a centralized file. Please see the page Configure Destinations for further details on this topic.
The table below shows the supported destination elements.
Element | Description | Supports Additional Headers | Supports Retry Attempts
---|---|---|---
<SysOut> | Useful during development to see the payload printed to the system output. | No | No
<File> | Sends the payload to a file. | No | No
<Http> | Sends the payload to an HTTP endpoint. | Yes | Yes
<SOAP> | Sends the payload to a SOAP service. | No | Yes
<JMS> | Sends the payload to a JMS queue or topic. | Yes | Yes
<RabbitMQ> | Sends the payload to a RabbitMQ queue/topic (via AMQP). | Yes | Yes
<NativeMQ> | Sends the payload to a Native MQ destination. | Yes | Yes
<Email> | Sends the payload to an email recipient. | No | No
<Kafka> | Sends the payload to an Apache Kafka topic. | Yes | No
<Custom> | Defines a custom destination class. The class must extend the base class com.technia.tif.enovia.job.TransferHandler. | - | Yes (if the class implements Retryable)
The destination element(s) contain an id attribute, which refers to a corresponding destination definition within the destinations configuration file (destinations.xml).
Example
The example below will send the payload to:
the system out (standard output)
the file destination configured with the ID "fileDest1" in the destinations.xml file
the SOAP destination configured with the ID "webservice-test"
the custom destination implemented by the class "com.acme.integration.MyCustomDestination"
<Job>
<TransferData>
<Destinations>
<SysOut/>
<File id="fileDest1"/>
<SOAP id="webservice-test"/>
<Custom className="com.acme.integration.MyCustomDestination"/>
</Destinations>
</TransferData>
</Job>
You may specify conditions that must be met in order to transfer the payload to a certain destination.
Within the job configuration, each destination supports two attributes called if and unless.
Here you specify the name (or names) of a property/parameter that must be present (if), or must not be present (unless), on the Job itself in order for the data to be transferred to the destination in question.
Below is an example of conditional inclusion of a destination:
<Job>
...
<TransferData>
...
<Destinations>
<Http id="http-1" if="send.to.1" />
<Http id="http-2" unless="send.to.1" />
</Destinations>
</TransferData>
</Job>
The subchapters below describe the configuration aspects for a JMS destination.
You can specify a destination to which replies are generated. This is done via the replyTo attribute.
The value of this attribute must be the ID of an existing destination inside the destinations configuration file.
See the chapter Configure Destinations for more information.
Below is an example of how to configure the replyTo.
<JMS id="dest-id" replyTo="reply-dest-id" />
And within the destinations.xml
file:
<JMS id="dest-id"
initialContextFactory="org.apache.activemq.jndi.ActiveMQInitialContextFactory"
providerURL="tcp://172.16.16.141:61616">
<Queue name="part.info" jndiKey="queue.%s"/>
</JMS>
<JMS id="reply-dest-id"
initialContextFactory="org.apache.activemq.jndi.ActiveMQInitialContextFactory"
providerURL="tcp://172.16.16.141:61616">
<Queue name="part.info.reply" jndiKey="queue.%s"/>
</JMS>
When you have set the replyTo attribute, you should set up a so-called reply handler that updates TIF when the reply arrives, in order to track whether or not the integration job failed. If you do not set up such a reply handler, your jobs will stay in the state "Awaiting Reply". See the Reply Handler chapter for further details.
The correlation id of a message is by default set to:
${tif.instance.id}|${job.id}|${destination.id}
This value contains macros, which at runtime are resolved to real values. The information in the correlation id is used, for example, by the reply handler to correlate a message back to its origin.
You may change this, but note that if you do, any reply handler you use must take the changed format into account.
Below is an example of how to configure the correlation id.
<JMS id="dest-id" correlationId="${job.id}/${destination.id}" />
The macros are described here.
You may specify a value for the JMS type property as shown below.
<JMS type="something" />
<JMS type="${macro}" />
The macros are described here.
You may specify a value for the JMS message priority as shown below. The priority is a value between 0 and 9.
<JMS priority="4" />
You may specify persistent or non-persistent delivery mode via the deliveryMode attribute as shown below.
<JMS deliveryMode="non-persistent"/>
<JMS deliveryMode="persistent"/>
When sending data to a JMS destination, TIF will by default use a StreamMessage for the content.
If you for some reason want to change the type, you can do so as shown below.
<JMS id="the-id" messageType="text"/>
<JMS id="the-id" messageType="byte"/>
<JMS id="the-id" messageType="stream"/>
Some destinations allow additional meta-data to be passed along with the payload itself. Such meta-data is called headers and consists of key/value pairs.
A header value can be static or dynamic. It is also possible to provide a custom class that provides the headers. These are exemplified below:
<Job>
<TransferData>
...
<Destinations>
<JMS ...>
<!-- Static parameter -->
<Header name="test1" value="bar"/>
<!-- Dynamic parameter, value taken from RPE (ENOVIA Runtime Program Env) -->
<Header name="test2" value="${job.rpe.TYPE}"/> (1)
<!-- Dynamic parameter, value taken from the additional arguments -->
<!-- passed via the trigger program object in ENOVIA for this job -->
<Header name="test3" value="${paramName}"/>
<!-- Define a custom header provider -->
<!-- Such class must implement the interface: -->
<!-- com.technia.tif.enovia.job.destination.HeaderProvider -->
<HeaderProvider>com.acme.foo.MyHeaderProvider</HeaderProvider>
</JMS>
...
1 | The macros are described in this document |
The subchapters below describe the configuration aspects for a RabbitMQ destination.
You can control the following aspects of a Rabbit MQ message:
Reply To
Correlation ID
Routing Key
Type
Priority
Delivery Mode
User ID
Application ID
You can specify an exchange to which replies are generated. This is done via the replyTo attribute.
The value of this attribute must be the name of an exchange in your RabbitMQ broker.
Below is an example of how to configure the replyTo.
<RabbitMQ id="rabbit-mq-dest-id" replyTo="NAME-OF-EXCHANGE" />
If a job is sent to a RabbitMQ destination whose replyTo has been set, the job will get the status "Awaiting Reply" after being processed.
When you have set the replyTo attribute, you should set up a so-called reply handler that updates TIF when the reply arrives, in order to track whether or not the integration job failed. If you do not set up such a reply handler, your jobs will stay in the state "Awaiting Reply". See this chapter for further details.
The routing key can be defined in the destinations.xml file, but a more flexible approach is to define the routing key per use case.
The routing key can be defined as a macro, allowing a dynamic value to be resolved based upon a job parameter or similar.
Example:
<RabbitMQ id="rabbit-mq-dest-id"
routingKey="${job.param.NAME-OF-PARAM}" />
The macros are described here.
The correlation id of a message is by default set to:
${tif.instance.id}|${job.id}|${destination.id}
This value contains macros, which at runtime are resolved to real values. The information in the correlation id is used, for example, by the reply handler to correlate a message back to its origin.
You may change this, but note that if you do, any reply handler you use must take the changed format into account.
Below is an example of how to configure the correlation id.
<RabbitMQ id="rabbit-mq-dest-id"
replyTo="NAME-OF-EXCHANGE"
correlationId="${job.id}/${destination.id}" />
The macros are described here.
You may specify a value for the type property as shown below.
<RabbitMQ type="something" />
<RabbitMQ type="${macro}" />
The macros are described here.
You may specify a value for the message priority as shown below. The priority is a value between 0 and 255.
<RabbitMQ priority="4" />
You may specify persistent or non-persistent delivery mode via the deliveryMode attribute as shown below.
<RabbitMQ deliveryMode="non-persistent"/>
<RabbitMQ deliveryMode="persistent"/>
The subchapters below describe the configuration aspects for an IBM MQ / Native MQ destination.
You can specify a destination to which replies are generated. This is done via the replyTo attribute.
The value of this attribute must be the ID of an existing native-mq destination inside the destinations configuration file.
See the chapter Configure Destinations for more information.
Below is an example of how to configure the replyTo.
<NativeMQ id="mq-dest-id" replyTo="mq-reply-dest-id" />
And within the destinations.xml
file:
<NativeMQ id="mq-dest-id"
queueManagerName="QM_technia_mq"
hostName="172.16.16.141"
port="1414"
characterSet="1208"
encoding="546"
channel="S_technia_mq"
connectOptions="">
<Queue name="partdata_req" options="INPUT_AS_Q_DEF,OUTPUT"/>
</NativeMQ>
<NativeMQ id="mq-reply-dest-id"
queueManagerName="QM_technia_mq"
hostName="172.16.16.141"
port="1414"
characterSet="1208"
encoding="546"
channel="S_technia_mq"
connectOptions="">
<Queue name="partdata_resp" options="INPUT_AS_Q_DEF,OUTPUT"/>
</NativeMQ>
When you have set the replyTo attribute, you should set up a so-called reply handler that updates TIF when the reply arrives, in order to track whether or not the integration job failed. If you do not set up such a reply handler, your jobs will stay in the state "Awaiting Reply". See this chapter for further details.
The correlation id of an MQ message is by default set to an ID that resolves to the ID of the transfer.
The correlation id of an MQ message can contain a maximum of 24 bytes.
This value accepts macros, which at runtime are resolved to real values. The information in the correlation id is used, for example, by the reply handler to correlate a message back to its origin.
You may change this, but note that if you do, any reply handler you use must take the changed format into account.
Below is an example of how to configure the correlation id.
<NativeMQ id="dest-id" correlationId="${some.macro}" />
See this section for more details regarding macros.
An MQ message sent from a TIF server contains per default the TIF instance id as the group id value. That information is used by reply handlers to filter out only those messages originating from a particular TIF instance.
If you only use one TIF instance, you may disable the use of group ids.
To disable the use of group ids, either specify this on the destination element as shown below, or globally in tif.custom.properties using the property nativeMQ.defaultUseGroupId = false.
<NativeMQ id="..." setGroupId="false" />
There are different strategies available for how to send the message. You may choose one of the following:
The message is read into a string and sent using writeString on the MQMessage.
The message is read into a string and sent using writeUTF on the MQMessage.
The message bytes are written using the write method on the MQMessage.
You configure the strategy as shown below:
<NativeMQ id="..." type="string" />
The default strategy is byte.
On the destination definition you may specify character-set, encoding, priority and expiracy. See this section for more details.
However, you may override these on the <NativeMQ> element per use case as well.
Example:
<NativeMQ id="..."
encoding="546"
characterSet="1208"
priority="7"
expiracy="604800" />
The File destination has one configurable option, namely whether or not the result is updated asynchronously.
Example:
<File id="file-1" asyncReply="true" />
If you utilize this feature you should configure a so-called reply handler. You can read more about reply handlers here.
The Kafka destination references the corresponding kafka destination in the file ${TIF_ROOT}/etc/destinations.xml using the id attribute.
See this document for more details.
In your job configuration, you use the element <Kafka> to send data to a Kafka topic.
You can define the topic to transfer the data to using the topic attribute as shown below.
If you omit this attribute, the Kafka handler will try to resolve the topic from the destination with the given id.
<Job>
<TransferData>
<Payload>...</Payload>
<Destinations>
<Kafka id="kafka-1" (1)
topic="TIF-JOB-TEST" /> (2)
</Destinations>
</TransferData>
</Job>
1 | Reference the destination via the id attribute |
2 | Here you define the topic to use. Note that you could define a topic in the core destination definition, and omit this one, or vice versa. The topic defined here takes precedence. |
Via the keyMacro attribute, you can specify a macro that resolves which key the produced Kafka record will use. Note that if you omit the keyMacro, no key will be associated with the record.
Example below:
<Job>
<TransferData>
<Payload>...</Payload>
<Destinations>
<Kafka id="kafka-1" topic="TIF-JOB-TEST" keyMacro="${job.source.name}" />
</Destinations>
</TransferData>
</Job>
This will use the name of the source object within ENOVIA for which the Job was created.
Please see this document for details around using macros.
If you want the job to be completed only after a reply has been received from Kafka (using a reply handler), you need to specify that the Kafka destination will produce a reply later.
This is controlled via the asyncReply attribute.
<Kafka id="kafka-1" asyncReply="true" ... />
You can also add headers to the Kafka record that is created.
First of all, any header specified within the used destination in ${TIF_HOME}/etc/destinations.xml will be added.
Additionally, you can specify extra headers per job configuration as shown below.
<Job>
<TransferData>
<Payload>...</Payload>
<Destinations>
<Kafka id="kafka-1" topic="TIF-JOB-TEST" keyMacro="${job.source.name}">
<Header name="replyTo" value="TIF-REPLY-TEST" />
<Header name="tifInstance" value="${tif.instance.id}" />
<Header name="jobId" value="${job.id}" />
<Header name="destinationId" value="${destination.id}" />
<Header name="sourceId" value="${job.source.id}" />
</Kafka>
</Destinations>
</TransferData>
</Job>
As illustrated, the headers also support macros, which allows passing in dynamic values.
Another example shown below:
<Job>
<TransferData>
...
<Destinations>
<Kafka ...>
<!-- Static parameter -->
<Header name="test1" value="bar"/>
<!-- Dynamic parameter, value taken from RPE (ENOVIA Runtime Program Env) -->
<Header name="test2" value="${job.rpe.TYPE}"/> (1)
<!-- Dynamic parameter, value taken from the additional arguments -->
<!-- passed via the trigger program object in ENOVIA for this job -->
<Header name="test3" value="${paramName}"/>
<!-- Define a custom header provider -->
<!-- Such class must implement the interface: -->
<!-- com.technia.tif.enovia.job.destination.HeaderProvider -->
<HeaderProvider>com.acme.foo.MyHeaderProvider</HeaderProvider>
</Kafka>
...
1 | The macros are described in this document |
The Email destination will by default include the content of the payload in the email body as is (i.e. attachPayload is false by default). You can instead configure the payload to be attached to the mail as a file, as shown below.
<Email id="email-dest-1" attachPayload="true" attachAs="payload"/>
The attachAs value is by default the static text payload, and you may change this to something more appropriate.
The value may be a macro that resolves to some dynamic string.
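For example, a sketch where the attachment name is resolved via the ${job.id} macro; the resulting name pattern is an assumption used for illustration only:
<Email id="email-dest-1" attachPayload="true" attachAs="payload-${job.id}"/>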
The HTTP destination references the corresponding http destination in the file ${TIF_ROOT}/etc/destinations.xml using the id attribute.
See this document for more details.
The subchapters below describe the configuration aspects for an HTTP destination.
In the ${TIF_ROOT}/etc/destinations.xml file, the URL for the HTTP destination is defined.
This URL may contain macros, which are resolved at runtime when the job is being executed.
The macro syntax is described in this document.
An example of this is shown below:
<Destinations>
<Http id="http-1"
url="https://acme.org/z/service-a/${job.param.identifier}"
retryCount="1" retryDelay="1000" method="PUT" />
</Destinations>
When transferring the payload to a destination, you can configure the HTTP request to send the file as a multipart request.
<Job>
...
<Destinations>
<Http id="system-x" multiPart="true" />
</Destinations>
</Job>
Per default the file is sent using the parameter file, but the attribute multiPartFileParam can be used to change this to something else.
Also, the attribute multiPartFileName may be used to specify a custom file name when sending the file.
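A sketch combining these attributes; the parameter name "document" and the file name "payload.xml" are placeholders, not values from the source:
<Http id="system-x"
      multiPart="true"
      multiPartFileParam="document"
      multiPartFileName="payload.xml" />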
When data is transferred to an HTTP destination, the HTTP response status code is evaluated. By default, a "2xx" status code (i.e. any code between 200 and 299 inclusive) is considered a success.
To override the default behavior, you can configure a custom list of success status codes via the <SuccessStatusCodes> configuration element. The element text must contain a comma-separated list of codes.
Below is an example where "200 OK", "201 Created" and "202 Accepted" are considered success status codes.
<Http id="my-http-dest">
<SuccessStatusCodes>200, 201, 202</SuccessStatusCodes>
</Http>
You may implement a custom Java class that can evaluate the HTTP response. The class must implement the interface com.technia.tif.enovia.job.destination.HttpStatusEvaluator.
The interface is defined as below:
package com.technia.tif.enovia.job.destination;
import com.technia.tif.core.annotation.API;
import com.technia.tif.core.io.http.HttpResponse;
import com.technia.tif.enovia.job.EnoviaJob;
@API
public interface HttpStatusEvaluator {
/**
* Evaluates the HTTP response.
*
* @param job Job object
* @param response HTTP response.
* @return Whether the status evaluates to true or false.
*/
boolean evaluate(EnoviaJob job, HttpResponse response);
}
An example configuration:
<Http id="my-http-dest">
<StatusEvaluator className="com.acme.tif.MyStatusEvaluator" />
</Http>
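For orientation, a minimal sketch of an evaluator matching the configuration above. The accessor used to read the numeric status code from HttpResponse is an assumption, so adapt it to the actual HttpResponse API in your TIF version:
package com.acme.tif;

import com.technia.tif.core.io.http.HttpResponse;
import com.technia.tif.enovia.job.EnoviaJob;
import com.technia.tif.enovia.job.destination.HttpStatusEvaluator;

public class MyStatusEvaluator implements HttpStatusEvaluator {

    @Override
    public boolean evaluate(EnoviaJob job, HttpResponse response) {
        // NOTE: getStatusCode() is an assumed accessor on HttpResponse; use
        // whatever method your TIF version exposes for the numeric status.
        int code = response.getStatusCode();
        // Treat any 2xx code as success, everything else as failure.
        return code >= 200 && code < 300;
    }
}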
Sometimes it is not possible to configure the URL for an HTTP destination in a static way, because it requires some input from the job being executed. To enable this, you may implement a custom Java class that provides the URL.
The class must implement the interface com.technia.tif.enovia.job.destination.URLProvider.
The interface is defined as below:
package com.technia.tif.enovia.job.destination;
import com.technia.tif.core.annotation.API;
import com.technia.tif.enovia.job.EnoviaJob;
@API
public interface URLProvider {
/**
* Provides URL. The implementation is responsible for possible URL
* encoding.
*
* @param job Job object
* @param destUrl URL configured in destination config
* @return URL
*/
String provide(EnoviaJob job, String destUrl);
}
An example configuration:
<Http id="my-http-dest">
<UrlProvider className="com.acme.tif.urlprovider.MyUrlProvider" />
</Http>
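Similarly, a minimal sketch of a URL provider matching the configuration above. The appended suffix is a hypothetical placeholder; a real implementation would resolve it from the job (and URL-encode it, as the interface Javadoc requires):
package com.acme.tif.urlprovider;

import com.technia.tif.enovia.job.EnoviaJob;
import com.technia.tif.enovia.job.destination.URLProvider;

public class MyUrlProvider implements URLProvider {

    @Override
    public String provide(EnoviaJob job, String destUrl) {
        // Hypothetical suffix; resolve this from the job in a real implementation
        // and remember that URL encoding is this method's responsibility.
        String suffix = "12345";
        return destUrl + "/" + suffix;
    }
}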
The subchapter below describes the configuration aspects for a custom destination.
When transferring data to a custom destination, it might be useful to store information related to the transfer and view it in the Admin UI afterwards.
This is possible by annotating the field(s) containing the information with the annotation com.technia.tif.core.transfer.TransferProperty.
The overall size of the information to be stored in the TIF database is limited to 128k.
For example:
import com.technia.tif.core.transfer.TransferProperty;
import com.technia.tif.enovia.job.TransferHandler;
public class MyDestination extends TransferHandler {
@TransferProperty
private int SomeField = 12345;
@TransferProperty
private String OtherField = "Hello World";
...
}
In addition, you may also return an object containing transfer properties by overriding the method getTransferProperties() in the class that implements the custom destination.
The returned object is also available in the transfer() method via the TransferHandlerContext argument.
For example:
import com.technia.tif.enovia.job.EnoviaJob;
import com.technia.tif.enovia.job.TransferHandler;
import com.technia.tif.enovia.job.TransferHandlerContext;
public class MyDestination extends TransferHandler {
@Override
public Object getTransferProperties(EnoviaJob job) {
return new MyProperties();
}
}
and
import com.technia.tif.core.transfer.TransferProperty;
public class MyProperties {
@TransferProperty
final boolean AnotherField = true;
}
The destinations that support retrying a failed transfer are listed in the table above. Such a destination can be configured as shown below:
<Job>
<TransferData>
...
<Destinations>
<JMS ...>
<RetryAttempts>20</RetryAttempts>
<RetryDelays>1000,5000,10000</RetryDelays>
The <RetryAttempts> element defines how many times TIF will try to resend the payload to the destination.
The time to wait between two attempts is defined within the <RetryDelays> element. This value contains comma-separated integer values, where each value defines a period in milliseconds.
In the example above, TIF will wait 1 second between the first and second attempt, 5 seconds between the second and third attempt, and 10 seconds between the third and every subsequent attempt up to the 20th. If the transfer still fails after the maximum number of attempts has been reached, the job will be marked as failed.
The default values are:
Retry Attempts: 10
Retry Delays: 1000,5000,10000,20000,40000,60000
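Expressed as configuration, these defaults correspond to:
<RetryAttempts>10</RetryAttempts>
<RetryDelays>1000,5000,10000,20000,40000,60000</RetryDelays>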
During the execution of the integration job of type "transfer data", TIF will by default start a READ transaction inside the ENOVIA/3DExperience database.
If you for some reason want to change this, you can do so as the example below illustrates.
<Job>
<TransferData>
<!-- used for read transaction (default) -->
<TransactionType>read</TransactionType>(1)
<!-- To start an update transaction -->
<TransactionType>update</TransactionType>(2)
<!-- To not start a transaction at all, use the value of none or inactive -->
<TransactionType>none</TransactionType>(3)
</TransferData>
</Job>
1 | Starts a read transaction |
2 | Starts an update transaction |
3 | Does not start a transaction |
This kind of integration job is used for launching one or more external processes, with input that can consist of files checked out from the ENOVIA/3DExperience database and/or metadata created via a payload configuration.
The output from the external processing can either be checked back into ENOVIA/3DExperience, for example if you convert files, or be collected and sent to a destination such as FTP or a web service.
Before digging into the details of the configuration format for such a job, we will start by showing an example of what such a configuration can look like.
The example below illustrates how to utilize the external process to convert files inside ENOVIA/3DExperience into different formats. For readability, only one conversion is included. It is however possible to run several executions to create multiple different output variants.
<Job>
<ExternalProcess>
<Prepare>
<MkDir dir="input" id="in" />
<MkDir dir="output" id="out" />
<Checkout format="generic" file="*.png,*.jpg,*.gif" into="${path:in}/${filename}" id="src-file" />
</Prepare>
<Exec forEachFileRef="src-file">
<Executable>/usr/bin/magick</Executable>
<Arguments>
<Arg>${path:src-file}</Arg>
<Arg>-resize</Arg>
<Arg>120x120</Arg>
<Arg>${path:out}${separator}${filename:src-file}</Arg>
</Arguments>
<PostActions>
<Checkin
src="${path:out}${separator}${filename:src-file}"
format="Thumbnail"
fileName="${filename:src-file}"
overwrite="true" />
</PostActions>
</Exec>
</ExternalProcess>
<Events>
<Error>
<SendMail>
<TO>...</TO>
<CC>...</CC>
<Subject>...</Subject>
<Message>
message...
${STACK_TRACE}
</Message>
</SendMail>
</Error>
</Events>
</Job>
Another example illustrating how to generate some output that later is transferred to another system is shown below:
<Job>
<ExternalProcess>
<Prepare>
<SetParam name="currentState" select="current" />
<SetParam name="someAttribute" select="attribute[Some Attribute]" />
<ExtractPayload id="payload" config="MyPayloadConfig.xml" into="data.xml" />
<Checkout id="src-file" format="generic" fileName="*.abc" into="${filename}" />
</Prepare>
<Exec>
<Executable>/usr/bin/app</Executable>
<Arguments>
<Arg>${path:payload}</Arg>
<Arg>${currentState}</Arg>
<Arg>${someAttribute}</Arg>
</Arguments>
</Exec>
<Output stdout="true" stderr="true">
<Match dir="/" name="*.*" />
</Output>
<Destinations>
<File id="..." />
</Destinations>
</ExternalProcess>
</Job>
Many configuration aspects are the same as for transfer data. The main difference is the <ExternalProcess> element, which is described in the next chapter.
The <ExternalProcess> element allows the following content:
Element | Description
---|---
<Prepare> | Defines any preparations required, for example checking out files from ENOVIA/3DExperience, extracting payload data, setting parameters or creating directories.
<Exec> | Describes which external process to execute and which arguments to pass. You can define multiple <Exec> elements if you need to invoke different processes or invoke the same process with different arguments.
<Output> | Optional element used to describe what output to collect from the external process. This is often used together with <Destinations>.
<Destinations> | Defines destinations. Please read more here.
Each of these elements is described in more detail in the next chapters.
Preparing the external process execution is normally required. You may for example do any of the following:
Whenever you launch an external process, a dedicated working directory is created and the process is launched with that directory as its working dir. You can create additional subdirectories herein. That is accomplished by the <MkDir> element.
You may check out one or more files from the object for which you launch the job. That is done via the <Checkout> element.
You can also create metadata content using a payload definition. This is accomplished via the <ExtractPayload> element.
You can also set additional job parameters that are available during the job execution. This is accomplished via the <SetParam> element.
For each of the prepare actions, you need to specify an id, which is later used when referencing the file or directory. The id is used in many macros to resolve its path, absolute path, name or extension.
Element | Attributes | Description
---|---|---
<MkDir> | dir, id | Creates a directory within the working dir. The dir attribute specifies the name of the directory.
<Checkout> | format, file/fileName, into, id | Checks out one or more files from ENOVIA.
<ExtractPayload> | config, into, id | Extracts the payload for the current object using the specified configuration and stores the result in the specified file.
<SetParam> | name, select | Can be used to set job parameters resolved from the current object around which the job is executed. When you only need one or a few attributes from the source object, it is easier to resolve these via SetParam than to use ExtractPayload. The latter should be used when you have more data to extract.
The <Exec> element defines what to execute and what arguments to supply to the external process.
This element supports the following attributes:
Attribute | Description
---|---
forEachFileRef | Defines an optional ID that refers to a file or files provided by a prepare action. If the file reference ID refers to a collection of files, the executable will be executed once for each of them.
processTimeout | Defines a timeout in milliseconds which, if exceeded, will cause the external process to be stopped. Note that by default, no limit is specified.
And the supported child elements are:
Element | Attributes | Description
---|---|---
<Executable> | - | Defines the executable to be invoked.
 | | Defines additional environment variables to be set during the process execution.
<Arguments> | - | Defines the arguments to pass to the external process (see below).
<PostActions> | - | Defines optional post actions to be performed. Currently, only Checkin is a supported post operation.
The <Arguments> element allows nested <Arg> elements. Each occurrence of such a child element will result in an argument being passed at the same position.
The <Arg> element may contain macros, which can be used to resolve values against the file ids from the preparation tasks OR against job parameters.
See this document for more information about job macros.
To reference file/directory information created by the preparation actions, see the table below:
Example | Description
---|---
${path:ID} | Resolves the path, relative to the working dir, of the file with the specified ID.
 | Resolves the absolute path to the file with the specified ID.
${filename:ID} | Returns the file name of the file with the specified ID.
 | Returns the file name, excluding its suffix, of the file with the specified ID.
 | Returns the suffix of the file with the specified ID.
Other supported macros:
Example | Description
---|---
 | The path separator for the current platform.
${separator} | The separator for the current platform.
 | The ID of the current business object.
 | The workdir for the current execution (the absolute path).
The currently supported post actions, nested below <PostActions>, are shown below:
Element | Attributes | Description
---|---|---
<Checkin> | src, format, fileName, overwrite | Checks in the source file into ENOVIA.
The <Output> element is used when you want to collect the output from your external process.
The output will typically be a ZIP file, unless you only want to collect the standard output OR standard error from the external process.
To get standard output or standard error as output without being zipped, set either the stdout OR the stderr attribute to true (not both), and do not include any Match child elements.
This element supports the following attributes:
Attribute | Description
---|---
stdout | Defines whether to include standard output. Default is false.
stderr | Defines whether to include standard error. Default is false.
returnFirstMatch | An option that allows bypassing the creation of a ZIP file and instead returning the first file found according to the match instructions. Note that setting this to TRUE also requires at least one nested <Match> element.
And the supported child elements are:
Element | Attributes | Description
---|---|---
<Match> | dir, name | Defines what content to include in the generated ZIP file.
The <Match> element defines which files are included in the generated ZIP file.
Use wildcards in the name attribute to match more than one file.
Configuration example:
<Output stdout="true">
<Match dir="${path:out-dir}" name="*.png" />
<Match dir="${path:out-dir}" name="*.gif" />
</Output>
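For comparison, a sketch using returnFirstMatch to return a single file instead of a ZIP; the directory id and file pattern are illustrative placeholders:
<Output returnFirstMatch="true">
  <Match dir="${path:out-dir}" name="*.pdf" />
</Output>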
You may in some cases need to create a job having customized logic. This is done by defining the class implementing the custom job logic inside the Executor tag as shown below.
<Job>
<Executor>com.acme.integration.executor.MyJobExecutor</Executor>
</Job>
The class must implement the interface com.technia.tif.enovia.job.JobExecutor.
This interface is defined as below.
package com.technia.tif.enovia.job;
import com.technia.tif.core.ConfigurationException;
import com.technia.tif.core.ExtractionException;
import com.technia.tif.core.ValidationException;
import com.technia.tif.core.annotation.API;
/**
*
* @author Technia
* @since 31 okt 2012
*/
@API
public interface JobExecutor {
/**
* Executes the job using the provided data as input for the execution
*
* @param job The job to be executed
* @throws ConfigurationException May be thrown to indicate error in the
* configuration
* @throws ExtractionException May be thrown to indicate error while
* extracting data
* @throws ValidationException May be thrown to indicate error when
* validating the data being used/sent.
*/
void perform(EnoviaJob job) throws ExtractionException,
ConfigurationException,
ValidationException;
}
Custom jobs support job events for handling errors etc. Read more in the "Job Events" chapter.
By default, TIF does NOT start a transaction inside the ENOVIA/3DExperience database for the custom executor.
To change this, you can do so as the example below illustrates.
<Job>
<Executor className="com.acme.MyExecutor">
<TransferData>
<!-- used for read transaction -->
<TransactionType>read</TransactionType>(1)
<!-- To start an update transaction -->
<TransactionType>update</TransactionType>(2)
<!-- To not start a transaction at all, use the value of none or inactive (default) -->
<TransactionType>none</TransactionType>(3)
</TransferData>
</Executor>
...
</Job>
1 | Starts a read transaction |
2 | Starts an update transaction |
3 | Does not start a transaction |
It is possible to store an outbound payload so that it is visible in the Admin UI. Use com.technia.tif.enovia.job.log.Logger.
For example:
import com.technia.tif.enovia.job.EnoviaJob;
import com.technia.tif.enovia.job.JobExecutor;
import com.technia.tif.enovia.job.log.Logger;
import com.technia.tif.enovia.payload.PayloadDataBuffer; // assumed package for PayloadDataBuffer, which is constructed below
public class MyJobExecutor implements JobExecutor {
@Override
public void perform(EnoviaJob job) {
Logger.storeOutboundPayload(job, new PayloadDataBuffer().append("Hello World!"));
}
}
The input payload provided by a parent job can be accessed using the com.technia.tif.enovia.payload.PayloadUtils utility class.
Example code:
package com.acme.tif.executor;
import com.technia.tif.core.ExtractionException;
import com.technia.tif.enovia.job.EnoviaJob;
import com.technia.tif.enovia.job.JobExecutor;
import com.technia.tif.enovia.payload.PayloadData;
import com.technia.tif.enovia.payload.PayloadUtils;
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.io.IOUtils;
public class MyJobExecutor implements JobExecutor {
@Override
public void perform(EnoviaJob job) throws ExtractionException {
PayloadData data = PayloadUtils.getInputPayload(job);
try (InputStream in = data.getInputStream()) {
// TODO: Do something with the input stream, e.g:
String string = IOUtils.toString(in, data.getEncoding());
} catch (IOException e) {
throw new ExtractionException(e);
}
}
}
If you have a license for the "TVC File Manager" component, you may be familiar with the feature called File Package Download.
This feature lets you configure the creation of a ZIP package containing files checked-in to objects in ENOVIA/3DExperience, files containing meta-data from ENOVIA/3DExperience and/or other files.
The File Package configuration format is described in the "TVC File Manager" documentation.
The File Package job works in the same way as the <TransferData> job; the only difference is that for the <FilePackage> element you need to define the file package creation rules.
Note that the File Package feature requires both a valid "TVC File Manager" license and that TIF is using the "tvc-filemanager-nnn.jar" file.
The latter can be fulfilled in one of the following two ways:
You configure the TIF start-script to include the ENOVIA/3DExperience web application, and this web application contains the JAR file under the WEB-INF/lib directory
You manually copy the "tvc-filemanager-nnn.jar" file to the $TIF_ROOT/modules/enovia/lib/custom folder
One way to create a file package is to point out a File Package configuration. The format of such configuration is defined in the TVC File Manager documentation.
Below is an example of how to accomplish this:
<Job>
<Name>PDX Creation</Name>
<FilePackage>
<Config>tvc:fpd:tvx:enc/PDX.xml</Config> (1)
<Destinations>
<File id="file-dest-3"/>
</Destinations>
</FilePackage>
</Job>
1 | Point out the FPD configuration |
The <FilePackage> element supports one attribute called transferZIP.
This attribute accepts a boolean value; if set to false, each file in the package will be transferred individually.
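For example, reusing the configuration from the example above but transferring each file individually instead of a single ZIP:
<Job>
  <Name>PDX Creation</Name>
  <FilePackage transferZIP="false">
    <Config>tvc:fpd:tvx:enc/PDX.xml</Config>
    <Destinations>
      <File id="file-dest-3"/>
    </Destinations>
  </FilePackage>
</Job>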
A special TIF handler is provided as part of the product. This handler can be used to specify which data, such as files and payload content, to include in the generated package.
An example is shown below:
<Job>
<Name>PDX Creation</Name>
<FilePackage>
<Content>
<Files dataSet="tvc:dataset/PartSpecifications.xml"
saveIn="specifications/${id}/${format}">
<FileFilter>
<Exclude>
<Name>*.exe</Name>
<Name>*.sh</Name>
<Name>*.abc</Name>
<Format>format_JT</Format>
<Format>format_Secret</Format>
</Exclude>
</FileFilter>
</Files>
<Files dataSet="tvc:dataset/RefDocs.xml"
saveIn="refdocs/${id}/${format}">
<FileFilter>
<Include>
<Name>*.docx</Name>
<Name>*.xlsx</Name>
<Name>*.doc</Name>
<Name>*.xls</Name>
<Name>*.pdf</Name>
</Include>
</FileFilter>
</Files>
<Payload config="tvc:payload/BOM.xml" saveIn="data" saveAs="bom.xml" />
<Payload config="tvc:payload/Specs.xml" saveIn="data" saveAs="specs.xml" />
<Payload config="tvc:payload/RefDocs.xml" saveIn="data" saveAs="ref-docs.xml" />
</Content>
<Destinations>
<File id="file-dest-3"/>
</Destinations>
</FilePackage>
</Job>
Within the <Content> element you may declare elements of type <Files> and <Payload>, which define the files to be added to the package as well as the payload/meta-data to be included.
For the Files element you need to point out a data-set, which should return business objects that contain files. Each file found on these objects will be added to the package, unless an inclusion/exclusion rule denies it.
Inclusion/exclusion can be specified with either a file-name pattern or based upon format.
The <FileFilter> can be shortened and written like these examples:
<FileFilter includeFormats="a,b,c" />
<FileFilter excludeFileNames="*.exe,*.dll" />
<FileFilter excludeFormats="a,b,c" />
<FileFilter includeFileNames="*.exe,*.dll" />
Note that the files found should be saved in a way that prevents them from overwriting each other.
Typically you need to use the object id and format as part of the folder name. The default saveIn value, unless specified, is files/${id}/${format}.
For payload data inclusion, you need to specify at minimum the payload configuration.
Optionally, you can specify the folder to save the data in with the saveIn attribute.
The saveAs attribute specifies the file name of the payload data. By default, the file name is constructed using the file name format payload_%03d.xml. The format will get the payload sequence number as input, e.g. the %03d will be converted to "000" for the first payload.
A simpler configuration allows pointing out a so-called File Package Download handler directly, without having to specify an FPD config that in turn points out the handler.
See below for an example:
<Job>
<Name>PDX Creation</Name>
<FilePackage>
<Handler className="com.acme.fpd.MyFPDHandler" /> (1)
<Destinations>
<File id="file-dest-3"/>
</Destinations>
</FilePackage>
</Job>
1 | Define the class name inline |
From a performance perspective, one should avoid synchronous integrations, since they may block the user from working smoothly with the ENOVIA/3DExperience system. However, in some situations it is necessary to run a job synchronously.
Remember that TIF runs in a different process than your ENOVIA/3DExperience app. If you have started a transaction and, within that transaction, perform a synchronous call to TIF, you need to extract the data needed on the TIF server (the payload) before you make the TIF call. Otherwise, due to transaction isolation, you may not be able to read the correct data, or you may in the worst case cause a deadlock.
On the caller side, you typically use the synchronous job client classes shown in the code example further below.
In order for the client to know where the TIF server is located, you need to set some environment variables.
The URL to the TIF server is resolved in the following order:
Java system parameter: System.getProperty("tif.server.url")
TVC Init parameter: tif.server.url (set in web.xml or /WEB-INF/classes/tvc.properties).
ENOVIA/3DExperience RPE Parameter: TIF_SERVER_URL
ENOVIA/3DExperience Ini Parameter: TIF_SERVER_URL
Environment Variable: TIF_SERVER_URL
The default is /enovia/tif-internal. Unless you have changed this via the module settings file, you do not need to set this. Otherwise, this value is resolved in the same order as above:
Java system parameter: System.getProperty("tif.server.contextPath")
TVC Init parameter: tif.server.contextPath (set in web.xml or /WEB-INF/classes/tvc.properties).
ENOVIA/3DExperience RPE Parameter: TIF_SERVER_CONTEXTPATH
ENOVIA/3DExperience Ini Parameter: TIF_SERVER_CONTEXTPATH
Environment Variable: TIF_SERVER_CONTEXTPATH
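As an illustration, the corresponding entries in /WEB-INF/classes/tvc.properties could look like the sketch below; the host and port are placeholders:
tif.server.url = http://tif-host:8181
tif.server.contextPath = /enovia/tif-internal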
You can embed the invocation of a synchronous job in your own code by using the TIF classes like shown below:
...
import com.technia.tif.enovia.api.synch.InvokeJobResponse;
import com.technia.tif.enovia.client.synch.SynchCreateNewJob;
...
Map<String, String[]> paramMap = ...
SynchCreateNewJob req = new SynchCreateNewJob();
req.setJobCfg(jobCfg);
req.setParamMap(paramMap);
//req.setPayload(aString);
InvokeJobResponse response;
try {
response = req.run();
} catch (XMLException | IOException e) {
throw new AppException("Unable to run integration", e);
}
if (response.getHasError()) {
throw new AppException(response.getErrorMessage());
}
String firstResult = response.getFirstResult().getResponse();
...
If you use the TVC Structure Browser component within your ENOVIA/3DExperience web application, you can use a built-in processor for name-allocation from an external system.
Please look into this tutorial for additional information and usage.