Business Objects, the Business Objects logo, Crystal Reports, and Crystal Enterprise are
trademarks or registered trademarks of Business Objects SA or its affiliated companies in the
United States and other countries. All other names mentioned herein may be trademarks of
their respective owners.
Business Objects owns the following U.S. patents, which may cover products that are offered
and sold by Business Objects: 5,555,403, 6,247,008 B1, 6,578,027 B2, 6,490,593 and
6,289,352.
Welcome to the Data Integrator Reference Guide. This guide provides
detailed information about the objects, data types, transforms, and functions
in the Data Integrator Designer.
This book contains the following chapters:
•Chapter 2: Data Integrator Objects — Describes options, properties, and attributes for objects, such as data flows and work flows.
•Chapter 3: Smart Editor — Describes the editor that can be used to create scripts, expressions, and custom functions.
•Chapter 4: Data Types — Describes the data types used in Data Integrator, and how Data Integrator handles data type conversions.
•Chapter 5: Transforms — Describes the transforms included with Data Integrator and how to use these transforms.
•Chapter 6: Functions and Procedures — Describes the functions included with Data Integrator and how to use these functions.
•Chapter 7: Data Integrator Scripting Language — Describes the Data Integrator scripting language and how you can use this language to create scripts, expressions, and custom functions.
•Chapter 8: Metadata in repository tables and views — Describes the repository’s reporting tables and views that you can use to analyze a Data Integrator application.
•Chapter 9: Locales and Multi-Byte Functionality — Describes how Data Integrator supports the setting of locales and multi-byte code pages for the Designer, Job Server, and Access Server.
•Chapter 10: Reserved Words — Lists words that have special meaning in Data Integrator. You cannot use these words in names that you create, such as names of data flows.
For source-specific information, such as information pertaining to a particular
back-office application, consult the supplement for that application.
Who should read this guide
This and other Data Integrator product documentation assumes the following:
•You are an application developer, consultant or database administrator
working on data extraction, data warehousing, or data integration.
•You understand your source and target data systems, DBMS, legacy
systems, business intelligence, and messaging concepts.
•You understand your organization’s data needs.
•You are familiar with SQL (Structured Query Language).
•If you are interested in using this product to design real-time processing,
you are familiar with:
•DTD and XML Schema formats for XML files
•Publishing Web Services (WSDL, HTTP/S and SOAP protocols, etc.)
•You are familiar with Data Integrator installation environments: Microsoft
Windows or UNIX.
Business Objects information resources
Consult the Data Integrator Getting Started Guide for:
•An overview of Data Integrator products and architecture
•Data Integrator installation and configuration information
•A list of product documentation and a suggested reading path
After you install Data Integrator, you can view technical documentation from
many locations. To view documentation in PDF format, you can:
•Select Start > Programs > Data Integrator version > Data Integrator
Documentation and choose:
•Release Notes
•Release Summary
•Technical Manuals
•Select one of the following from the Designer’s Help menu:
•Release Notes
•Release Summary
•Technical Manuals
•Select Help from the Data Integrator Administrator
You can also view and download PDF documentation by visiting Business
Objects Customer Support online. To access this Web site, you must have a
valid user name and password. To obtain your user name and password, go
to http://www.techsupport.businessobjects.com and click Register.
Chapter 2: Data Integrator Objects
This chapter contains reference information about general Data Integrator
objects, such as data flows, jobs, and work flows. Topics include:
•Characteristics of objects
•Descriptions of objects
Note: For information about source-specific objects, consult the reference
chapter of the Data Integrator supplement document for that source.
Characteristics of objects
This section discusses common characteristics of all Data Integrator objects.
Specifically, this section discusses:
•Object classes
•Object options, properties, and attributes
Object classes
An object’s class determines how you create and retrieve the object. There
are two classes of objects:
•Reusable objects
•Single-use objects
Reusable objects
After you define and save a reusable object, Data Integrator stores the
definition in the repository. You can then reuse the definition as often as
necessary by creating calls to the definition.
Most objects created in Data Integrator are available for reuse. You access
reusable objects through the object library.
A reusable object has a single definition; all calls to the object refer to that
definition. If you change the definition of the object in one place, and then
save the object, the change is reflected to all other calls to the object.
A data flow, for example, is a reusable object. Multiple jobs, such as a weekly
load job and a daily load job, can call the same data flow. If the data flow is
changed, both jobs call the new version of the data flow.
When you drag and drop an object from the object library, you are creating a
new reference (or call) to the existing object definition.
You can edit reusable objects at any time independent of the current open
project. For example, if you open a new project, you can go to the object
library, open a data flow, and edit it. The object will remain “dirty” (that is, your
edited changes will not be saved) until you explicitly save it.
Functions are reusable objects that are not available in the object library. Data
Integrator provides access to these objects through the function wizard
wherever they can be used.
Some objects in the object library are not reusable in all instances:
•Datastores are in the object library because they are a method for
categorizing and accessing external metadata.
•Built-in transforms are “reusable” in that every time you drop a transform,
a new instance of the transform is created.
Saving reusable objects
“Saving” a reusable object in Data Integrator means storing the language that
describes the object to the repository. The description of a reusable object
includes these components:
•Properties of the object
•Options for the object
•Calls this object makes to other objects
•Definition of single-use objects called by this object
If an object contains a call to another reusable object, only the call to the
second object is saved, not changes to that object’s definition.
Data Integrator stores the description even if the object does not validate.
Data Integrator saves objects without prompting you:
•When you import an object into the repository.
•When you finish editing:
•Datastores
•Flat file formats
•XML Schema or DTD formats
You can explicitly save the reusable object currently open in the workspace by
choosing Save from the Project menu. If a single-use object is open in the
workspace, the Save command is not available.
To save all objects in the repository that have changes, choose Save All from
the Project menu.
Data Integrator also prompts you to save all objects that have changes when
you execute a job and when you exit the Designer.
Single-use objects
Single-use objects appear only as components of other objects. They operate
only in the context in which they were created.
Saving single-use objects
“Saving” a single-use object in Data Integrator means storing the language
that describes the object to the repository. The description of a single-use
object can only be saved as part of the reusable object that calls the
single-use object.
Data Integrator stores the description even if the object does not validate.
Object options, properties, and attributes
Each object is associated with a set of options, properties, and attributes:
•Options control the operation of an object. For example, in a datastore,
an option is the name of the database to which the datastore connects.
•Properties document an object. For example, properties include the
name, description of an object, and the date on which it was created.
Properties merely describe an object; they do not affect an object’s
operation.
To view properties, right-click an object and select Properties.
•Attributes provide additional information about an object. Attribute values
may also affect an object’s behavior.
To view attributes, double-click an object from an editor and click the
Attributes tab.
Descriptions of objects
This section describes each Data Integrator object and tells you how to
access that object.
The following table lists the names and descriptions of objects available in
Data Integrator:
Object | Class | Description
Annotation | Single-use | Describes a flow, part of a flow, or a diagram in the workspace.
Batch Job | Reusable | Defines activities that Data Integrator executes at a given time, including error, monitor, and trace messages. Jobs can be dropped only in the project tree. The object created is a direct reference to the object in the object library. Only one reference to a job can exist in the project tree at one time.
Catch | Single-use | Specifies the steps to execute if a given error occurs while a job is running.
COBOL copybook file format | Reusable | Describes the structure defined in a COBOL copybook file.
Conditional | Single-use | Specifies the steps to execute based on the result of a condition.
Data flow | Reusable | Specifies the requirements for extracting, transforming, and loading data from sources to targets.
Datastore | Single-use | Specifies the connection information Data Integrator needs to access a database or other data source. Cannot be dropped.
Document | Reusable | Available in certain adapter datastores, documents are data structures that can support complicated nested schemas.
DTD | Reusable | A description of an XML file or message. Indicates the format an XML document reads or writes. See also: XML Schema.
File format | Reusable | Indicates how flat file data is arranged in a source or target file.
Function | Reusable | Returns a value.
Log | Single-use | Records information about a particular execution of a single job.
Message function | Reusable | Available in certain adapter datastores, message functions can accommodate XML messages when properly configured.
Outbound message | Reusable | Available in certain adapter datastores, outbound messages are XML-based, hierarchical communications that real-time jobs can publish to adapters.
Project | Single-use | Groups jobs for convenient access.
Query transform | Single-use | Retrieves a data set that satisfies conditions that you specify.
Real-time job | Reusable | Defines activities that Data Integrator executes on-demand. Real-time jobs are created in the Designer, then configured and run as services associated with an Access Server in the Administrator. Real-time jobs are designed according to data flow model rules and run as a request-response system.
Script | Single-use | Evaluates expressions, calls functions, and assigns values to variables.
Source | Single-use | An object from which Data Integrator reads data in a data flow.
Table | Reusable | Indicates an external DBMS table for which metadata has been imported into Data Integrator, or the target table into which data is or has been placed. A table is associated with its datastore; it does not exist independently of a datastore connection. A table retrieves or stores data based on the schema of the table definition from which it was created.
Target | Single-use | An object in which Data Integrator loads extracted and transformed data in a data flow.
Template table | Reusable | A new table you want added to a database. All datastores except SAP R/3 datastores have a default template that you can use to create any number of tables in the datastore. Data Integrator creates the schema for each instance of a template table at runtime. The created schema is based on the data loaded into the template table.
Transform | Reusable | Performs operations on data sets. Requires zero or more data sets; produces zero or one data set (which may be split).
Try | Single-use | Introduces a try/catch block.
While loop | Single-use | Repeats a sequence of steps as long as a condition is true.
Work flow | Reusable | Orders data flows and operations supporting data flows.
XML file | Single-use | A batch or real-time source or target. As a source, an XML file translates incoming XML-formatted data into data that Data Integrator can process. As a target, an XML file translates the data produced by a data flow, including nested data, into an XML-formatted file.
XML message | Single-use | A real-time source or target. As sources, XML messages translate incoming XML-formatted requests into data that a real-time job can process. As targets, XML messages translate the result of the real-time job, including hierarchical data, into an XML-formatted response and send the messages to the Access Server.
XML Schema | Reusable | A description of an XML file or message. Indicates the format an XML document reads or writes. See also: DTD.
XML template | Single-use | A target that creates an XML file that matches a particular input schema. No DTD or XML Schema is required.
Annotation
Class
Single-use
Access
Click the annotation icon in the tool palette, then click in the workspace.
Description
Annotations describe a flow, part of a flow, or a diagram in a workspace. An
annotation is associated with the job, work flow, or data flow where it
appears. When you import or export that job, work flow, or data flow, you
import or export associated annotations.
For more information, see “Creating annotations” on page 59 of the Data Integrator Designer Guide.
Note: An annotation has no options or properties.
Batch Job
Class
Reusable
Access
•In the object library, click the Jobs tab.
•In the project area, select a project and right-click Batch Job.
Description
Note: For information specific to SAP R/3, see Data Integrator Supplement
for SAP.
A batch job is a set of objects that you can schedule and execute together.
For Data Integrator to execute the steps of any object, the object must be part
of a job.
A batch job can contain the following objects:
•Data flows
•Sources
•Transforms
•Targets
•Work flows
•Scripts
•Conditionals
•Try/catch blocks
•While Loops
You can run batch jobs such that you can automatically recover from jobs that
do not execute successfully. During automatic recovery, Data Integrator
retrieves the results from steps that were successfully completed in the
previous run and executes all other steps. Specifically, Data Integrator
retrieves results from the following types of steps:
•Work flows
•Data flows
•Script statements
•Custom functions (stateless type only)
•SQL function
•EXEC function
•get_env function
•rand function
•sysdate function
•systime function
Batch jobs have the following built-in attributes:
Attribute | Description
Name | The name of the object. This name appears on the object in the object library and in the calls to the object.
Description | Your description of the job.
Date created | The date when the object was created.
Batch and real-time jobs have properties that determine what information
Data Integrator collects and logs when running the job. You can set the
default properties that apply each time you run the job or you can set
execution (run-time) properties that apply for a particular run. Execution
properties override default properties.
To set default properties, select the job in the project area or the object library,
right-click, and choose Properties to open the Properties window.
Execution properties are set as you run a job. To set execution properties,
right-click the job in the project area and choose Execute. The Designer
validates the job and opens the Execution Properties window.
You can set three types of execution properties:
•Parameters
•Trace properties
•Global variables
For an introduction to using global variables as job properties and
selecting them at runtime, see “Setting global variable values” on
page 286 of the Data Integrator Designer Guide.
Parameters
Use parameter options to help capture and diagnose errors using log, View
Data, or recovery options.
Data Integrator writes log information to one of three files (in the
$LINK_DIR\log\Job Server name\repository name directory):
•Monitor log file
•Trace log file
•Error log file
You can also select a system configuration and a Job Server or server group
from the Parameters tab of the Execution Properties window.
Select the Parameters tab to set the following options.

Option | Description
Monitor sample rate (# of rows) | Enter the number of rows processed before Data Integrator writes information to the monitor log file and updates job events. Data Integrator writes information about the status of each source, target, or transform. For example, if you enter 1000, Data Integrator updates the logs after processing 1,000 rows. The default is 1000. When setting the value, you must evaluate performance improvements gained by making fewer calls to the operating system against your ability to find errors quickly. With a higher monitor sample rate, Data Integrator collects more data before calling the operating system to open the file, so performance improves. However, with a higher monitor rate, more time passes before you are able to see any errors.
Print all trace messages | Select this check box to print all trace messages to the trace log file for the current Job Server. (For more information on log files, see “Log” on page 120.) Selecting this option overrides the trace properties set on the Trace tab.
Enable recovery | (Batch jobs only) Select this check box to enable the automatic recovery feature. When enabled, Data Integrator saves the results from completed steps and allows you to resume failed jobs. You cannot enable the automatic recovery feature when executing a job in data scan mode. See “Automatically recovering jobs” on page 437 of the Data Integrator Designer Guide for information about the recovery options. This property is only available as a run-time property; it is not available as a default property.
Recover from last failed execution | (Batch jobs only) Select this check box to resume a failed job. Data Integrator retrieves the results from any steps that were previously executed successfully and re-executes any other steps. This option is a run-time property. This option is not available when a job has not yet been executed or when recovery mode was disabled during the previous run.
System configuration | Select the system configuration to use when executing this job. A system configuration defines a set of datastore configurations, which define the datastore connections. For more information, see “Creating and managing multiple datastore configurations” on page 112 of the Data Integrator Designer Guide. If a system configuration is not specified, Data Integrator uses the default datastore configuration for each datastore. This option is a run-time property, and is only available if there are system configurations defined in the repository.
Job Server or server group | Select the Job Server or server group to execute this job. A Job Server is defined by a host name and port, while a server group is defined by its name. The list contains Job Servers and server groups linked to the job’s repository. For an introduction to server groups, see “Using Server Groups” on page 41 of the Data Integrator Administrator Guide. When selecting a Job Server or server group, remember that many objects in the Designer have options set relative to the Job Server’s location, for example: directory and file names for source and target files, and bulk load directories.
Trace properties
Use trace properties to select the information that Data Integrator monitors
and writes to the trace log file during a job. Data Integrator writes trace
messages to the trace log associated with the current Job Server and writes
error messages to the error log associated with the current Job Server.
To set trace properties, click the Trace tab. To turn a trace on, select the trace,
click Yes in the Value list, and click OK. To turn a trace off, select the trace,
click No in the Value list, and click OK.
You can turn several traces on and off.
Trace | Description
Row | Writes a message when a transform imports or exports a row.
Session | Writes a message when the job description is read from the repository, when the job is optimized, and when the job runs.
Work Flow | Writes a message when the work flow description is read from the repository, when the work flow is optimized, when the work flow runs, and when the work flow ends.
Data Flow | Writes a message when the data flow starts, when the data flow successfully finishes, or when the data flow terminates due to error. This trace also reports when the bulk loader starts, any bulk loader warnings occur, and when the bulk loader successfully completes.
Transform | Writes a message when a transform starts, completes, or terminates.
Custom Transform | Writes a message when a custom transform starts and completes successfully.
Custom Function | Writes a message for all user invocations of the AE_LogMessage function from custom C code.
SQL Functions | Writes data retrieved before SQL functions: every row retrieved by the named query before the SQL is submitted in the key_generation function; every row retrieved by the named query before the SQL is submitted in the lookup function (but only if PRE_LOAD_CACHE is not specified); and when mail is sent using the mail_to function.
SQL Transforms | Writes a message (using the Table_Comparison transform) about whether a row exists in the target table that corresponds to an input row from the source table. The trace message occurs before submitting the query against the target and for every row retrieved when the named query is submitted (but only if caching is not turned on).
SQL Readers | Writes the SQL query block that a script, query transform, or SQL function submits to the system. Also writes the SQL results.
SQL Loaders | Writes a message when the bulk loader starts, submits a warning message, completes successfully, or completes unsuccessfully (if the Clean up bulk loader directory after load option is selected). Additionally, for Microsoft SQL Server and Sybase ASE, writes a message when the bulk loader completes a successful row submission or encounters an error. This trace reports all SQL that Data Integrator submits to the target database, including: when a truncate statement executes if the Delete data from table before loading option is selected; any parameters included in PRE-LOAD SQL commands; before a batch of SQL statements is submitted; when a template table is created (and also dropped, if the Drop/Create option is turned on); and when a delete statement executes if auto correct is turned on (Informix environment only).
Optimized Dataflow | For Business Objects consulting and technical support use.
Tables | Writes a message when a table is created or dropped. The message indicates the datastore to which the created table belongs and the SQL statement used to create the table.
Scripts and Script Functions | Writes a message when Data Integrator runs a script or invokes a script function. Specifically, this trace logs a message when: the script is called (scripts can be started at any level, from the job level down to the data flow level, and additional, separate notation is made when a script is called from within another script); a function is called by the script; and the script successfully completes.
Access Server Communication | Writes messages exchanged between the Access Server and a service provider, including: the registration message, which tells the Access Server that the service provider is ready; the request the Access Server sends to the service to execute; the response from the service to the Access Server; and any request from the Access Server to shut down.
Trace Parallel Execution | Writes messages describing how data in a data flow is processed in parallel.
Catch
Class
Single-use
Access
With a work flow diagram in the workspace, click the catch icon in the tool
palette.
Description
A catch is part of a serial sequence called a try/catch block. The try/catch
block allows you to specify alternative work flows if errors occur while Data
Integrator is executing a job. Try/catch blocks “catch” groups of errors, apply
solutions that you provide, and continue execution.
For each catch in the try/catch block, specify the following:
•One exception or group of exceptions that the catch handles.
To handle more than one exception or group of exceptions, add more
catches to the try/catch block.
•The work flow to execute if the indicated exception occurs.
Use an existing work flow or define a work flow in the catch editor.
If an exception is thrown during the execution of a try/catch block, and if no
catch is looking for that exception, then the exception is handled by normal
error logic.
Do not reference output variables from a try/catch block in any subsequent
steps if you are using (for batch jobs only) the automatic recovery feature.
Referencing such variables could alter the results during automatic recovery.
Also, try/catch blocks can be used within any real-time job component.
However, try/catch blocks cannot straddle a real-time processing loop and the
initialization or clean up component of a real-time job.
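As an illustration, the work flow executed by a catch might contain a single script step that records the failure before the job continues. The following is a minimal sketch in the Data Integrator scripting language; the global variable $G_JOB_STATUS is a hypothetical name assumed to be defined at the job level, not a built-in:

    # Record that this catch handled the error ($G_JOB_STATUS is an assumed
    # job-level global variable); print() writes the string to the trace log.
    $G_JOB_STATUS = 'Load failed and was caught on ' || to_char(sysdate(), 'yyyy.mm.dd');
    print($G_JOB_STATUS);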
Catches have the following attribute:
Attribute | Description
Name | The name of the object. This name appears on the object in the diagram.
The following table describes exception groups and errors passed as
exceptions.
Exception | Number | Description
Catch All Exceptions | All | Catch all exceptions
Parser Errors | 1 | Parser errors
Resolver Errors | 2 | Resolver errors
Execution Errors | 5 | Internal errors that occur during the execution of a data movement specification
Job initialization | 50101 | Initialization for a job failed
Job cleanup | 50102 | Job cleanup failed
Job unknown | 50103 | Unknown job error
Job failure | 50104 | Session failure
Work flow initialization | 50201 | Initialization for a work flow failed
Work flow cleanup | 50202 | Work flow cleanup failed
Work flow unknown | 50203 | Unknown work flow error
Work flow failure | 50204 | Work flow failure
Function initialization | 50301 | Initialization of the function failed
Function cleanup | 50302 | Function cleanup failed
Function unknown | 50303 | Unknown function error
Function failure | 50304 | Function failure
Step failure | 50305 | Step failure
System function failure | 50306 | System function execution failure (function returned an error status)
System function initialization | 50307 | System function startup or launch failure
Data flow initialization | 50401 | Initialization for a data flow failed
Data flow open | 50402 | Cannot open data flow
Data flow close | 50403 | Data flow close failed
Data flow cleanup | 50404 | Data flow cleanup failed
Data flow unknown | 50405 | Data flow unknown
Data flow failure | 50406 | Data flow failure
Message bad | 50407 | Bad message error
Transform initialization | 50501 | Initialization for transform failed
Transform open | 50502 | Cannot open transform
Transform close | 50503 | Cannot close transform
Transform cleanup | 50504 | Cannot clean up transform
Transform unknown | 50505 | Unknown transform error
Transform failure | 50506 | Transform failure
OS unknown | 50601 | Unknown OS error
OS GetPipe | 50602 | OS GetPipe error
OS ReadPipe | 50603 | OS ReadPipe error
OS WritePipe | 50604 | OS WritePipe error
OS ClosePipe | 50605 | OS ClosePipe error
OS CreatePipe | 50606 | OS CreatePipe error
OS RedirectPipe | 50607 | OS RedirectPipe error
OS DuplicatePipe | 50608 | OS DuplicatePipe error
OS Stdin/stdout restore | 50609 | OS stdin/stdout restore error
OS CloseProcess | 50611 | OS CloseProcess error
OS CreateProcess | 50612 | OS CreateProcess error
OS ResumeProcess | 50613 | OS ResumeProcess error
OS SuspendProcess | 50614 | OS SuspendProcess error
OS TerminateProcess | 50615 | OS TerminateProcess error
Internal thread initialization | 50701 | Internal error: thread initialization
Internal thread join | 50702 | Internal error: thread join
Internal thread start | 50703 | Internal error: thread start
Internal thread resume | 50704 | Internal error: thread resume
Internal thread suspend | 50705 | Internal error: thread suspend
Internal mutex acquire | 50711 | Internal error: acquire mutex
Internal mutex release | 50712 | Internal error: release mutex
Internal mutex recursive acquire | 50713 | Internal error: acquire recursive mutex
Internal mutex recursive release | 50714 | Internal error: mutex recursive release
Internal trap get error | 50721 | Internal error: trap get
Internal trap set | 50722 | Internal error: trap set
Internal trap make | 50723 | Internal error: trap make
Internal condition wait | 50731 | Internal error: condition wait
Internal condition signal | 50732 | Internal error: condition signal
Internal queue read | 50741 | Internal error: queue read
Internal queue write | 50742 | Internal error: queue write
Internal queue create | 50743 | Internal error: queue create
Internal queue delete | 50744 | Internal error: queue delete
String conversion overflow | 50800 | String conversion overflow
Validation not done | 50801 | Validation not complete
Conversion error | 50802 | Conversion error
Invalid data values | 51001 | Invalid data values
Database Access Errors | 7 | Generic database access errors
Unsupported expression | 70101 | Unsupported expression
Connection broken | 70102 | Connection broken
Column mismatch | 70103 | Column mismatch
Microsoft SQL Server | 70201 | Microsoft SQL Server error
Oracle server | 70301 | Oracle server error
ODBC | 70401 | ODBC error
Sybase ASE server | 70601 | Sybase ASE SQL error
Sybase ASE server operation | 70602 | Sybase ASE operation error
File Access Errors | 8 | Errors accessing files through file formats
Open | 80101 | Cannot open file
Read NULL row | 80102 | Null row is encountered during read
Premature end of row | 80103 | Premature end of row
Position column mismatch | 80104 | Mismatch of the position of column
LSEEK BEGIN | 80105 | Cannot move file pointer to the beginning of the file
Repository Access Errors | 10 | Errors accessing the Data Integrator repository
Microsoft Connection Errors | 12 | Errors connecting to the Microsoft SQL Server
MSSQL Server Initialization | 120201 | Initialization for a Microsoft SQL Server failed
MSSQL Login Allocation | 120202 | Initialization for a Microsoft SQL Server failed
MSSQL Login Connection | 120203 | Connection to Microsoft SQL Server failed
MSSQL Database Context | 120204 | Failed to switch context to a Microsoft SQL Server database
ODBC Allocate environment | 120301 | ODBC Allocate environment error
Oracle connection | 120302 | Oracle connection failed
ODBC connection | 120401 | ODBC connection failed
Sybase ASE server context allocation | 120501 | Sybase ASE server failed to allocate global context
Sybase ASE server initialization | 120502 | Sybase ASE server failed to initialize CTLIB
Sybase ASE user data configuration | 120503 | Sybase ASE server failed to configure user data
Sybase ASE connection allocation | 120504 | Sybase ASE server failed to allocate connection structure
Sybase ASE login connection | 120505 | Sybase ASE connection failed
Predefined Transforms Errors | 13 | Predefined transforms errors
Column list is not bound | 130101 | Primary column information list is not bound
Key generation | 130102 | Transform Key_Generation error
Options not defined | 130103 | Transform options are not defined
ABAP Generation Errors | 14 | ABAP generation errors
ABAP syntax error | 140101 | ABAP syntax error
R/3 Execution Errors | 15 | R/3 execution errors
R/3 RFC open failure | 150101 | R/3 RFC connection failure
R/3 file open failure | 150201 | R/3 file open failure
R/3 file read failure | 150202 | R/3 file read failure
R/3 file close failure | 150203 | R/3 file close failure
R/3 file open failure | 150301 | R/3 file open failure
R/3 file read failure | 150302 | R/3 file read failure
R/3 file close failure | 150303 | R/3 file close failure
R/3 connection open failure | 150401 | R/3 connection open failure
R/3 system exception | 150402 | R/3 system exception
R/3 connection broken | 150403 | R/3 connection broken
R/3 connection retry | 150404 | R/3 connection retry
R/3 connection tranid missing | 150405 | R/3 connection transaction ID missing
R/3 connection has been executed | 150406 | R/3 connection has been executed
R/3 connection memory low | 150407 | R/3 connection memory low
R/3 connection version mismatch | 150408 | R/3 connection version mismatch
R/3 connection call not supported | 150409 | R/3 connection call not supported
Email Errors | 16 | Email errors
System Exception Errors | 17 | System exception errors
Engine Abort Errors | 20 | Engine abort errors
Conditional
Class
Single-use
Access
With a work flow diagram in the workspace, click the conditional icon in the
tool palette.
Description
A conditional implements if/then/else logic in a work flow.
For each conditional, specify the following:
•If: A Boolean expression defining the condition to evaluate. The
expression evaluates to TRUE or FALSE. You can use constants,
functions, variables, parameters, and standard operators to construct the
expression. For information about expressions, see Chapter 3: Smart
Editor.
Note: Do not put a semicolon (;) at the end of your expression in the If
box.
•Then: A work flow to execute if the condition is TRUE.
•Else: A work flow to execute if the condition is FALSE. This branch is
optional.
The Then and Else branches of the conditional can be any steps valid in a
work flow, including a call to an existing work flow.
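For example, a condition that tests whether a job should perform a full load might look like the following expression. The global variable name $G_LOAD_TYPE is illustrative, not a built-in; note that the expression has no trailing semicolon:

    $G_LOAD_TYPE = 'FULL'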
Conditionals have the following attribute:

Attribute | Description
Name | The name of the object. This name appears on the object in the diagram.
COBOL copybook file format
Class
Reusable
Access
In the object library, click the Formats tab.
Description
A COBOL copybook file format describes the structure defined in a COBOL
copybook file (usually denoted with a .cpy extension). You store templates for
file formats in the object library. You use the templates to define the file format
of a particular source in a data flow.
The following tables describe the Import (or Edit) COBOL copybook dialog
box options and the Source COBOL copybook Editor options.
Import or Edit COBOL copybook format options
The Import (or Edit) COBOL copybook format dialog boxes include options on
the following tabs:
•Format
•Data File
•Data Access
Format
The Format tab defines the parameters of the COBOL copybook format.
Table 2-1: Format tab

Option | Description
File name | Type or browse to the COBOL copybook file name (usually has a .cpy extension). This file contains the schema definition.
Expand occurs | Specifies the way to handle OCCURS groups. These groups can be imported either with each field within an OCCURS group getting a sequential suffix for each repetition: fieldname_1, fieldname_2, etc. (unrolled view), or with each field within an OCCURS group appearing only once in the copybook’s schema (collapsed view). For a collapsed view, the output schema matches the OCCURS group definition, and for each input record there will be several output records. If a copybook contains more than one OCCURS group, you must check this box.
Ignore redefines | Determines whether or not to ignore REDEFINES clauses.
Source format | The format of the copybook source code. Options include: Free — all characters on the line can contain COBOL source code; Smart mode — Data Integrator will try to determine whether the source code is in Standard or Free format (if this does not produce the desired result, choose the appropriate source format, standard or free, manually for reimport); Standard — the traditional (IBM mainframe) COBOL source format, where each line of code is divided into the following five areas: sequence number (1-6), indicator area (7), area A (8-11), area B (12-72), and comments (73-80).
Source codes [start] | Defines the start column of the copybook source file to use during the import. Typical value is 7 for IBM mainframe copybooks (standard source format) and 0 for free format.
Source codes [end] | Defines the end column of the copybook source file to use during the import. Typical value is 72 for IBM mainframe copybooks (standard source format) and 9999 for free format.
Data File
The Data File tab defines the parameters of the data file.
Table 2-2: Data File tab

Option | Description
Directory | Type or browse to the directory that contains the COBOL copybook data file to import. If you include a directory path here, then enter only the file name in the Name field.
File name | Type or browse to the COBOL copybook data file name. You can use variables or wild cards (* or ?). If you leave Directory blank, then type a full path and file name here.
Type | Specifies the record format — fixed, variable, or undefined: Fixed(F), Fixed-Blocked(FB), Variable(V), Variable-Blocked(VB), Variable-Spanned(VS), Variable-Blocked-Spanned(VBS), or Undefined(U) (specific to the program that created the file).
Use record length: Include or exclude | Specifies whether the length information in the beginning of a record is a part of total record length.
Use record length: Type | Specifies whether variable-length records of the data file contain information about the length of each record. The possible values are: 2-byte integer; 2-byte followed by 0x0000 (integer followed by two zero bytes); 4-byte integer; or None (no length information in the beginning of each record).
Other: Record size | Defines fixed record length in bytes. All records in the file have this length (padded, if necessary).
Record trailer length | Specifies the length of extra character padding in bytes at the end of each record.
Has record mark | Defines whether there is an extra byte in the beginning of each record's data.
Integer format | Describes how the existing data file stores binary data: Big endian — the most significant byte comes first; Little endian — the least significant byte comes first.
Encoding | Specifies the character encoding of character data in the data file.
Skip first | Defines the number of data records to skip before starting to process the file.
Read total | Defines the number of records to read and process.
Data Access
The Data Access tab specifies how Data Integrator accesses the data file. If
both check boxes are cleared, Data Integrator assumes the data file is on the
same computer as the Job Server.
Table 2-3: Data Access tab

Option | Description
FTP | Select to use FTP to access the data file.
Host | Type the computer (host) name, fully qualified domain name, or IP address of the computer where the data file resides.
User | Type the FTP user name.
Password | Type the FTP user password.
Directory | Type or browse to the directory that contains the COBOL copybook data file to import. If you include a directory path here, then enter only the file name in the Name field.
File name | Type or browse to the COBOL copybook data file name. You can use variables or wild cards (* or ?). If you leave Directory blank, then type a full path and file name here.
Custom | Select to use a custom executable to access the data file.
Executable | Type the name of the program to read the data file.
User | Type the user name.
Password | Type the password.
Arguments | Include any custom program arguments.
COBOL copybook source
The source editor includes the following COBOL copybook options on the
following tabs:
•Source
•Field clauses
•Data File
•Data Access
Source
Table 2-4: Source tab

Option | Description
Make port | Makes the source table an embedded data flow port. For more information, see Chapter 11, “Embedded Data Flows,” in the Data Integrator Designer Guide.

Performance:
Join rank | Indicates the rank of the source relative to other tables and files in the data flow when creating a join. Data Integrator joins sources with higher join ranks before joining sources with lower join ranks. For more information, see “Join ordering” on page 27 of the Data Integrator Performance Optimization Guide. Note: Must be a non-negative integer. When set to its default value (zero), Data Integrator determines join order.
Cache | Indicates whether Data Integrator should read the required data from the source and load it into memory. Because an inner source of a join must be read for each row of an outer source, you might want to cache a source when it is used as an inner source in a join and when the data retrieved will fit into available memory. There are two options: Yes — the source is always cached unless it is the outermost source in a join; No — the source is never cached.

Error handling:
Log data conversion warnings | Determines whether to include data-type conversion warnings in the Data Integrator error log. Defaults to Yes.
Maximum warnings to log | If Log data conversion warnings is enabled, you can limit how many warnings Data Integrator logs. Defaults to {no limit}.
Field clauses
The Field clauses tab displays the attributes for a selected column.
Table 2-5: Field clauses tab

Field clauses option | Description
Possible values | Enter values here to force Data Integrator to only process rows that contain the specified value(s). Separate multiple values with the pipe character (|). You can click the ellipses button to open the smart editor; for details on how to use the smart editor, see Chapter 3, “Smart Editor,” in the Data Integrator Reference Guide.
Level | The level number (01-50) assigned to the field in the source record definition.
Original name | The name of the field in the copybook.
Original picture | The PICTURE clause of the field in the copybook.
Original usage | The USAGE clause of the field in the copybook.
Min occurs | Minimum number of occurrences for this field (if this field is a part of an OCCURS group).
Max occurs | Maximum number of occurrences for this field (if this field is a part of an OCCURS group).
Occurs depending on | Specifies the repetition counter field name for the ODO (OCCURS DEPENDING ON).
Redefines | Specifies the name of another field that this one REDEFINES.
Sign separate | Specifies whether the sign is stored separately from the field’s value.
Sign position | Specifies whether the sign is LEADING or TRAILING.
Multiply by | Specifies whether the field needs to be scaled (multiplied or divided by a certain number). For example, if the field's PICTURE clause is 9(5)P(3), the value of the field from the data file will be multiplied by 1000.
Data File
See “Data File” on page 46.
Data Access
See “Data Access” on page 48.
Data flow
Class
Reusable
Access
•In the object library, click the Data Flows tab.
•With a work flow diagram in the workspace, click the data flow icon in the
tool palette.
Description
A data flow extracts, transforms, and loads data.
You can define parameters to pass values into the data flow. You can also
define variables for use inside the data flow.
When Data Integrator executes data flows, it optimizes the extract, transform,
and load requirements into commands to the DBMS and commands executed
internally. Where it can, Data Integrator runs these operations in parallel.
By definition, a data flow can contain the following objects:
Sources | Files, tables, XML files, XML messages (real-time jobs only), documents, or pre-defined template tables
Targets | Files, tables, XML files, XML messages (real-time jobs only), outbound messages, documents, XML templates, or template tables
Transforms | Query is the most commonly used transform

You can view the SQL code Data Integrator generates for table sources in
data flows and improve your data flow design accordingly. See “Viewing SQL”
on page 34 of the Data Integrator Performance Optimization Guide.
Data flows have several built-in properties.
Attribute | Description
Name | The name of the object. This name appears on the object in the object library and in the calls to the object.
Description | Your description of the data flow.

If you delete a data flow from the object library, calls to the object are replaced
with an icon indicating that the calls are no longer valid in the workspace.
Executing jobs only once
You can ensure that a job executes a data flow only one time by selecting the
Execute only once check box on the data flow Properties window. When you
select this check box, Data Integrator executes only the first occurrence of the
data flow and skips subsequent occurrences of it in the job. You might use
this feature when developing complex jobs with multiple paths, such as those
containing try/catch blocks or conditionals, and you want to ensure that Data
Integrator executes a particular data flow only once. Before selecting the
Execute only once option, note that:
•If you design a job to execute the same Execute only once data flow in
parallel flows, Data Integrator only executes the first occurrence of that
data flow, and you cannot control which one Data Integrator executes first.
Subsequent flows wait until Data Integrator processes the first one. The
engine provides a wait message for each subsequent data flow. Since
only one Execute only once data flow can execute in a single job, the
engine skips subsequent data flows and generates a second trace
message for each: “Data flow n did not run more than one time. It is an
execute only once flow.”
•The Execute only once data flow option overrides the Recover as a unit
work flow option and the Enable recovery job option.
For example, if you design a job to execute more than one instance of the
same Execute only once data flow and execute the job in recovery mode,
if the job fails Data Integrator checks to see if the data flow ran
successfully. If any instance ran successfully, Data Integrator displays the
following trace message: “Data flow n recovered successfully from
previous run.” If no instance ran successfully, Data Integrator executes
the data flow in the next run and skips subsequent instances of that data
flow.
Parallel processing
You can run certain transforms and functions in parallel by entering a number
in the Degree of parallelism box on your data flow Properties window. When
you drop a transform into the data flow and a function into each transform, the
number you enter in the Degree of parallelism box is the maximum number of
instances that can be generated for each transform or function in the data
flow. For more information, see “Degree of parallelism” on page 66 of the
Data Integrator Performance Optimization Guide.
Datastore
Class
Reusable
Access
In the object library, click the Datastores tab.
Description
A datastore provides a connection to a data source such as a database.
Through the datastore connection, Data Integrator can import descriptions of
the data source such as its metadata. When you specify tables as sources or
targets in a data flow, Data Integrator uses the datastore to determine how to
read data from or load data to those tables. In addition, some transforms and
functions require a datastore name to qualify the tables they access.
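For example, the lookup and key_generation functions identify their tables with a fully qualified datastore.owner.table name. The following is a minimal script sketch; the names ORA_DS, DBO, and CUSTOMER are placeholders, not defaults:

    # key_generation reads the highest existing value of CUST_ID from the
    # table (qualified by its datastore, ORA_DS) and returns the next key.
    $G_NEXT_KEY = key_generation('ORA_DS.DBO.CUSTOMER', 'CUST_ID', 1);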
Datastores have the following properties:
Property | Description
Name | The name of the object. This name appears on the object in the object library and in the calls to the object. You cannot change the name of a datastore after creation.
Description | Text that you enter to describe and document the datastore.
Date_created | The date that you created the datastore. You cannot change this value.
Note: If you delete a datastore from the object library, you must remove
references to the datastore from the following locations:
•Source or target tables using this datastore in your diagrams
•The lookup and key_generation functions and Key_Generation,
History_Preserving, Table_Comparison, and SQL transform references
Datastore Editor
The Datastore Editor consists of several dialog boxes.
Open the Datastore Editor by right-clicking the white space on the object
library’s Datastore tab and selecting New or right-clicking on the datastore
name and selecting Edit.
•Edit invokes the Edit Datastore DatastoreName dialog box
•New invokes the Create New Datastore dialog box
The top part of the dialog box displays the options that are minimally required
to create the datastore.
Initially, only two options appear on this resizable dialog box: Datastore Name and Datastore type. When you select a datastore type, the dialog box
displays other options relevant to that type. The combination of Datastore type and Database type determines all available options.
There are three categories of datastore types:
•Database datastores allow you to connect to supported databases.
•Adapter datastores allow you to connect to adapters.
•Application datastores, such as PeopleSoft and JDE One World, allow
you to connect to applications that run on databases. You can select
these applications by name from the Datastore type list.
For example, if you select Database as the Datastore type and Oracle as the
Database type, the dialog box displays options appropriate for an Oracle
connection.
Data Integrator supports changed-data capture (CDC) with Oracle databases,
so in this case, the Designer displays the Enable CDC check box.
The Enable CDC option is available only when you create a new datastore.
After you save a datastore, or when editing a datastore, the dialog box
disables the Enable CDC check box. Also, although a database datastore
may have multiple configurations of different database types, if you enable
CDC for a datastore all configurations must use the same database type.
Click Advanced >> to expand the dialog box. The dialog box displays a grid
of additional datastore options.
When you click the Advanced button, the dialog box expands and the button
text changes to Advanced <<. You can click Advanced << to hide the grid. If
the Database type supports multiple configurations, the dialog box also
enables the Edit… button.
The option list is organized into groups that you can expand or collapse to
improve readability. Each cell represents the value for the configuration
option. If the values are from a closed set, the cell becomes a combo box
when you click it. If the option values must be entered, the cell becomes an
edit box when you click it.
Click the Edit… button to add, edit, or remove configurations. The
Configurations for Datastore DatastoreName dialog box opens.
The Configurations for Datastore DatastoreName dialog box provides a tool
bar which includes commands to add, edit, and remove configurations. To
save a configuration, click Apply.
This dialog box displays datastore configurations as column headings with
datastore options in the left column. Each row represents a configuration
option. Every datastore has at least one configuration. Different options
appear depending upon the selected datastore type and (if applicable)
database type and version. The options appear under group headings (for
example, Connection, General, Locale).
With database datastores, you can use multiple configurations to minimize
your effort in porting existing jobs from one database type and version to
another. The Datastore Editor supports this by allowing you to choose a
configuration from which to copy options when you create another.
When you add subsequent configurations, Data Integrator modifies the
language of data flows that contain table targets and SQL transforms in the
datastore. It adds the target options and SQL transform text to additional
datastore configurations based on the target options and SQL transform text
that you defined for an existing configuration.
This functionality performs in the following ways:
•If the new configuration has the same or newer database type and
version as the old configuration, then Data Integrator automatically uses
the existing SQL transform and target table editor values.
•If the database type and version does not already exist (or if the version
is older than any existing configuration), you can use the values from an
existing database type and version by selecting them from the Use values from box.
•If you select the Restore values if they already exist check box, then
when the new configuration is created, Data Integrator looks to see if
SQL transform and target table editor values already exist for the new
database.
For example, suppose you are working in a multi-user situation and have a
local datastore with configurations for Oracle 9i and SQL Server 2000. You
also have existing data flows that use target tables or SQL transforms from
this datastore. You then delete Oracle 9i (perhaps because you checked out a
different version of the datastore from the central repository). Later, you want
to add an Oracle 9i configuration to this datastore.
When a version is deleted in this case, Data Integrator removes the
configuration, but not the target table and SQL transform values. If you select
Restore values if they already exist, then when you create the new
configuration, Data Integrator looks to see if the values already exist for the
database. If these values are not found, then the Designer uses the values
specified in the Use values from box.
When there are multiple configurations of different database types, the rows
show the options for all configurations.
If a property does not apply to a configuration, the cell displays N/A in gray
and does not accept input. Cells that correspond to a group header also do
not accept input, and are marked with hashed gray lines.
The functionality of this dialog box is a superset of the Datastore Editor.
When you open this dialog box, there will always be at least one configuration
that reflects the values from the Datastore Editor. This configuration is the
default configuration. In addition, the grid shows other configurations as
additional columns.
From left to right, the toolbar buttons are:

Create New Configuration: Adds a new configuration with no values.
Duplicate Configuration: Creates a new configuration with identical settings
as the selected configuration. The new configuration name is
"<old_name>_Copy_X", where X is a number starting from 1 that will create a
unique name.
Rename Configuration: Puts the input focus on the column header where you
can edit the name of the selected configuration.
Delete Configuration: Removes the configuration from the datastore and its
column from the grid.
Sort Configurations (Ascending): Arranges the configurations by their names
in ascending order. The arrangement is sensitive to the computer's system
locale.
Sort Configurations (Descending): Arranges the configurations by their names
in descending order. The arrangement is sensitive to the computer's system
locale.
Move Default to First: Moves the default configuration to the first column
in the list. Does not change the order of other columns.
Create New Alias: Adds a new alias name for the datastore. To map individual
configurations to an alias, enter the real owner name of the configuration
in the grid.
Delete Alias: Removes the selected alias name for the datastore.
Expand All Categories: Opens all the nodes so that every configuration
property is visible.
Collapse All Categories: Closes all the nodes so that every configuration
property is hidden.
Show Additional Details: A toggle that shows additional datastore options on
the dialog box: Database type, Number of Configurations, and CDC status.
Navigation box: This list contains the names of all configurations.
Selecting a name from this list will (if necessary) scroll the configuration
into view and highlight the configuration name in the grid.
These commands (except for the Navigation box) also appear on a shortcut
menu when you right-click the grid.
You can also manage configurations by directly manipulating the grid.
Double-click the column header to edit the configuration name. Click the
header and press Delete to delete the selected configuration. When there is
more than one configuration, click and drag the header left or right to
rearrange the order.
Data Integrator saves the configurations in the same sequence as shown in
the Configuration dialog box.
All configurations have an Alias category. After you create an alias (such as
alias1, alias2), navigate horizontally to each configuration and define the
owner name that the alias name maps to.
Note that owner names are not labeled in the Configurations dialog box.
When you delete an alias name, the operation applies to the datastore (all
configurations). Data Integrator removes the selected row.
Because there can only be one default configuration (used to browse, search,
and import metadata), if you select Yes for one configuration, the dialog box
sets the others to No. Also, you cannot change a Yes value directly to No;
the Designer ignores the change and displays a message box instructing you
to set Yes on another configuration instead.
After you click Apply to save your new configuration, Data Integrator copies
the existing SQL transform and target table editor values, and displays a
report of the modified objects in a popup dialog box and in the Output window.
The report contains the following information:
•Names of the data flows where language was modified
•Objects in the data flows that were affected
•Types of the objects affected (table target or SQL transform)
•Usage of the objects (source or target)
•Whether the objects have a bulk loader
•Whether the bulk loader option was copied
•Whether there were previous values
•Whether the previous values were restored
You can use this report as a guide to manually change the values for options
of targets and SQL transforms, as needed. Using the pop-up dialog box, you
can sort the results by clicking on the header and save the output to a file.
The popup appears after each newly added configuration.
Data Integrator also clears and displays the results in the Output window
after each newly added configuration. Because the Datastore Editor dialog
boxes are modal, you cannot see the entire Output window or manipulate it.
However, you can double-click one of the objects on this report to view the
data flow.
Importing database links
This section provides the steps you need to follow to import and configure a
database link in the Designer. For an introduction to Data Integrator support
for database links, see “Database link support for push-down operations
across datastores” on page 77 of the Data Integrator Performance
Optimization Guide.
To link a target datastore to a source datastore using a database link
1. From the Datastores tab in the object library, right-click a target datastore
and select Edit.
If the database type supports database links, the list of configuration
options includes the Linked Datastores option:
Note: The Datastore Editor allows you to edit database links on target
datastores using the default configuration only. If a target datastore has
multiple configurations (for example: Config1, Config2, and Config3),
make Config2 the default configuration before you attempt to import or
edit links that apply to Config2.
2. Click the Linked Datastores label.
The Add Linked Datastore dialog opens.
3. From the Add Linked Datastore dialog, select the datastore that your
target datastore will be linked to based upon the settings in the database
link you want to import.
For example, if your target datastore is DS_Emp and the database link
you will import associates the database emp with sales, select DS_Sales.
The datastores in the list box have database types that Data Integrator
supports for linked datastores.
Note: The Datastore Editor allows only one database link between a
target datastore and a source datastore pair. So if target datastore B
already has a link to source datastore A, you cannot import another
database link that associates datastore B with datastore A.
4. Click OK.
5. Notice that the Datastore Editor dialog displays the datastore that you
selected.
6. Select the list button to the right of Not Linked, or double-click the cell.
The Database Link dialog opens.
7. To link to a datastore or to change the existing link, select Use the
database link.
Note: To remove an existing link, select Do not link.
8. Select a database link from the list that Data Integrator reads from the
default configuration connection of the target datastore you are editing.
This list box contains links that you previously defined on the DBMS.
9. Select the source datastore configuration that you want to use with this
database link.
10. (Optional) Select Details to view additional information about the links or
to test them.
The check mark indicates the link to use. If you use the Details dialog,
click OK when you are finished.
11. From the Database Link dialog, click OK.
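Data Integrator imports database links; it does not create them. The link
that step 8 reads must already be defined on the DBMS. As a sketch, assuming
an Oracle target database and illustrative names, a DBA might create such a
link as follows:
CREATE DATABASE LINK dblink_sales
CONNECT TO sales_user IDENTIFIED BY sales_pwd
USING 'SALESDB';
Once the link exists in the default configuration connection of the target
datastore, it appears in the list described in step 8.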
Database datastores
You can define datastores so that Data Integrator can read from and write to
the following types of databases:
•Attunity Connector (use for mainframe systems)
•DB2
•IBM Connector (use for mainframe systems)
•Informix
•Memory
•Microsoft SQL Server
•ODBC
•Oracle
•Sybase ASE
•Teradata
Each database requires its own connection information in the datastore
definition. For more information on configuring datastores, see Chapter 5,
"Datastores," in the Data Integrator Designer Guide.
The following tables describe the datastore connection information and
options specific to each database.
Note: The Enable CDC option is available with a subset of the databases.
When the Enable CDC option is checked, the options in the following group
headings do not display because a CDC datastore is read-only and you can
only use it as a source: General, Bulk Loader, and FTP.
Table 2-6 Attunity Connector

Main window
Data source (Refer to the requirements of your database): Type the Attunity
data source name(s) as defined in Attunity Studio. Separate multiple data
source names with semicolons.
Host location (Computer name, fully qualified domain name, or IP address):
Type the name of the Attunity server computer (host).
Port (Positive integer): Type the port number for the Attunity server.
Attunity workspace (Refer to the requirements of your database): Type the
workspace name under which the data sources are defined in Attunity Studio.
User name (Alphanumeric characters and underscores): Type the user name of
the account through which Data Integrator accesses the database.
Password (Alphanumeric characters, underscores, and punctuation): Type the
user's password.
Enable CDC: Select to enable changed data capture for this datastore.

General (these options do not appear for CDC datastores)
Rows per commit (Positive integer): Enter the maximum number of rows loaded
to a target table before saving the data. This value is the default commit
size for target tables in this datastore. You can overwrite this value for
individual target tables.
Overflow file directory (Directory path or click Browse): Enter the location
of overflow files written by target tables in this datastore.

Locale
Language: See Chapter 9: Locales and Multi-Byte Functionality.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.
Table 2-7 DB2

Main window
Database version (DB2 UDB 6.1, DB2 UDB 7.1, DB2 UDB 7.2, DB2 UDB 8.x):
Select the version of your DB2 client. This is the version of DB2 that this
datastore accesses.
Data source (Refer to the requirements of your database): Type the data
source name defined in DB2 for connecting to your database. If you are going
to use the Auto correct load feature for DB2 targets, be sure that your data
source allows your user name to create or replace stored procedures.
User name (Alphanumeric characters and underscores): Enter the user name of
the account through which Data Integrator accesses the database.
Password (Alphanumeric characters, underscores, and punctuation): Enter the
user's password.
Enable CDC: Select to enable changed data capture for this datastore.

General
Rows per commit (Positive integer): Enter the maximum number of rows loaded
to a target table before saving the data. This value is the default commit
size for target tables in this datastore. You can overwrite this value for
individual target tables.
Bulk loader directory (Directory path or click Browse): Enter the location
where command and data files are written for bulk loading. For Solaris
systems, the path name must be less than 80 characters.
Overflow file directory (Directory path or click Browse): Enter the location
of overflow files written by target tables in this datastore.
Locale
Language: See Chapter 9: Locales and Multi-Byte Functionality.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.

Linked Datastores (Click here to create)
Datastore Name (Alphanumeric characters and underscores, or blank): The name
of a datastore to which you linked the current datastore configuration in
preparation to import a database link. See "Importing database links" on
page 63.

Bulk loader
Bulk loader user name (Alphanumeric characters and underscores, or blank):
The user name Data Integrator uses when loading data with the bulk loader
option. For bulk loading, you might specify a different user name, for
example, a user who has import and load permissions.
Bulk loader password (Alphanumeric characters, underscores, and punctuation,
or blank): The password Data Integrator uses when loading with the bulk
loader option.
DB2 server working directory (Directory path or click Browse): The working
directory for the load utility on the computer that runs the DB2 server. You
must complete this field whenever the DB2 server and the Data Integrator Job
Server run on separate machines.
FTP
FTP host name (Computer name, fully qualified domain name, or IP address):
If this field is left blank or contains the name of the computer (host)
where the Data Integrator Job Server resides, Data Integrator assumes that
DB2 and Data Integrator share the same computer and that FTP is unnecessary.
When FTP is unnecessary, all other FTP-related fields can remain blank. See
"Using the DB2 bulk load utility" on page 49 of the Data Integrator
Performance Optimization Guide for a discussion about when FTP is necessary.
FTP login user name (Alphanumeric characters and underscores, or blank):
Must be defined to use FTP.
FTP login password (Alphanumeric characters, underscores, and punctuation,
or blank): Must be defined to use FTP.

Table 2-8 IBM Connector

Main window
Data source (Refer to the requirements of your database): Type the Data
Source Name defined in the ODBC Administrator for connecting to your
database.
User name (Alphanumeric characters and underscores): Enter the user name of
the account through which Data Integrator accesses the database.
Password (Alphanumeric characters, underscores, and punctuation): Enter the
user's password.

General
Rows per commit (Positive integer): Enter the maximum number of rows loaded
to a target table before saving the data. This value is the default commit
size for target tables in this datastore. You can overwrite this value for
individual target tables.
Overflow file directory (Directory path or click Browse): Enter the location
of overflow files written by target tables in this datastore.
Bulk loader directory (Directory path or click Browse): Enter the location
where command and data files are written for bulk loading. For Solaris
systems, the path name must be less than 80 characters.

Locale
Language: See Chapter 9: Locales and Multi-Byte Functionality.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.
Table 2-9 Informix

Main window
Database version (Informix IDS 7.3, Informix IDS 9.2): Select the version of
your Informix client. This is the version of Informix that this datastore
accesses.
Data source (Refer to the requirements of your database): Type the Data
Source Name defined in the ODBC Administrator for connecting to your
database.
User name (Alphanumeric characters and underscores): Enter the user name of
the account through which Data Integrator accesses the database.
Password (Alphanumeric characters, underscores, and punctuation): Enter the
user's password.

General
Rows per commit (Positive integer): Enter the maximum number of rows loaded
to a target table before saving the data. This value is the default commit
size for target tables in this datastore. You can overwrite this value for
individual target tables.
Overflow file directory (Directory path or click Browse): Enter the location
of overflow files written by target tables in this datastore.

Locale
Language: See Chapter 9: Locales and Multi-Byte Functionality.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.

Table 2-10 Memory

Locale
Language: See Chapter 9: Locales and Multi-Byte Functionality.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.

Table 2-11 Microsoft SQL Server
Main window
Database version (Microsoft SQL Server 7.0, Microsoft SQL Server 2000):
Select the version of your SQL Server client. This is the version of SQL
Server that this datastore accesses.
Database server name (Computer name, fully qualified domain name, or IP
address): Enter the name of the machine where the SQL Server instance is
located.
Database name (Refer to the requirements of your database): Enter the name
of the database to which the datastore connects.
User name (Alphanumeric characters and underscores): Enter the user name of
the account through which Data Integrator accesses the database.
Password (Alphanumeric characters, underscores, and punctuation): Enter the
user's password.

Connection
Use Windows Authentication (No, Yes): Select whether to use Windows
authentication or SQL Server authentication to connect to this datastore.
Defaults to No.

General
Rows per commit (Positive integer): Enter the maximum number of rows loaded
to a target table before saving the data. This value is the default commit
size for target tables in this datastore. You can overwrite this value for
individual target tables.
Overflow file directory (Directory path or click Browse): Enter the location
of overflow files written by target tables in this datastore.

Locale
Language: See Chapter 9: Locales and Multi-Byte Functionality.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.

Linked Datastores (Click here to create)
Datastore Name (Alphanumeric characters and underscores, or blank): The name
of a datastore to which you linked the current datastore configuration in
preparation to import a database link. See "Importing database links" on
page 63.
ODBC
To define an ODBC datastore connection, you need to define a data source, a
user name, a password if applicable, and optionally a set of advanced
options.
Selecting an ODBC data source
You can select a data source in one of three ways. In the Data source field
of the ODBC datastore editor:
•From the drop-down list, click an existing data source, or
•Type the name of a data source, or
•Click ODBC Admin to launch the Windows ODBC Data Source
Administrator where you create or configure data sources. After closing
the ODBC Data Source Administrator, you can select a newly created
data source from the datastore editor’s drop-down list.
To configure data sources in UNIX, see “ODBC driver manager for UNIX”
on page 79.
Defining ODBC datastore options
To define options for an ODBC datastore, click Advanced. For each option to
configure, you can select a value from its drop-down list, or many options
allow you to type a custom value.
Most ODBC datastore options include the following values.

Automatic
When you create a new ODBC datastore, most options default to Automatic.
With this setting, if you do not know if the ODBC driver supports an option,
Data Integrator queries the driver to determine its capabilities. If the
driver supports that option, Data Integrator pushes down the operation to
the ODBC database. If the ODBC driver does not support that option, Data
Integrator executes the operation internally.
In some cases, you might need to specify an option other than Automatic to
circumvent possible inconsistencies with the ODBC driver. If you select
anything other than Automatic, Data Integrator does not query the driver for
that particular capability. Most options in the ODBC datastore editor
provide some or all of the following choices.

ODBC syntax
Data Integrator assumes the ODBC driver supports the function/capability and
uses ODBC syntax.
For example, for the ABSOLUTE function, the syntax would be:
{fn abs (TAB1.COL1)}

SQL-92
Data Integrator assumes the ODBC driver supports the function/capability and
uses SQL-92 syntax.
For example, when Data Integrator generates an explicit CONVERT function,
the syntax would be:
CAST (TAB1.VC_COL AS SQL_INTEGER)

No
Data Integrator assumes the ODBC driver does not support the function/
capability and executes it internally.
Custom
Many functions allow you to type in the specific function call to use for
that option. Data Integrator assumes the ODBC driver supports the function/
capability.
Note: You cannot specify the signature of the function; it will be the same
as in the ODBC signature.
For example, for the string function Upper case, instead of using {fn
ucase(...)}, you can type upper in the Upper case option field. Data
Integrator will generate:
upper(TAB1.VC_COL)
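To summarize how these choices affect the generated SQL, consider the Upper
case option applied to the column TAB1.VC_COL (a sketch based on the
examples above):
ODBC syntax: {fn ucase(TAB1.VC_COL)}
Custom (upper): upper(TAB1.VC_COL)
No: no uppercase SQL is generated; Data Integrator performs the operation
internally.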
The following table describes all of the fields and options in the ODBC
datastore editor.

Table 2-12 ODBC

Main window
Data source (Refer to the requirements of your database): Select or type the
Data Source Name defined in the ODBC Administrator for connecting to your
database.
User name (Alphanumeric characters and underscores): Enter the user name of
the account through which Data Integrator accesses the database.
Password (Alphanumeric characters, underscores, and punctuation): Enter the
user's password.
ODBC Admin button: Click to launch the Windows ODBC Data Source
Administrator where you create or configure data sources. After closing the
ODBC Data Source Administrator, you can select a newly created data source
from the datastore editor's drop-down list.

Connection
Additional connection information (Alphanumeric characters and underscores,
or blank): Enter information for any additional parameters that the data
source supports (parameters that the data source's ODBC driver and database
support). Use the format:
<parameter1=value1; parameter2=value2>

General
Rows per commit (Positive integer): Enter the maximum number of rows loaded
to a target table before saving the data. This value is the default commit
size for target tables in this datastore. You can overwrite this value for
individual target tables.
Overflow file directory (Directory path or click Browse): Enter the location
of overflow files written by target tables in this datastore.

Locale
Language: See Chapter 9: Locales and Multi-Byte Functionality.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.

Capability Support
Array fetch (Automatic, No): If you encounter errors when reading from an
ODBC datastore, especially if the error message involves the ODBC call
SQLFetchScroll, it is safe to assume that your ODBC driver does not support
array fetch. In this case, select the No value to turn off Data Integrator's
array fetch capability. Data Integrator then fetches one row at a time from
the ODBC data source. The No value also causes all Source Table Editors and
SQL Transform Editors that use this ODBC datastore to not display the Array
fetch size performance option.
Parameterized SQL (Automatic, No): By using parameterized SQL, Data
Integrator generates SQL statements with parameters instead of literal
values, which can significantly improve performance.
Outer join (Automatic, ODBC syntax, SQL-92 syntax, No): Determines whether
the ODBC driver supports outer join syntax.
Auto commit (Automatic, Yes, No): Determines whether the ODBC driver
supports auto commit.
Math Function Support
Absolute (Automatic, ODBC syntax, No, custom): Returns the absolute value of
an input number.
Ceiling (Automatic, ODBC syntax, No, custom): Returns the smallest integer
value greater than or equal to an input number.
Floor (Automatic, ODBC syntax, No, custom): Returns the largest integer
value less than or equal to an input number.
Round (Automatic, ODBC syntax, No, custom): Rounds a given number to the
specified precision.
Truncate (Automatic, ODBC syntax, No, custom): Truncates a given number to
the specified precision.

String Function Support
Lower case (Automatic, ODBC syntax, No, custom): Changes the characters in a
string to lowercase.
Upper case (Automatic, ODBC syntax, No, custom): Changes the characters in a
string to uppercase.
Rtrim blanks (Automatic, ODBC syntax, No, custom): Removes blank characters
from the end of a string.
Ltrim blanks (Automatic, ODBC syntax, No, custom): Removes blank characters
from the start of a string.
Length (Automatic, ODBC syntax, No, custom): Returns the number of
characters in a given string.
Substring (Automatic, ODBC syntax, No, custom): Returns a specific portion
of a string starting at a given point in the string.
Date Function Support
System date (Automatic, ODBC syntax, No, custom): Returns the current date
as listed by the Job Server's operating system.
System time (Automatic, ODBC syntax, No, custom): Returns the current time
as listed by the operating system.
Week (Automatic, ODBC syntax, No, custom): Determines the week in the year
in which the given date falls.
Month (Automatic, ODBC syntax, No, custom): Determines the month in which
the given date falls.
Quarter (Automatic, ODBC syntax, No, custom): Determines the quarter in
which the given date falls.
Year (Automatic, ODBC syntax, No, custom): Determines the year in which the
given date falls.
Day of month (Automatic, ODBC syntax, No, custom): Determines the day in the
month on which the given date falls.
Day of year (Automatic, ODBC syntax, No, custom): Determines the day in the
year on which the given date falls.

Aggregate Function Support
Average (Automatic, SQL-92 syntax, No): Calculates the average of a given
set of values.
Count (Automatic, SQL-92 syntax, No): Counts the number of values in a table
column.
Max (Automatic, SQL-92 syntax, No): Returns the maximum value from a list.
Min (Automatic, SQL-92 syntax, No): Returns the minimum value from a list.
Sum (Automatic, SQL-92 syntax, No): Calculates the sum of a given set of
values.
Miscellaneous
Date format (yyyy.mm.dd or other combinations): Enter a date format
supported by the data source (a date format that the data source's ODBC
driver and database support).
Time format (hh24:mi:ss or other combinations): Enter a time format
supported by the data source (a time format that the data source's ODBC
driver and database support).
Date-time format (yyyy.mm.dd hh24:mi:ss or other combinations): Enter a
date-time format supported by the data source (a date-time format that the
data source's ODBC driver and database support).
Decimal separator (. or ,): Enter the character that the data source uses to
separate the decimal portion of a number.
Data type conversion support (Automatic, ODBC syntax, SQL-92 syntax, No):
When there is a data type mismatch in an expression, Data Integrator
automatically generates an explicit convert function call.
NVL support (Automatic, ODBC syntax, No, custom): If the input value is
NULL, replace it with the specified value.
Ifthenelse support (Yes, No): Allows conditional logic in mapping and
selection operations.
ODBC driver manager for UNIX
To take advantage of Data Integrator’s built-in driver manager for UNIX, you
add data source entries in Data Integrator's odbc.ini file. The Data
Integrator installation also provides a sample file, odbc.ini.sample, that
you can use as a template and save as odbc.ini.
The UNIX ODBC driver manager provides the option to load any ODBC driver
library other than DataDirect's library when the ODBC driver satisfies the
following conditions:
•The ODBC driver is at least ODBC 2.0 compliant
•The ODBC driver library is thread-safe
UNIX ODBC driver manager configuration file
Similar to the way other driver managers on UNIX define an ODBC data
source, Data Integrator provides an odbc.ini file. Add each ODBC data source
in the Data Integrator datastore(s) to the configuration file in $LINK_DIR/bin/
odbc.ini. The following table lists the data source configuration parameters
for odbc.ini (and odbc.ini.sample):
Driver (Required): A full path including the ODBC driver library name. The
directory containing the dependent libraries must be in the shared library
path (for AIX, LIBPATH; for Solaris or Linux, LD_LIBRARY_PATH; for HP-UX,
SHLIB_PATH). Check vendor documentation for what you need to add to the
shared library path.
Example: Driver=/home/mysql/myodbc/lib/libmyodbc3_r.so

OdbcConformanceLevel (Optional): A decimal value specifying the ODBC
conformance level of the driver. The default value is 0, in which case the
driver manager detects the level by loading 2.x followed by 3.x functions
from the driver. When any value greater than or equal to 4.0 is specified,
the driver manager prints a run-time error.
Examples: OdbcConformanceLevel=0
OdbcConformanceLevel=3.0

LazyLoading (Optional): A Boolean TRUE/YES or FALSE/NO. The default value is
FALSE. The UNIX ODBC Driver Manager loads the ODBC driver and instructs the
operating system to load all of its dependent libraries. This flag is useful
when certain dependent libraries of the ODBC driver are not required and the
ODBC vendor recommends loading the library in lazy mode.
Example: LazyLoading=TRUE
In addition to editing the data source in $LINK_DIR/bin/odbc.ini, follow the
instructions provided by your ODBC driver vendor for UNIX, which usually
include:
•Adding certain directory locations to the shared library path
•Exporting the ODBCINI environment variable to point to the vendor's
odbc.ini file
•Adding the data source to the vendor's odbc.ini file
All *.ini files have the same syntax requirements. Enclose data source names
in square brackets. Properties follow on subsequent lines and use
PropertyName = PropertyValue. For example, a sample entry in the
$LINK_DIR/bin/odbc.ini file might be:
[test_mysql]
Driver = /home/mysql/myodbc/lib/libmyodbc3_r.so
In this example, test_mysql is the name of a data source that can be loaded
using the libmyodbc3_r.so library file. Default values apply when optional
properties are left blank.
Follow these guidelines when editing the $LINK_DIR/bin/odbc.ini file:
•Each data source name must at least have a driver property defined,
which allows the driver manager to load the driver when connecting to the
database.
•The pound sign (#) as the first character in any line denotes a comment.
•All leading blanks and trailing blanks in data source names and
properties are ignored.
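Putting these guidelines together, a $LINK_DIR/bin/odbc.ini with two data
sources might look like the following sketch (the second data source name
and driver path are illustrative):
# ODBC data sources used in Data Integrator datastores
[test_mysql]
Driver = /home/mysql/myodbc/lib/libmyodbc3_r.so

[test_vendor]
Driver = /opt/vendor/odbc/lib/libvendorodbc.so
LazyLoading = TRUE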
ODBC driver compliance level detection
An ODBC driver can be compliant with either 2.x or 3.x or both. The UNIX
ODBC driver manager detects whether the driver is 2.x or 3.x and loads the
respective compatible ODBC API functions. When the driver is both 2.x and
3.x compliant, the driver manager loads only the 2.x ODBC API. However, you
can override this behavior by specifying 3.0 for the OdbcConformanceLevel
parameter in the odbc.ini file. As a result, the ODBC driver manager loads
only 3.x ODBC API functions.
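For example, to force the driver manager to load only the 3.x ODBC API
functions for a driver that is both 2.x and 3.x compliant, the entry might
look like this (the driver path is illustrative):
[test_mysql]
Driver = /home/mysql/myodbc/lib/libmyodbc3_r.so
OdbcConformanceLevel = 3.0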
Upgrading and migration
Customers migrating from earlier versions of Data Integrator (prior to 11.x)
can use one of the following methods to retain the original behavior:
When using DataDirect ODBC:
1. Overwrite the existing installation so that the UNIX installer takes care
of setting UseDIUNIXODBCDriverManager to FALSE in DSConfig.txt, so that
DataDirect's driver manager loads by default for every data source
configured.
OR
2. If you are doing a fresh installation, manually set
UseDIUNIXODBCDriverManager to FALSE in the [AL_Engine] section of
DSConfig.txt.
OR
3. Do not use the DataDirect ODBC Driver Manager at all: simply add all the
data sources used in existing DI datastores to the $LINK_DIR/bin/odbc.ini
file.
When using the UNIX_DRIVER_MANAGER_LIB property:
1. Overwrite the existing installation to retain the
UNIX_DRIVER_MANAGER_LIB property in DSConfig.txt.
OR
2. If you are doing a fresh installation, copy the
UNIX_DRIVER_MANAGER_LIB property from DSConfig.txt in the old
installation. This setting lets Data Integrator load the library property
for every ODBC data source.
OR
3. Add all the data sources used in existing DI datastores to the
$LINK_DIR/bin/odbc.ini file.
Working with data sources used in a datastore
When UNIX_ODBC_DRIVER_MANAGER_LIB is specified in DSConfig.txt, Data
Integrator assumes the user wants to use a third-party ODBC driver manager
and automatically disables its own ODBC driver manager. Then, for every data
source name mentioned in an ODBC datastore, Data Integrator loads the
library named by the UNIX_ODBC_DRIVER_MANAGER_LIB property.
If the option UseDIUNIXODBCDriverManager is FALSE, then Data Integrator
assumes the user wants to use DataDirect as the ODBC driver manager. Then,
for every data source name mentioned in an ODBC datastore, Data Integrator
loads the DataDirect driver manager library.
If UseDIUNIXODBCDriverManager is TRUE, then Data Integrator searches the
$LINK_DIR/bin/odbc.ini file and loads the library mentioned in the Driver
property.
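These keys live in the [AL_Engine] section of DSConfig.txt. As a sketch of
the three cases just described (the library path is illustrative; only one
case applies at a time):
# Case 1: Data Integrator's own UNIX ODBC driver manager reads
# $LINK_DIR/bin/odbc.ini
UseDIUNIXODBCDriverManager=TRUE

# Case 2: DataDirect is the ODBC driver manager
UseDIUNIXODBCDriverManager=FALSE

# Case 3: a third-party ODBC driver manager library is loaded for
# every data source
UNIX_ODBC_DRIVER_MANAGER_LIB=/opt/odbc/lib/libodbc.so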
Examples for configuring the ODBC driver
The following examples apply to Linux.
You can often combine the configuration files (odbc.ini) of different ODBC
drivers into one single file and point to this file in the ODBCINI
environment variable. Verify this functionality by referring to your ODBC
vendor's documentation.
To configure MySQL ODBC on Linux
1. Add the data source to the Data Integrator UNIX ODBC driver manager
configuration file ($LINK_DIR/bin/odbc.ini). For example:
[test_mysql]
Driver = /home/mysql/myodbc/lib/libmyodbc3_r.so
2. Add the data source to the MyODBC driver configuration file as:
[test_mysql]
Driver = /home/mysql/myodbc/lib/libmyodbc3_r.so
SERVER = mysql_host
PORT = 3306
USER = test
Password = test
Database = test
OPTION = 3
SOCKET =
3. Add the following environment settings to .profile:
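The exact settings depend on your driver. Based on the vendor guidelines
earlier in this section (extend the shared library path and export ODBCINI),
a typical sketch might be (the paths are illustrative):
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/mysql/myodbc/lib
export ODBCINI=/home/mysql/myodbc/odbc.ini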
Oracle
The following table lists the options specific to Oracle.

Table 2-13 Oracle

Main window
Database version (Oracle 8.0, Oracle 8.1, Oracle 9i, Oracle 10g): Select the
version of your Oracle client. This is the version of Oracle that this
datastore accesses.
Connection name (Refer to the requirements of your database): Enter an
existing Oracle connection through which Data Integrator accesses sources
and targets defined in this datastore.
User name (Alphanumeric characters and underscores): Enter the user name of
the account through which Data Integrator accesses the database.
Password (Alphanumeric characters, underscores, and punctuation): Enter the
user's password.
Enable CDC: Select to enable changed data capture for this datastore.

General
Rows per commit (Positive integer): Enter the maximum number of rows loaded
to a target table before saving the data. This value is the default commit
size for target tables in this datastore. You can overwrite this value for
individual target tables.
Bulk loader directory (Directory path or click Browse): Enter the location
where command and data files are written for bulk loading. For Solaris
systems, the path name must be less than 80 characters.
Overflow file directory (Directory path or click Browse): Enter the location
of overflow files written by target tables in this datastore.

Locale
Language: See Chapter 9: Locales and Multi-Byte Functionality.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.

Linked Datastores (Click here to create)
Datastore Name (Alphanumeric characters and underscores, or blank): The name
of a datastore to which you linked the current datastore configuration in
preparation to import a database link. See "Importing database links" on
page 63.

Table 2-14 Sybase ASE
Main window
Database version (Sybase ASE 11.x, Sybase ASE 12.x): Select the version of
your Sybase ASE client. This is the version of Sybase that this datastore
accesses.
Database server name (Computer name): Enter the name of the computer where
the Sybase ASE instance is located.
Database name (Refer to the requirements of your database): Enter the name
of the database to which the datastore connects.
User name (Alphanumeric characters and underscores): Enter the user name of
the account through which Data Integrator accesses the database.
Password (Alphanumeric characters, underscores, and punctuation): Enter the
user's password.
General
Rows per commit (Positive integer): Enter the maximum number of rows loaded
to a target table before saving the data. This value is the default commit
size for target tables in this datastore. You can overwrite this value for
individual target tables.
Overflow file directory (Directory path or click Browse): Enter the location
of overflow files written by target tables in this datastore.

Locale
Language: See Chapter 9: Locales and Multi-Byte Functionality.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.

Table 2-15 Sybase IQ

Main window
Database version (Sybase IQ 12.5, Sybase IQ 12.6): Select the version of
your Sybase IQ client. This is the version of Sybase IQ that this datastore
accesses.
Data source (Refer to the requirements of your database): Select or type the
Data Source Name defined in the ODBC Administrator for connecting to your
database.
User name (Alphanumeric characters and underscores): Enter the user name of
the account through which Data Integrator accesses the database.
Password (Alphanumeric characters, underscores, and punctuation): Enter the
user's password.

General
Rows per commit (Positive integer): Enter the maximum number of rows loaded
to a target table before saving the data. This value is the default commit
size for target tables in this datastore. You can overwrite this value for
individual target tables.
Bulk loader directory (Directory path or click Browse): Enter the location
where command and data files are written for bulk loading. For Solaris
systems, the path name must be less than 80 characters.
Overflow file directory (Directory path or click Browse): Enter the location
of overflow files written by target tables in this datastore.

Locale
Language: See Chapter 9: Locales and Multi-Byte Functionality.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.

Bulk loader
JS and DB on same machine (Yes, No): If the Job Server and database server
are not on the same computer, you must configure Data Integrator to transfer
via FTP the data file generated on the Job Server to the database server.
Therefore, the performance of the bulk loader is significantly better if the
Job Server and the database are on the same machine.
Use named pipe (Yes, No): Applies to Sybase IQ database version 12.6 only.
If the Job Server and database server are on the same computer, select Yes
to eliminate the need to write a data file to disk, which can improve
performance. If a data file is required for Sybase IQ database recovery,
select No. Defaults to No.
FTP
If the Job Server and Sybase IQ database server are not on the same machine,
Data Integrator generates a data file and transfers it via FTP to the
database machine for loading. You set these FTP parameters in the Sybase IQ
datastore editor.
FTP host name (Computer name, fully qualified domain name, or IP address):
Type the name of the Sybase IQ server computer (host). If left blank and
Data Integrator needs this FTP information for bulk loading, it generates a
validation error.
FTP login user name (Alphanumeric characters and underscores, or blank):
Must be defined to use FTP.
FTP login password (Alphanumeric characters, underscores, and punctuation,
or blank): Must be defined to use FTP.
FTP host working directory (Absolute file path): The location on the
database server to which Data Integrator transfers the data file.
Note: Configure the FTP server to accept an absolute path.
Table 2-16 Teradata

Main window
Database version (Teradata 2.5): Select the version of your Teradata client.
This is the version of Teradata that this datastore accesses.
Data source (Refer to the requirements of your database): Type the Data
Source Name defined in the ODBC Administrator for connecting to your
database.
User name (Alphanumeric characters and underscores): Enter the user name of
the account through which Data Integrator accesses the database.
Password (Alphanumeric characters, underscores, and punctuation): Enter the
user's password.

General
Bulk loader directory (Directory path or click Browse): Enter the location
where command and data files are written for bulk loading. For Solaris
systems, the path name must be less than 80 characters.
Overflow file directory (Directory path or click Browse): Enter the location
of overflow files written by target tables in this datastore.

Locale
Language: See Chapter 9: Locales and Multi-Byte Functionality.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.

Teradata
Log directory (Directory path or click Browse): The directory in which to
write log files.
Tdpld (Alphanumeric characters, underscores, and punctuation): The password
Data Integrator uses when loading with the bulk loader option.
Application datastores
The information you must enter for a datastore depends on the type of
datastore to which you are connecting. Application datastore types include:
•JDE OneWorld
•JDE World
•Oracle Applications
•PeopleSoft
•R/3
•SAP BW Source
•SAP BW Target
•Siebel
After you create a datastore, you can import metadata about the objects,
such as tables and functions, into that datastore. See Chapter 5,
"Datastores," in the Data Integrator Designer Guide.
JDE OneWorld
Datastore configuration options for this datastore type vary depending on
which database type you select.
The following table lists the options specific to JDE OneWorld (some might
not be available with every database).

Table 2-17 JD Edwards

Environment (Refer to the requirements of the application): Type the J.D.
Edwards application environment name.
System data source (Refer to the requirements of the application): Type the
name of the database where the tables F986101, F98611, and F00941 are
located. This option is available for DB2 and Microsoft SQL Server
databases.
System data source owner (Refer to the requirements of the application):
Type the owner ID for the system data source.
Object librarian data source (Refer to the requirements of the application):
Type the name of the database where the tables F9860 and F9861 are located.
This option is available for DB2 and Microsoft SQL Server databases.
Local data source (Refer to the requirements of the application): Type the
name of the database where the table F0005 is located. This option is
available for DB2 and Microsoft SQL Server databases.
Data dictionary data source (Refer to the requirements of the application):
Type the name of the database where the table F9203 is located. This option
is available for DB2 and Microsoft SQL Server databases.

The JDE OneWorld datastore type works with the following database types.
Refer to the specific option tables for each database.
•"DB2" on page 68
•"Microsoft SQL Server" on page 72
•"ODBC" on page 75
•"Oracle" on page 84
For more information, see "Datastores" on page 8 of the Data Integrator
Supplement for J.D. Edwards.
JDE World
Datastore configuration options for this datastore type vary depending on
which database type you select; however, currently only the ODBC database
type is available.
For the options specific to JD Edwards, see “JD Edwards” on page 90.
For the options specific to ODBC, see “ODBC” on page 75.
For more information, see “Datastores” on page 8 of the Data Integrator
Supplement for J.D. Edwards.
Oracle Applications
The following table lists the options specific to Oracle Applications.

Table 2-18 Oracle Applications

Process flexfield names (Yes, No): Determines whether Data Integrator should
use descriptive names for flexfield columns based on the flexfield
information. If set to No, Data Integrator uses database column names for
all columns, including those that belong to flexfields.
Schema owner (Refer to the requirements for your application): Type the
schema owner name for foundation tables (the name is usually apps).

For the remaining options, see "Oracle" on page 84.
For more information, see "Datastores" on page 7 of the Data Integrator
Supplement for Oracle Applications.

PeopleSoft
Datastore configuration options for this datastore type vary depending on
which database type you select.
For the options specific to Microsoft SQL Server, see "Microsoft SQL Server"
on page 72.
For the options specific to Oracle, see "Oracle" on page 84.
For more information, see "Datastore" on page 35 of the Data Integrator
Supplement for PeopleSoft.

R/3
The following table lists the options for R/3.
Table 2-19 R/3

Main window
R/3 application server (Computer name, fully qualified domain name, or IP
address): Name of the remote SAP R/3 application computer (host) to which
Data Integrator connects (if using as an R/3 datastore).
User name (Alphanumeric characters and underscores): Enter the name of the
account through which Data Integrator accesses the SAP R/3 application
server.
Password (Alphanumeric characters and underscores, or blank): Enter the
user's password.

Locale
R/3 language (E - English, G - German, F - French, J - Japanese): Select the
login language from the drop-down list.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.

Client number: The R/3 client number. Defaults to 800.
System number (00-99): The two-digit R/3 system number. Defaults to 00.
ABAP execution option (Generate and Execute, Execute Preloaded): Select the
job execution strategy:
•Generate and Execute: ABAP resides on the Data Integrator server and is
submitted to R/3 using the SAP R/3 RFC_ABAP_INSTALL_AND_RUN function.
•Execute Preloaded: ABAP resides on the SAP R/3 application server and is
submitted to R/3 using Data Integrator RFC function modules.
If the Data Integrator job changes between scheduled executions, choose
Generate and Execute. If the job does not change, choose Execute Preloaded.
Your choice affects the required authorizations. See "Authorizations for
Data Integrator" on page 40 of the Data Integrator Supplement for SAP.
Execute in background (batch) (Yes, No): Specify that the generated ABAP
programs created by R/3 data flows defined with this datastore will execute
in batch mode on the SAP R/3 application server. Batch mode operation is
slower than the normal console mode; however, choose batch mode if the
application is too long to run during the console mode time window. Defaults
to No.
Target host (Computer name, fully qualified domain name, or IP address):
Specify the target computer (host) if you choose to execute ABAP programs in
background.
Job class (A, B, C): Specify the job class if you choose to execute ABAP
programs in background.
Data transfer method (Direct download, Shared directory, FTP, Custom
transfer): Define how to get data from the SAP R/3 server to the Data
Integrator server:
•Direct download: Use the SAP R/3 WS_DOWNLOAD function
•Shared directory: Use NFS (shares one network drive or directory)
•FTP: Use FTP
•Custom transfer: Use a third-party program
These methods are described in Chapter 8, "Executing Batch Jobs that Contain
R/3 Data Flows," in the Data Integrator Supplement for SAP. When you select
a data transfer method, the appropriate options for that method appear below
the option.
Working directory on SAP server (Directory path or click Browse): Indicate
where the ABAP intermediate file, R/3 source, and R/3 target are written.
All the files used by the R/3 data flow should be placed in this directory.
This directory also stores the transport file used by the FTP, shared-
directory, and custom transfer data transfer methods.
Data Integrator path to the shared directory (Directory path or click
Browse): If you selected the Shared directory data transfer method, indicate
the path from the Data Integrator server to the SAP R/3 server's working
directory.
Local directory (Directory path or click Browse): If you selected the Direct
download or FTP data transfer method, select a client-side directory to
which data from the R/3 server downloads.
Generated ABAP directory (Directory path or click Browse): Indicate the
directory into which generated ABAP files are written. Can be the same
directory as the local/direct download directory.
R/3 security profile (Refer to the requirements of the application): Specify
the security profile you want to use in the generated ABAP program. The user
of the Data Integrator R/3 datastore must have the required profile.
Number of connection retries (Positive integer): The number of times Data
Integrator tries to establish a connection with the R/3 application server.
Defaults to 3.
Interval between retries (sec) (Positive integer): The time delay in seconds
between connection retries. Defaults to 10.

Custom transfer (These options are visible if you selected the Custom
transfer data transfer method.)
Custom transfer local directory (Directory path): The client-side directory
to which data from the R/3 server downloads.
Custom transfer program (Refer to the requirements of the application): The
name of the third-party file transfer program you want to use to transfer
files from the SAP working directory to the local directory.
Custom transfer user name (Refer to the requirements of the application):
(optional) Login ID for the SAP R/3 server to which the custom transfer
program connects.
Custom transfer password (Refer to the requirements of the application):
(optional) Password for the SAP R/3 server to which the custom transfer
program connects. Passwords entered into this option are encrypted by Data
Integrator.
Custom transfer arguments (Refer to the requirements of the application):
(optional) Specify arguments for your custom transfer program. Arguments can
add security or compression mechanisms to your program or include Data
Integrator system variables. See "Data Integrator system variables for
transferring a file from SAP R/3" on page 172 for more information.

FTP (These options are visible if you selected the FTP data transfer
method.)
FTP relative path to the SAP working directory (Directory path or click
Browse): Indicate the path from the FTP root directory to the SAP R/3
server's working directory. When you select FTP, this directory is required.
FTP host name (Computer (host) name, fully qualified domain name, or IP
address): Must be defined to use FTP.
FTP user name (Alphanumeric characters and underscores): Must be defined to
use FTP.
FTP password (Alphanumeric characters and underscores, or blank): Enter the
FTP password.
For more information, see “Datastore” on page 204 of the Data Integrator Supplement for SAP.
SAP BW Source
The SAP BW Source datastore type has the same options as the R/3
datastore type. See "R/3" on page 92.
For more information, see “Datastore” on page 204 of the Data Integrator Supplement for SAP.
SAP BW Target
The following table lists the options for SAP BW Target.

Table 2-20 SAP BW Target

Main window
R/3 application server (Computer name, fully qualified domain name, or IP
address): Type the name of the remote SAP R/3 application computer (host) to
which Data Integrator connects (if using as an R/3 datastore).
User name (Alphanumeric characters and underscores): Enter the user name of
the account through which Data Integrator accesses the database.
Password (Alphanumeric characters and underscores, or blank): Enter the
user's password.

Locale
R/3 language (E - English, G - German, F - French, J - Japanese): Select the
login language from the drop-down list.
Code page: See Chapter 9: Locales and Multi-Byte Functionality.

Client number: The R/3 client number. Defaults to 800.
System number (00-99): The two-digit R/3 system number. Defaults to 00.
Routing string (Refer to the requirements of the application): The SAP
routing string.

Siebel
The Siebel datastore type works with the following database types:
•"DB2" on page 68
•"Microsoft SQL Server" on page 72
•"Oracle" on page 84
For more information, see "Datastores" on page 9 of the Data Integrator
Supplement for Siebel.
Document
Class
Reusable
Access
In the object library, click the Datastores tab.
Description
Available in some adapter datastores, documents describe a data schema.
Documents can support complicated nested schemas. You can use
documents as sources or targets.
See your adapter’s documentation for more specific information about the
options available for documents.
DTD
Class
Reusable
Access
In the object library, click the Formats tab, then open the DTD category.
Description
A DTD (document type definition) describes the data schema of an XML
message or file.
Note: XML Schemas can be used for the same purpose. See “XML Schema”
on page 192.
Data flows can read and write data to messages or files based on a specified
DTD format. You can use the same DTD to describe multiple XML sources or
targets.
To use DTDs, import metadata into Data Integrator. You can import a DTD
directly, or you can import an XML document that contains or references a
DTD. During import, Data Integrator converts the structure defined in the DTD
into the Data Integrator nested-relational data model (NRDM). See “Rules for
importing DTDs” on page 104.
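For example, a minimal DTD that could be imported might look like the
following sketch (the element and attribute names are illustrative). On
import, Order becomes the root schema, the repeating LineItem element
becomes a nested schema under it, and OrderDate becomes a column whose XML
Type attribute is "attribute" rather than "element":
<!ELEMENT Order (LineItem+)>
<!ATTLIST Order OrderDate CDATA #REQUIRED>
<!ELEMENT LineItem (SKU, Quantity)>
<!ELEMENT SKU (#PCDATA)>
<!ELEMENT Quantity (#PCDATA)>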
Editor
Open the DTD editor by double-clicking a DTD name in the object library.
For import procedures, see “Using Document Type Definitions (DTDs)” on
page 221 of the Data Integrator Designer Guide.
Properties
DTDs have the following properties.

Name: The name of the format. This name appears in the object library under
the Formats tab and is used for sources and targets (XML files or messages)
that reference this format in data flows.
Description: Text that you enter to describe and document the DTD.
Imported from: The full path to the format. For example, C:\data\test.dtd
DTD file: (Read-only) If the check box is selected, the DTD format was
originally imported from a DTD file. Otherwise, it was imported from an XML
file with an associated DTD.
Root element name: The name of the primary node of the XML that the DTD is
defining. Data Integrator only imports elements of the format that belong to
this node or any sub nodes.
Attributes
Data Integrator supports the following column attributes for DTDs.

Table 2-21 Column attributes supported for DTDs

Enumeration: Contains a list of all possible values separated by vertical
bars. For example: "Red | White | Blue Green | Magenta". A string display is
cut off at 256 characters.
Fixed Value: The only value the column can have.
Native Type: String. The original data type of the element or attribute in
the DTD.
Required: Indicates whether this column always has to be mapped (YES/NO).
If a column is optional (required = no), then validation will allow mapping
expressions to be missing for these columns, and at runtime the engine will
substitute NULLs for the missing values.
XML Type: Allows you to track whether the column was an element or attribute
in the original DTD.