BusinessObjects Data Integrator 11.0.2.5 Release Notes (October 2005)

Data Integrator Release Notes
Data Integrator 11.0.2.5
Windows and UNIX
Patents
Business Objects owns the following U.S. patents, which may cover products that are offered and sold by Business Objects: 5,555,403, 6,247,008 B1, 6,578,027 B2, 6,490,593 and 6,289,352.
Trademarks
Copyright
Copyright © 2005 Business Objects. All rights reserved.
October 5, 2005

Introduction

BusinessObjects Data Integrator version 11.0.2.5 is a maintenance release of Data Integrator version 11.0. This document contains important information about this product release including installation notes, details regarding known issues with this release, and important information for existing customers. Please read the entire document before installing your Business Objects software.

For the latest documentation

Be sure to visit the Business Objects documentation Web site at http://support.businessobjects.com/documentation/ for additional notes and information that could become available following the release of this document.
For information on Data Integrator documentation locations and availability in your language, see the BusinessObjects Data Integrator XI Product Document Map in the Doc\Books directory on your CD or within the installation of Data Integrator on your computer.
If a guide you want to view was not translated to your language at release time, it could be available online; you can download it from http://support.businessobjects.com/documentation/. If the document is not yet available in your language, you can access the English language version online or from the Doc\Books\EN directory on your CD or within the installation of Data Integrator on your computer.
To review Data Integrator documentation for previous releases (including Release Summaries and Release Notes), visit the Business Objects Customer Support page at http://www.techsupport.businessobjects.com/ (requires registration), select the version from the top menus, and click Documentation. Or, consult the Data Integrator Technical Manuals document included with that release of the product.

In this document

These release notes contain the following sections:
Compatibility and limitations update
Software requirements for UNIX
Resolved issues
Known issues
Migration considerations
Documentation corrections
Business Objects information resources
Copyright information

Compatibility and limitations update

This section describes changes in compatibility between Data Integrator and other applications. It also summarizes limitations associated with this version of Data Integrator.

Compatibility update

For complete compatibility and availability details, see the Product Availability Report (PAR) on the Business Objects Customer Support web site (requires registration):
http://www.techsupport.businessobjects.com/par/Data_Integration_XI/DI_XI_Include.htm

Data Integrator version 11.0.2.5 supports

HP-UX Itanium 64-bit
Running on HP-UX 11i v2 for Integrity servers with the following
(required) operating system patches:
PHCO_30531 or above (libc cumulative patch)
PHCO_30543 or above (pthread library cumulative patch)
The following configurations are supported in this release:
Oracle (repository, source, target)
Flat files
XML
SAP R/3
Web services
Real-time
Note: Additional RDBMS and technology interfaces are not
supported in this release.
Designer on a Japanese operating system will work with a Job Server on
a Japanese Windows computer.
When using the Designer on a Japanese operating system and working with either an English Job Server on UNIX or a Job Server on Windows on a non-Japanese operating system:
Messages from the back end display in English
Business Objects recommends using a Japanese Administrator on
Windows so that Metadata Reports and Impact Analysis correctly display object descriptions in Japanese. If you use a Japanese Administrator on UNIX, “junk” characters display if any of the object descriptions have Japanese characters.
Solaris 10
NCR Teradata V2R6 with TTU 8.0 and NCR Teradata ODBC 3.0.4
The following TTU patches are required:
On Windows platforms:
cliv2.04.08.00.01.exe
tbld5000.05.00.00.01.exe
psel5000.05.00.00.01.exe
pdtc5000.05.00.00.01.exe
npaxsmod.01.03.01.00.exe
npaxsmod.01.03.00.02.exe
npaxsmod.01.03.00.01.exe
On all UNIX platforms:
cliv2.04.08.00.01.exe
tbld5000.05.00.00.01.exe
The following Teradata ODBC patch version is required on AIX:
03.04.00.02
NCR Teradata V2R5 and V2R6 continue to be supported. The versions of TTU client now supported include:
Teradata server version    Client version
V2R6                       TTU 8.0
V2R5.1                     TTU 7.1
V2R5                       TTU 7.0
With Teradata V2R6, the named pipes mechanism for Teradata bulk loader is now supported.
For more information on using named pipes for Teradata, please see
“Documentation corrections” on page 38.
Note that Teradata does not recommend using Warehouse Builder in TTU 7.0 and 7.1, and there are some issues with Data Integrator on the AIX platform with multiple pipes.
IBM AIX 5.3
Red Hat Linux AS 3.0
Sybase IQ version 12.5 with Sybase IQ ODBC 7.00.04 and Sybase IQ version 12.6 with Sybase IQ ODBC 9.00.1. Note that Sybase IQ version 12.6 requires ESD 4.
With Sybase IQ 12.6, the named pipes mechanism for the Sybase IQ bulk loader is now supported.
Attunity for mainframe sources using Attunity Connect version 4.6.1. For
updates on support for later versions of Attunity, please check the Product Availability Report (PAR) on the Business Objects Customer Support web site (requires registration):
http://www.techsupport.businessobjects.com/par/Data_Integration_XI/DI_XI_Include.htm
DB2 UDB for Windows and UNIX version 8.2
DB2 UDB for iSeries version 5.2
ODBC to generic database support has been validated with the following
database servers and ODBC drivers. For complete compatibility details, see the Product Availability Report (PAR) on the Business Objects Customer Support web site on the Supported Platforms page (http://www.techsupport.businessobjects.com/par/Data_Integration_XI/DI_XI_Include.htm).
Microsoft SQL Server 2000 via DataDirect Connect for ODBC 5.0 or
5.1. Note: Version 5.1 is required on Linux.
MySQL version 4.1 via ODBC driver version 3.51.10
Red Brick version 5.6.2 via ODBC driver version IBM 5.62.00.10
SQLAnywhere version 9.0.1 via ODBC driver version Adaptive
Server Anywhere 9.0
Log-based changed-data capture (CDC) works with the following
database versions
Oracle version 9.2 and above compatible versions
Microsoft SQL Server 2000
IBM DB2 UDB for Windows version 8.2 using DB2 Information
Integrator for Replication Edition version 8.2 (DB2 II Replication) and IBM WebSphere Message Queue version 5.3.
IBM DB2 UDB for z/OS using DB2 Information Integrator Event
Publisher for DB2 UDB for z/OS and IBM WebSphere Message Queue version 5.3.1
IBM IMS/DB using DB2 Information Integrator Classic Event Publisher
for IMS and IBM WebSphere Message Queue version 5.3.1
IBM VSAM under CICS using DB2 Information Integrator Classic
Event Publisher for VSAM and IBM WebSphere Message Queue version 5.3.1
Attunity for mainframe sources using Attunity Connect version 4.6.1
BusinessObjects Mainframe Bulk Interface using IBM WebSphere
Information Integrator Classic Federation for z/OS version 8.2.
BusinessObjects Mainframe Live Interface using the following:
WebSphere Information Integrator Classic Event Publisher version
8.2.
WebSphere MQ for z/OS version 5.3.1
Impact analysis will interoperate with the following:
Crystal Reports version 10 and BusinessObjects XI
BusinessObjects universes 6.1b, 6.5, and XI
Web Intelligence documents versions 6.1b, 6.5, and XI
Business Views XI
Universal Metadata Bridge version 11.0.0 is bundled with Data Integrator
11.0.2. This version of the bridge will interoperate with BusinessObjects version XI. To interoperate with Desktop Intelligence versions 6.1b or 6.5, use the Universal Metadata Bridge version 6.5, which you can download from the Business Objects Electronic Software Download (ESD) Web site.
Data Mart Accelerator for Crystal Reports will interoperate with
BusinessObjects version XI.
Data Integrator Firstlogic data cleansing on Windows and Solaris using:
International ACE version 7.60c
DataRight IQ version 7.60c
For the latest on support for other platforms, see the Product Availability Report (PAR) on the Business Objects Customer Support web site Supported Platforms page (requires registration) (http://www.techsupport.businessobjects.com/par/Data_Integration_XI/DI_XI_Include.htm).
Data Integrator uses Apache Axis version 1.1 for its Web Services
support. Axis version 1.1 provides support for WSDL version 1.1 and SOAP version 1.1 with the exception of the Axis servlet engine.
Data Integrator connectivity to Informix servers is now only supported via
the native Informix ODBC driver. The following Informix client SDK versions are required:
Windows: version 2.81.TC3 or higher compatible version
HP-UX for PA-RISC 32 bit: version 2.81.HC3X7 or higher compatible version
AIX or Solaris: version 2.81.UC3X7 or higher compatible version
Linux: version 2.90.UC1 or higher compatible version
Only version 11.0 of the following BusinessObjects Data Integrator
interfaces is compatible with Data Integrator version 11.0.2:
IBM Message Queue (MQ) Series
Java Message Service (JMS); interface 11.0 SP1 required.
Trillium

Data Integrator version 11.0.2.5 does not support

Solaris 7
HP-UX for PA-RISC 32 bit 11.0
AIX 4.3
Windows NT version 4.0
Oracle version 8.0.x client
DB2 UDB for NT and UNIX version 7.2
DB2 for iSeries version 5.1.1
Informix version 9.2 and 9.3
Sybase ASE version 12.0
SAP Business Warehouse (BW) version 1.2
PeopleSoft PeopleTools versions 6.0 and 7.5
Data Mart Accelerator for Crystal Reports interoperating with:
BusinessObjects version 6.5
Crystal Enterprise version 10
Firstlogic data cleansing using:
International ACE versions 7.25c or 7.50c
DataRight IQ versions 7.10c or 7.50c
Internet Explorer version 5.5
IBM MQSeries version 5.2.1
Informix repository and datastore connectivity via DataDirect
Technologies ODBC. The migration path is to use native Informix connectivity.
Data Integrator mainframe interface using DETAIL. The migration path is
through the IBM Connector and Attunity Connector mainframe interfaces. For migration information and guidance, contact your Business Objects sales person.
LiveLoad
The Debugger feature replaces Data Scan functionality
Attunity for mainframe sources using Attunity Connect version 4.4, 4.5, or
4.6
Context-sensitive Help

Limitations update

Please refer to http://support.businessobjects.com/documentation/ for the latest version of these release notes.
To access the Product Availability Report (PAR) (requires registration), see
http://www.techsupport.businessobjects.com/par/Data_Integration_XI/DI_XI_Include.htm.
These limitations apply to Data Integrator version 11.0.2.5:
The following product is not currently supported on the IBM AIX platform.
(Please check the Product Availability Report (PAR) on the Business Objects Customer Support Web site for updates on support availability.)
BusinessObjects Data Cleansing
The following products are not currently supported on the HP-UX for PA-
RISC platform. (Please check the Product Availability Report (PAR) on the Business Objects Customer Support web site for updates on support availability.)
BusinessObjects Data Cleansing
BusinessObjects Mainframe Bulk Interface
BusinessObjects Mainframe Live Interface
The following products are not currently supported on the Red Hat Linux
platform. (Please check the Product Availability Report (PAR) on the Business Objects Customer Support web site for updates on support availability.)
BusinessObjects Application Interface for JD Edwards One World or
World
BusinessObjects Application Interface for Siebel
BusinessObjects Database Interface NCR Teradata
BusinessObjects Technology Interface IBM MQSeries Interface
BusinessObjects Data Cleansing
BusinessObjects Mainframe Bulk Interface
BusinessObjects Mainframe Live Interface
BusinessObjects Technology Interface Trillium
Files read by an adapter must be encoded in either UTF-8 or the default
encoding of the default locale of the JVM running the adapter.
All Data Integrator features are available when you use an Attunity
Connector datastore except:
Bulk loading
Imported functions (imports metadata for tables only)
Template tables (creating tables)
The datetime data type supports up to 2 sub-seconds only
Data Integrator cannot load timestamp data into a timestamp column in a table because Attunity truncates varchar data to 8 characters, which is not enough to correctly represent a timestamp value.
All Data Integrator features are available when you use an IBM
Connector datastore except:
Datetime and timestamp data types are recognized as varchar,
because IBM's DB2 II Classic Federation for z/OS product does not natively support these types
Bulk loading
Imported functions (imports metadata for tables only)
Importing primary and foreign keys
Template tables
View Data’s Column Profile feature
The Adapter SDK no longer supports Native SQL or Partial SQL.
Unsupported data type:
Unsupported data type is only implemented for SQL Server, Oracle, Teradata, ODBC, DB2, Informix, Sybase ASE, Sybase IQ, Oracle Applications, PeopleSoft, and Siebel.
Data Integrator can read, load, and invoke stored procedures involving unknown data types for SQL Server, Oracle, Teradata, ODBC, DB2, Informix, Sybase ASE, and Sybase IQ, assuming these database servers can convert from VARCHAR to the native (unknown) data type and from the native (unknown) data type to VARCHAR.
Data Integrator might have a problem loading VARCHAR to a physical CLOB column (for example, bulk loading or auto-correct load could fail).
Note: Use the VARCHAR column in the physical schema for loading.
Unsupported data type is not supported in the SQL transform.
PeopleSoft 8 support is implemented for Oracle only.
Data Integrator jobs that ran against previous versions of PeopleSoft are not guaranteed to work with PeopleSoft 8. You must update the jobs to reflect metadata or schema differences between PeopleSoft 8 and previous versions.
Stored procedure support is implemented for DB2, Oracle, MS SQL
Server, Sybase ASE, Sybase IQ, and ODBC only.
Teradata support is implemented for Windows, Solaris, and AIX. HP-UX for PA-RISC 32 bit is not supported due to limitations in Teradata's HP-UX for PA-RISC 32 bit product. Linux is not supported.
On Teradata, the named pipe implementation for Warehouse Builder is supported with Teradata Tools and Utilities version 8 or later compatible version. Teradata Tools and Utilities version 7.0 and 7.1 are not supported with named pipes.
Bulk loading data to DB2 databases running on AS/400 or MVS systems
is not supported.
Data Integrator Administrator can be used on Microsoft Internet Explorer
version 6.0 SP1 or 6.0 SP2 and Netscape Navigator version 7.0.2 (except Netscape is not compatible with the Japanese version). Earlier browser versions may not support all of the Administrator functionality.
Data Integrator’s View Data feature is not supported for SAP R/3 IDocs.
For SAP R/3 and PeopleSoft, the Table Profile tab and Column Profile tab options are not supported for hierarchies.
Support for the LOOKUP_EXT function is provided for sources as follows:

Supported: DB2, Flat file, Informix, JDE, Memory Datastore, ODBC, Oracle, Oracle Applications, PeopleSoft, Siebel, SQL Server, Sybase ASE, Sybase IQ, Teradata
Not supported: IDOC, SAP BW, SAP R/3, XML
DB2 Java Library limitations
The Web administrator will not work with a DB2 repository under any of the following conditions:
db2java library is incompatible with DB2 client. For example, DB2
Client is version 8.1 and db2java library is version 8.2, or
db2java library is not generated for JDBC 2 driver, or
the java class path variable is not properly set to the corresponding
java library, or
the DB2 JDBC shared library (e.g. libdb2jdbc.so on AIX) is not
compatible with the Java JDBC 2 driver
Under these conditions, you might see the following behavior:
The Administrator stops working or crashes after you configure the
DB2 repository. Find any error and warning messages related to the DB2 repository configuration in the log file.
When testing your DB2 connection from Administration > Repository
> Specify DB2 database type and clicking [Test], the following errors appear in the Administrator log:
BODI-3016409: fail to connect to repository.
BODI-3013014: didn't load DB2 database driver.
In this release, the db2java JDBC 2 driver versions for DB2 8.1 and DB2 8.2 are provided. On Windows, find the different versions of these libraries in $LINK_DIR/ext/lib:
db2java.zip (by default)    DB2 8.2 version
db2java_8.1.zip             DB2 8.1 version
By default, the db2java.zip of version 8.2 will be used. If you run with a DB2 8.1 Client, you must:
replace with version 8.1 of db2java.zip
make sure the compatible DB2 JDBC shared library is in the shared
library path
restart the web server service on Windows or restart the job service
on UNIX
If you run with other DB2 Client versions, you must obtain the corresponding java library with the JDBC 2 driver. Please refer to IBM DB2 documentation for how to obtain the correct java library.
Informix native-driver support on UNIX requires a fix in the Informix Client
SDK for IBM bug number 167964. Please contact IBM for a patch that includes this bug fix.
Note: IBM is targeting inclusion of this bug fix in Client SDK version
2.90.UC1.
Impact Analysis and Metadata Reports are not supported with an Informix
repository in this release.
Data Integrator supports files greater than 2 GB only in the following
cases:
Reading and loading large data files of type delimited and positional.
Generating large files for bulk loader staging files for subsequent
bulk loading by a native bulk loader utility (such as SQL Loader).
Previewing data files from the file format page in Designer when the
large files are on a UNIX Job Server.
Reading COBOL copybook data files.
All other files generated by Data Integrator, such as log files and configuration files, are limited to 2 GB.

Data Integrator on a Japanese Operating System

Please note the following limitations for Japanese operating systems.
Multibyte input
Data Integrator only supports multibyte input in the following areas (see the script sketch after this list):
Column mappings
WHERE clauses
Descriptions
Preload and Postload commands in table target
SQL text in SQL transform
Annotations
Job scripts
User script functions
Variable values
Attribute values
Check-in comments for the multiuser feature
Fields where the constant string is expected and quoted
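For example, the following minimal script sketch combines two supported areas, a multibyte variable value and a quoted constant string ($G_REGION and the Japanese value are hypothetical):

# Assign a Japanese constant to a variable; multibyte input is
# supported in variable values and quoted constant strings.
$G_REGION = '東京';
# print() writes the value to the trace log.
print('Loading region: ' || $G_REGION);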
Areas that do not support multibyte input in this release include:
User name and password
All object names (generically called metadata) including but not limited to:
Job Server names
Adapter instance names
Job, work flow, data flow, flat file, DTD, XML schema, COBOL
Copybook, and custom function names
Transforms, scripts, and control flow object names (such as
IFTHENELSE and WHILE)
Datastore, table, column, and all other objects imported from an
external source
Owner names and their aliases
Variable and parameter names
Labels for the multiuser feature
Group and user names for the multiuser feature
Datastore configuration and system configuration names
Extended attribute class names
Real-time/web service names and operation names
Repository names in Administrator and Metadata Reports
RFC client names
Schedule names
LINK_DIR environment variable
File name and file path
Note: In this release, Data Integrator does not necessarily prevent you from
entering multibyte data in a restricted area.
Delimiters and row markers
Data Integrator supports only ASCII (7-bit) delimiters and row markers in flat­file definitions in this release.
Fixed-width file formats
Data preview in the file format wizard is disabled when you choose a multibyte code page and a fixed-width file type because fixed-width file types are based on the number of bytes, not the number of characters.
Localization
The Designer displays menus, dialogs, and user messages in English or Japanese based on the current user locale in Windows.
Note: The Designer might not display all messages in the language you specify. Some messages are generated by the database client program (Oracle, DB2, and so on). A small number of Designer messages remain in English, regardless of the current user locale in Windows (Designer locale).
Logs
All logs generated by Data Integrator are in the UTF-8 code page. These logs do not provide a Byte Order Mark (BOM) at the beginning of the file. Some editors might have difficulty reading the logs because they cannot read files encoded in UTF-8 or because they need the BOM to determine byte order.
Software requirements for UNIX
Please observe the following requirements for UNIX systems.
Install JDK (1.3 or higher for AIX; 1.3.1 or higher for Solaris and HP-UX;
1.4.2 or higher for Linux) as described in the vendor’s documentation.
For all versions of AIX, the following file sets must be installed:
File set             Level      State        Description
xlC.adt.include      5.0.2.0    COMMITTED    C Set ++ Application Development Toolkit
xlC.aix43.rte        5.0.2.0    COMMITTED    C Set ++ Run-time for AIX 4.3
xlC.cpp              4.3.0.1    COMMITTED    C for AIX Preprocessor
xlC.msg.en_US.rte    5.0.2.0    COMMITTED    C Set ++ Run-time Messages, U.S. English
xlC.rte              5.0.2.0    COMMITTED    C Set ++ Run-time

To find these xlC file sets and their levels on your AIX system, use the command:
lslpp -l xlC*
For the following versions of AIX, the following file sets must also be installed. They support the Data Integrator Web Server.

File sets                  AIX 5.1     AIX 5.2    AIX 5.3
OpenGL.OpenGL_X.dev.vfb    5.1.0.10    5.2.0.0    N/A
X11.vfb                    5.1.0.15    5.2.0.0    5.3.0.0
If you are installing the Administrator on HP-UX for PA-RISC 32 bit, add
the following patches:
PHSS_26577
PHSS_26638
Refer to the $LINK_DIR/log/DITC.log file for browser-related errors.
For Red Hat Linux, install the following patches:
glibc-2.3.2-95.30
libstdc++-3.2.3-20
compat-libstdc++-7.3-2.96.122
For Solaris 9, install the appropriate patch:
108434-13 (for 32-bit SPARC)
108435-13 (for 64-bit SPARC)
For Solaris 8, the patch 112874-22 causes a core dump error coming
from the date function calls to the C lib. Apply the latest patch instead.
Kernel parameters and resource limits
The following requirements specific to HP-UX Itanium 64-bit supplement the generic UNIX requirements specified in “Additional system requirements for UNIX” on page 131 of the Data Integrator Getting Started Guide version 6.5.
Kernel parameter    Value          Comment
maxfiles            2048           The default value of 2048 is appropriate for use with Data Integrator.
max_thread_proc     256            Increase this value if you receive a "Cannot create thread" run-time error message from a Data Integrator job.
maxdsiz_64bit       0x400000000    Enter the swap space configured on your system.
maxssiz_64bit       0x10000000     The default value of 256 MB is appropriate for use with Data Integrator.
maxtsiz_64bit       0x40000000     The default value of 1 GB is appropriate for use with Data Integrator.
maxuprc             256            The default value of 256 is appropriate to start with. However, if many jobs are expected to run at the same time, this value should be increased.
nproc               4200           The default value of 4200 is appropriate for use with Data Integrator.
nfile               (16*(nproc+16+maxusers)/10)+32+2*(npty+nstrpty+nstrtel)    HP-UX calculates this value automatically.
Resolved issues
This section lists resolved issues in descending order by Business Objects tracking number. Please refer to http://support.businessobjects.com/documentation/ for the latest version of these release notes, which includes resolved issues specific to version 11.0.2.5.
32134 The lookup_ext functions used in the validation transform were discarding
the CACHE flag. This issue is fixed.
32085 Using an outer join with the table partitioning option generated incorrect
results. This issue has been resolved.
32069 Data Integrator writes statistics about work flows, data flows, and transforms into the AL_STATISTICS table. Now, Data Integrator writes those statistics in constant time (under one minute) regardless of the AL_STATISTICS table size or the complexity of your job. This enhancement significantly improves job execution performance.
Also, Data Integrator now provides a way to control the size of your AL_STATISTICS table by disabling transform statistics collection and collecting only work flow and data flow statistics. See the following table.
To use this option, in the Designer go to the Tools menu and select Options > Job Server > General. Then enter the appropriate value for each parameter:

Parameter    Value
Section      AL_Engine
Key          Disable_Transform_Statistics
Value        TRUE
32045 A Data Integrator real-time job with an empty GROUP BY clause would
hang. This fix addresses the problem.
31992 In the Administrator you can now successfully restart adapter operations
after stopping them.
31983 The function fiscal_day() returned the wrong value. This issue has been
fixed.
31975 When importing a table, a column with type char was imported as varchar, resulting in performance issues because of the increase in size. This issue has been fixed.
31939 The OCI end-of-communication error issue has been fixed.
31936 The Job Server would hang when the Designer computer got a different IP address between login sessions. This problem has been fixed.
31925 When exporting an empty datastore to a BusinessObjects Universe via the Universal Metadata Bridge, an unclear error message displayed. The message has been clarified.
31918 ABAP generation caused a core dump with a two-way join and a query with join tables not in the FROM list. This problem has been fixed.
31866 With this change, Data Integrator will always use the maximum length (16) for packed decimals in all non-SAP functions.
31865 When AL_JobService on UNIX crashed before starting any server components, the server manager showed incorrect errors. This problem has been fixed.
31855 Fixed truncation of a string resulting from concatenation of integers. This
happens when the concatenation is pushed down to the database.
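As an illustration, a mapping of the following shape (hypothetical column names) was affected when Data Integrator pushed the concatenation down to the database:

# Two integer columns concatenated with ||; Data Integrator
# implicitly converts the integers to strings, and the resulting
# string could be truncated in the generated SQL.
ORDERS.YEAR_NUM || ORDERS.SEQ_NUM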
31854 On the DB2 bulk loader options page, even when None is selected, some
bulk loader options could be set. This issue has been fixed.
31839 A real-time job died when issuing an XML request. This issue has been
fixed.
31713 Data type conversion only happens for source and target. A flag has been
added that forces conversion of intermediate data types. For information on how to use this flag, please contact Technical Support. Note: Performance will be impacted when this flag is set.
31704 Designer crashed after opening a specific data flow from the repository. This
problem has been resolved.
31670 An access violation occurred when using a SQL transform with variables and
more than 10 columns in the select. This problem has been resolved.
31617 In prior releases, when a job was aborted in the Administrator, an .idx file
was created with a size of 0, resulting in the inability to view the error log file in the Administrator. Beginning with this release, Data Integrator does not create the .idx file for an aborted job, and the error log is viewable.
31560 A job that ran successfully in prior versions of Data Integrator failed after
upgrading to Data Integrator XI with the error:
<R_Change_Planned_Independent_Requirements> is terminated
due to error <210101>>.
31553 When importing from Sybase, output parameters were imported as input
parameters. This issue has been fixed.
31518 Turning on multithreaded reader for a fixed-width flat file with a large amount of data resulted in corrupted data and an error. This problem has been addressed.
31496 If the operating system locale was different from that of the Job Server locale
and the datastore was set to default, then the datastore adopted the operating system locale. With this fix, the datastore will take the Job Server locale as the default.
31480 With this change, Data Integrator now supports New Zealand and Icelandic
regional settings.
31469 When the mail_to function follows an Oracle procedure call in a script, Data Integrator reported the error Stack overflow has occured. Please report this error to your software vendor. This problem has been addressed.
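The affected pattern looked roughly like the following script sketch (the datastore name, procedure, and address are hypothetical; mail_to takes recipients, subject, message, and the number of trace and error lines to attach):

# Call an Oracle procedure, then send a notification e-mail;
# this sequence previously raised the stack overflow error.
sql('ORA_DS', 'BEGIN update_stats; END;');
mail_to('admin@example.com', 'Job finished', 'Load completed', 0, 0);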
31410 Data Integrator displayed the DATE data type in the debugger based on the operating system locale (for example, the United States regional setting displayed 4/25/05 12:00:00 AM). However, if you wanted to modify the data, you had to use the YYYY.MM.DD format. This issue has been fixed.
31348 On HP-UX PA-RISC, the Data Integrator engine cannot work with both DB2 and Oracle in the following situations due to symbol resolution problems within their dynamic client libraries. This issue has been resolved.

Repository database client version on Job Server    Source/target database client version on Job Server
Oracle 10G                                          DB2 8.2
DB2 8.1                                             Oracle 9.2 or Oracle 10G
DB2 8.2                                             Oracle 9.2 or Oracle 10G
31272 An Oracle datastore was corrupted when calling an Oracle procedure in parallel work flows. This problem has been resolved.
31166 Any non-ASCII character that used the eighth bit (such as an umlaut) in pre-load/post-load in the loader caused the job execution to fail. This issue has been fixed.
30740 The Siebel metadata browser displayed multiple instances of a table. This problem has been fixed.
30651 A web-services-related memory leak and time-out issue has been resolved.
30617 On AIX, when the user specified in an R/3 datastore does not have authorization to log in to the SAP R/3 system, the al_engine process crashed instead of terminating gracefully. This problem has been fixed.
30468 With this change, Data Integrator now supports multiple record types in one file (support is now provided for multiple 01-level record definitions). You must upgrade your repository to version 11.0.2 or later to enable this support. For more information on using this new functionality, see “Documentation corrections” on page 38.
30213 If a table in the FROM clause is not used anywhere in a query transform, Data Integrator did not push down the SQL to the database. This problem is fixed.
30049 With this release, Data Integrator will automatically detect the maximum number of microseconds accepted by the database when accessing through Data Integrator's ODBC datastore. When importing a table into an ODBC datastore, the maximum subsecond information was imported into Data Integrator. Now, when it comes time to load, Data Integrator generates the correct number of subseconds. If you have an existing job that failed because of this bug, please reimport your loader tables and run the job again.
29738 When executing an R/3 data flow, the following warning message displayed:
ABAP code from a previous version of Data Integrator <6.5.1.0> is used for program file ...
This issue has been resolved.
28602 Passing global variables over 50 characters long via the command line did
not work. This issue has been fixed.
26849 For XML files and messages, Data Integrator did not support BOM
characters. This issue has been fixed.
23836 When using Table Comparison, the output schema now refreshes when
selecting a new table.

Known issues

This section lists known issues in descending order by Business Objects tracking number. Please refer to http://support.businessobjects.com/documentation/ for the latest version of these release notes.
32202 The supported Firstlogic input fields do not include any fields introduced in
Firstlogic version 7.50 or 7.60.
31908 A job fails when Oracle loader has a column with data type of long and the
Use overflow file flag is set to true.
31165 Data Integrator does not support any non-ASCII characters that use the
eighth bit (such as an umlaut) in metadata (such as the owner name).
31068 Data flows with more than one independent flow will report original sources
for every independent flow, not just the flow with target table. The Data Integrator language records usage information at the data flow level, not at individual flows within that data flow. As a result, more than one original source will be listed.
30925 When using Attunity CDC, import from browser and import by name behave differently. For example, when you type either CDC_DSNX4612:PUBLIC or CDC_DSNX4612 as the owner name, the same table will be imported into the repository.
Tip: When you import a table by name, always use the full owner name, such as CDC_DSNX4612:PUBLIC, and also define the alias for this full name.
30506 Due to an issue in Attunity software, when using Attunity Connect to load into
a DB2 table on MVS that contains varchar columns larger than 255, the data inserted into those columns is always an empty string regardless of what you actually insert.
30059 A datastore in a nonsecure central repository can be deleted even if it has a
table that is checked out by other users.
29881 Replicating a job and then comparing the original with the copy gives an error and crashes the Designer.
Tip: Save all objects before performing the comparison.
29826 When adding a very large project to the central repository, a large memory leak occurs.
Tip: After adding large projects to the central repository, restart the Designer.
28654 The Access Server crashes on Windows 2003 machines when hyperthreading is activated.
28630 A real-time engine memory leak occurs when “enable validation” is activated in the XML reader.
28611 When using Data Mart Accelerator for Crystal Reports to run complex reports or those that contain a very large amount of data, the batch job created by the Data Mart Accelerator might run out of TCP ports and the adapter will hang and ultimately be terminated by Data Integrator (after the default allowance for 10 minutes of inactivity).
This problem occurs when the Crystal Report Application Server (RAS) runs out of available ports. On a typical Windows installation, TCP ports above 5000 might not be available.
For information on how to change the number of available ports, see Microsoft Knowledge Base Article 196271 “Unable to Connect from TCP Ports Above 5000.” To further reduce the load on the RAS server, you can also rerun the Data Mart Accelerator and set the maximum number of instances to a lower number.
28610 When using the Data Mart Accelerator for Crystal Reports, numbers that appear in the Link field are rounded to six digits.
28496 Due to an Oracle issue, when connecting to an Oracle 10g database from a different client system (for example, the client is on Windows and the server is on UNIX) and a table is created with a char column greater than 250 characters [for example, char(255)] followed by a number column, the number values returned from the query are incorrect or the query will hang due to Oracle tracking number 3417593. Contact Oracle support for a patch compatible with Oracle 10.1.0.2.0 or higher, which addresses this issue.
Tip: Create a number column before the char column when the char column size is greater than 250, or use varchar2 instead of char for larger columns.
28399 On UNIX, when running a Data Integrator job with an Attunity datastore, the job fails with the following error:
[D000] Cannot open file /usr1/attun/navroot/def/sys System error 13: The file access permissions do not allow the specified action.; (OPEN)
This error occurs because of insufficient file permissions to some of the files under the Attunity installation directory. To avoid this error, change file permissions for all files under the Attunity directory to 777. Run the following command from the Attunity installation directory:
chmod a+rw *
28356 When you import a table by name and this table is the target table of a table comparison transform, the table comparison does not refresh the database changes.
Tip: Open the table comparison transform to refresh the output schema.
28161 The SAP function RFC_READ_TABLE does not work correctly with SAP 4.70 when reading source data if one of the table fields is a floating-point field. This issue will be addressed in a later release.
27974 On Solaris, Informix’s client driver does not support the ONSOCTCP communication protocol. Until Informix provides a fix, Business Objects recommends you use the ONTLITCP communication protocol.
By default, Data Integrator uses the ONSOCTCP communication protocol. Therefore, on Solaris, you must change the communication protocol configuration found in the Informix SQLHOSTS and the ODBC.ini files to use either ONTLITCP or SETLITCP communication protocols.
27726 A Sybase job server name entered into the Designer must be in the same case (upper/lower) as the associated value in the SYBASE_Home\interfaces file. If the case does not match, an error such as the following will occur because the Job Server cannot communicate effectively with the repository:
(6.1) 03-02-04 17:18:42 (E) (12792:0001) CON-120505: Sybase connection error: <Sybase CS Library message number <101188867> Severity <5>: Message Text is: ct_connect(): directory service layer: internal directory control layer error: Requested server name not found.>.
27390 When using the Get by label option of the multi-user development interface with a non-existent label, Data Integrator returns the latest version of the objects.
27342 Data Integrator takes too long to return data from an SAP R/3 table to the Profile tab of the View Data feature.
27048 Column Profiling is not supported in the View Data feature on SAP R/3 cluster and pool tables.
27045 Multibyte strings are not handled correctly when passed as a function parameter.
Tip: Put multibyte string parameters in quotes.
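A minimal sketch of the tip (print() and the Japanese value are illustrative):

# Pass the multibyte value as a quoted constant string rather
# than as an unquoted token.
print('顧客データ');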
26761 Data Integrator handles trailing spaces in varchar fields inconsistently. Sometimes, Data Integrator preserves the trailing spaces (for example, when varchar fields are mapped directly to other varchar fields and no other functions are present in the query). Other times, Data Integrator strips spaces from strings.
Tip: Use the function rpad_ext to ensure fields always have the correct
number of trailing spaces if the length of the field must be consistent.
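A minimal sketch of the tip (hypothetical column name; a fixed length of 20 is assumed):

# Pad CUSTOMER.NAME with trailing spaces to a consistent
# length of 20 characters.
rpad_ext(CUSTOMER.NAME, 20, ' ')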
26355 When comparison tables are on Microsoft SQL Server, the Table Comparison transform’s Sorted input option generates incorrect results. The Data Integrator engine runs in binary collation while the Microsoft SQL Server database runs in a Latin1 collation sequence. As a result, Microsoft SQL Server generates the data in a different sort order. This issue will be addressed in a later release.
Tip: Use the Row-by-row select or Cached comparison table option in the Table Comparison transform.
26164 When checking in or out operations to the central repository, there is a memory leak.
Tip: Restart the Designer.
26005 You cannot add a comment about a column mapping clause in a Query transform. For example, the following syntax is not supported on the Mapping property sheet:
table.column ## comment
The job will not run and you cannot successfully export it.
Tip: Use the object description or workspace annotation feature instead.
25815 In the Designer, when inserting, updating, or deleting metadata for a table column, the information value is not saved.
Tip: In the Class Attributes tab of a table’s Properties dialog, add new class attributes. Close all windows and restart the Designer. In the Attributes tab of a table’s Properties dialog, specify the values for the new class attributes, then click OK.
25713 You may get the following error during execution of a Data Integrator job that contains an Oracle stored procedure call <Schema2>.<Func> and the user name of the datastore that contains the function is <Schema1>:
DBS-070301: Oracle <ora_connection> error message for operation <OCIStmtExecute>: <ORA-06550: PLS-00225: subprogram or cursor '<Schema2>' reference is out of scope ORA-06550: line 1, column 7: PL/SQL: Statement ignored>.
Tip: Check the schema <Schema1> to which the Oracle datastore belongs for any procedure with the name <Schema2>. If the procedure exists, rename it.
25578 When processing large MQ messages, you must use IBM's AMI repository. If you do not use the AMI repository, the PUT operation could fail.
24746 Data Integrator cannot import metadata through an adapter if it includes a DTD or XML Schema that begins with the header <?xml encoding="UTF-8"?>.
Tip: Remove the header line.
24379 Due to an AIX software issue, jobs running on AIX that call the system_user_name() function from a transform consume too much memory.
Tip: Initiate a global variable to read the system_user_name() and use the global variable in a transform, or install a version of libc containing APAR IY45175 from IBM.
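A minimal sketch of the first workaround ($G_USER is a hypothetical variable name):

# In a script, read system_user_name() once into a global
# variable, then reference $G_USER inside the transform instead
# of calling the function directly.
$G_USER = system_user_name();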
24223 When bulk loading to Oracle using the API method, Data Integrator cannot load a case-sensitive partitioned target table name due to a problem in Oracle. The following error message displays:
ORA - 02149. The specified partition does not exist.
Tip: Use the File method and select the Direct path option to handle case-sensitive partition names.
24195 For the DB2 database, when the Data Integrator engine pushes down a SELECT statement and the SELECT statement has an upper/lower function for a graphic/vargraphic data type, then the describe column (ODBC library call) returns a char data type instead of a graphic data type. This is an ODBC bug.
Tip: Set the datastore’s code page to utf8 (or a similar superset code page).
24013 When you specify length (lpad(' ',41)) for the varchar data type, it returns different lengths when it is pushed down to an Oracle database if the bind column is NCHAR or NVARCHAR2. The issue is with the Oracle OCI client.
Tip: Do not use Data Integrator’s Auto-correct load option (on the target table editor) for an Oracle table when you want to specify length (lpad(' ',41)) for the varchar data type.
23941 For Oracle, the load trigger for an NCHAR or NVARCHAR data type fails if the code page in the database does not support these characters in the pushdown SQL. Consequently, Oracle fails to execute the SQL properly. For example, the following situations are known to cause problems:
The database code page is english, we8ISO88591
The code page of the NCHAR column in the database is ucs2
The submitted SQL has Japanese characters
Tip: Do not use Data Integrator’s Auto-correct load option (on the target table editor) when processing NCHAR or NVARCHAR data types.
23938 If the code page used by the Oracle client is set to certain values (it works for utf8 and fails for we8winms1252), and the bind column size is greater than 1312, and Data Integrator uses the same Oracle connection more than twice during internal processing, then Oracle will throw the following error:
ORA-01460: unimplemented or unreasonable conversion requested
23852 For MS SQL Server, if a concatenation operator is used in a stored procedure and if the code page for the database is ucs2, then the ODBC library fails to concatenate the data properly. The issue is with the Microsoft ODBC driver.
Tip: Do not allow Data Integrator to push down a concatenation expression to the database. One way to do this is to construct the job so that data loads into a flat file and the concatenation operation occurs in another query. Thus, Data Integrator cannot push down the SELECT statement because the target does not have the operation it needs.
23811 For Oracle, nchar or nvarchar data types do not work for database procedures and database functions when one of the parameters has an nchar or nvarchar column and the datastore code page set in Data Integrator does not understand nchar or nvarchar encoding. In these cases, the Data Integrator engine handles nchar or nvarchar as if they were varchar data types.
23687 After you restart an adapter instance, the service that uses it fails to process the next message it receives.
Tip: When you restart an adapter instance, also restart its associated services.
23005 When reading XML schema data, you may encounter the following error message:
Type of attribute 'lang' must be validly derived from type of attribute in base...
Tip: Change your schema to avoid use=”prohibited” for any attribute or element whose type is derived from xsd:language.
22821 You might see an error message when you try to import an XML file through the Metadata Exchange CWM interface when the XML file was generated by Data Integrator.
Tip: Delete the line in the file that refers to CWM_1.0.DTD before importing.
22690 If you have a data flow that includes mapping a varchar column to a datetime column in a query transform, the job will give an access violation error if any of the date values are invalid.
21659 The Adapter SDK has some syntax limitations for DTDs that can be imported for use as an adapter source or target:
Conditional syntax cannot be included in the DTD file. Sample syntax that fails:
<!-- Common ebXML header--> <!ENTITY % INCLUDE_EBHEADER "INCLUDE"> <![ %INCLUDE_EBHEADER; [<!ENTITY % eb:MessageHeader SYSTEM "ebHeader.dtd">%eb:MessageHeader;]]>
Tip: The following syntax is supported and produces the same result:
<!ENTITY % eb:MessageHeader SYSTEM "ebHeader.dtd"> %eb:MessageHeader;
XML version information cannot be included in the DTD file. Sample syntax that fails:
<?xml version="1.0" encoding="UTF-8"?>
Tip: Eliminate the line from the file.
Note: These limitations only apply to DTD files read by an adapter, not those read by the Designer. It is important to note that the only file the adapter reads is the one configured in the adapter operation. Adapters do not read embedded DTD files.
19893 The mail_to function does not work on Windows 2000 after downloading a security patch for Microsoft Outlook.
Tip: To send e-mail out from Data Integrator after installing a security patch, follow the instructions from the following web site:
http://support.microsoft.com/default.aspx?scid=kb;en-us;Q263297&id=263297&SD=MSKB#OB
The instructions on the web site show how to suppress prompting when sending e-mail from a particular computer. First, a security policy file is created on the Exchange Server. Second, a registry key is added to the client machine that tries to send e-mail programmatically. This client machine is the machine on which you are running the Data Integrator Job Server. When a client machine tries to send an e-mail, the Outlook client first checks the registry key. If this key is set, Outlook looks at the security policy file on the Exchange Server. In the security policy file, turn off prompting and the Data Integrator mail_to function will work.
16806 For the SAP R/3 environment settings, when Convert R/3 null to null is checked, all blank fields convert to null. If Convert R/3 null to null is clear, the null date converts to 01-01-1900.
Migration considerations
Data Integrator version 11.0 will only support automated upgrade from ActaWorks and Data Integrator versions 5.2.0 and above to version 11.0. For customers running versions prior to ActaWorks 5.2.0, the recommended migration path is to first upgrade to Data Integrator version 6.5.1.
If you are upgrading from an earlier version of Data Integrator to version
11.0.2.5 or later, you must upgrade your repository to take advantage of a change in the COBOL copybook file reader, which now supports multiple record types in one file (for multiple 01-level record definitions). For a description of this new feature, see “Documentation corrections” on page 38.
If you are upgrading from Data Integrator version 11.0.0 to Data Integrator version 11.0.1 or 11.0.2, you must upgrade your repository to use the following:
Sybase IQ as a source or target
New Data Integrator functions:
replace_substr_ext
long_to_varchar
varchar_to_long
load_to_xml
extract_from_xml
match_pattern
match_regex
literal
Generate delete row types in table comparison transform
COBOL data file reader
Impact Analysis in Metadata Reports for BusinessObjects XI
Impact Analysis in Metadata Reports for Business Views
Auditing (some additional functionality was introduced in version 11.0.1)
For a description of these new features, please refer to the Data Integrator Getting Started Guide.
The following sections describe changes in the behavior of Data Integrator from previous releases. In most cases, the new version avoids changes that would cause existing applications to change their results. However, under some circumstances a change has been deemed worthwhile or unavoidable.
This section includes:
Brief installation instructions
Changes to code page names
Crystal Enterprise adapters
Data cleansing
Browsers must support applets and have Java enabled
Execution of to_date and to_char functions
Changes to Designer licensing
License files and remote access software
Administrator repository login
Administrator users
Web Services support
Sybase ASE bulk loader library on UNIX
Informix native client support
Statistics repository tables
Teradata named pipe support

Brief installation instructions

Complete installation instructions are in the Data Integrator Getting Started Guide (for the stand-alone version, open DIGettingStartedGuide.pdf in the Doc\Books folder on the installation CD).
To install without a previous version of Data Integrator:
Follow the instructions in the Data Integrator Getting Started Guide to create a database location for a Data Integrator repository, run the installation program, and register the repository in the Data Integrator Administrator.
To install with an existing version of Data Integrator:
1. Using your existing Data Integrator or ActaWorks version, export the
existing repository to a file as a backup.
2. Create a new repository and import the ATL file created in step 1.
3. To use configuration settings from the previous version of Data Integrator or ActaWorks, create the new installation directory structure and copy the following files from your Data Integrator or ActaWorks installation into it. The default Windows installation structure is:
drive\Program Files\Business Objects\Data Integrator version
To use                                           Copy
Administrator settings                           \conf directory
Repository, Job Server, and Designer settings    \bin\dsconfig.txt
4. Uninstall your previous version of Data Integrator or ActaWorks.
Business Objects recommends uninstalling, particularly when you are upgrading from ActaWorks to Data Integrator.
5. Install Data Integrator in the default location given by the install program.
If you set up an installation path in step 3, use that path.
6. During the installation, upgrade the existing repository to 11.0.1.
7. If you used the configuration settings from the previous version, validate
that the existing Job Server and Access Server configuration appears when you reach the server configuration section of the installer.
8. After the installation is complete, open the Administrator at the browser location http://<hostname>:28080.
9. Re-register (add) the repository.

Changes to code page names

Starting with Version 11.0, Data Integrator displays some code pages with different names. For example, code page MS1252 displays as CP1252. Additionally, two code pages are no longer supported. These changes are visible only in the list locales screens (in installer, datastore and file-formats). The Data Integrator engine does not have any upgrade issues and understands the old code page name.
Old code page name    New name (Data Integrator version 11.0 and later)
iso885910             Not supported
MS949                 CP949
MS950                 CP950
MS1250                CP1250
MS1251                CP1251
MS1253                CP1253
MS1254                CP1254
MS1255                CP1255
MS1257                CP1257
MS874                 CP874
MS1258                CP1258
MS1361                Not supported
USASCII               US-ASCII

Crystal Enterprise adapters

After installing a new version of BusinessObjects, you might receive a class not found error when trying to start the Crystal Enterprise adapter. You must update the Crystal Enterprise adapter configuration by updating the BusinessObjects directory name in the classpath for both the Crystal Enterprise adapter and the DMAC command file.
To update classpaths
1. Locate the new jar files. The default locations are:
Crystal Enterprise 10: C:\Program Files\Common Files\Crystal Decisions\2.5\java\lib
BusinessObjects XI: C:\Program Files\Common Files\Business Objects\3.0\java\lib
2. For the Crystal Enterprise Adapter:
a. To ensure any new adapter instances have the correct classpath, modify LINK_DIR\adapters\install\CrystalAdapter.xml with the new jar file location. Modify the element <adapter3PartyJarFiles> by globally changing the path to the new jar files for Crystal Enterprise 10 (see the default location noted in the previous step) to that of the BusinessObjects XI path.
b. Save the file. Any future adapter instances created will now have the correct jar file path.
c. Change any configured adapter instances by deleting the configured adapter instance and creating a new one using the same adapter instance name, which keeps all adapter usage valid.
3. For the DMAC command file launched by the DMAC wizard:
a. Edit LINK_DIR\ext\dmac\wizard.com.
b. Change the entry set CE_JARS_DIR= to point to the new path. For example, if you installed BusinessObjects XI, the line should read:
set CE_JARS_DIR=C:\Program Files\Common Files\Business Objects\3.0\java\lib

Data cleansing

The new Data Integrator installation will overwrite all template job files shipped with Data Integrator with the upgraded job files compatible with Firstlogic 7.60. If the user had manually modified the template job files with Data Integrator version 6.5 and wants to preserve the changes, Business Objects recommends that a copy is made of the job file, or that the job file is renamed, prior to installing Data Integrator version 11.0.
After installing Data Integrator version 11.0 and the Firstlogic 7.6 suite, you should do the following:
From the Data Integrator CD, copy pwdiqjob.upd from the following directory:
Windows: \Utilities
UNIX: /unix/PLATFORM
to the dtr_iq directory. Click Yes to overwrite the existing file.
For all your job templates, run Firstlogic's edjob utility to upgrade your Firstlogic job files from the previous version to 7.60. Find the edjob utility under the installation of Firstlogic. Run the utility to upgrade all jobs of the same extension by specifying *.extension in the utility. Extension names are .diq, .iac, and .mpg. For instructions on how to use the edjob utility, refer to Firstlogic documentation.
Note: If the jobs have already been updated to 7.60c (using the .upd file
sent with the master install with edjob), you will receive the following error when trying to open a 7.60 job:
Error: The “Report Defaults” block, occurrence 1, has an
incorrect number of lines and the job cannot be loaded.
These jobs will need to be manually edited to change the version string in the General block from the previous version to 7.60c. Then edjob can be rerun with the correct .upd file.
For all job templates created under the address enhancement
application, manually edit the job templates using an editor such as Notepad, and comment out the “Create Output File” block by adding the '*' character at the beginning of the line. For example:
*BEGIN Create Output File

Browsers must support applets and have Java enabled

The Administrator and the Metadata Reporting tool require support for Java applets, which is provided by the Java Plug-in. If your computer does not have the Java plug-in, you must download it and make sure that your security settings enable Java to run in your browser. Business Objects recommends you use Java plug-in 1.3.1.
To find a plug-in, go to: http://java.sun.com/products/plugin/downloads/index.html.

Execution of to_date and to_char functions

Prior to release 6.0, Data Integrator did not always push the functions to_date and to_char to the source database for execution when it was possible based on the data flow logic. The fix for this improved optimization requires that the data format specified in the function match the format of the extracted data exactly. The result can be significant performance improvement in flows that use these functions.
It is possible that your existing function calls do not require a change because the format already matches the data being extracted. If the formats do not match, this change is required. If your function calls do not correspond to this new format requirement, you will see an error at run time.
Compare the examples below with your to_date or to_char function calls to determine if a change is needed.
For example, the WHERE clause of the query transform in the DF_DeltaSalesOrderStage of the Customer Order Rapid Mart (was eCache) includes the following function calls:
WHERE (to_date(to_char(SALES_ORDER_XACT.LOAD_DATE,
'YYYY.MM.DD HH24:MI:SS'), 'YYYY.MM.DD')
= to_date(to_char($LOAD_DATE, 'YYYY.MM.DD HH24:MI:SS'),
'YYYY.MM.DD'))
To support this change, the WHERE clause needs to be changed to:
WHERE (to_date(to_char(SALES_ORDER_XACT.LOAD_DATE,
'YYYY.MM.DD'), 'YYYY.MM.DD')
= to_date(to_char($LOAD_DATE, 'YYYY.MM.DD'),
'YYYY.MM.DD'))
Also, in the Sales Rapid Mart in the DF_SALESOrderHistory data flow, in the mapping column, VALID_FROM is set as:
to_date(to_char($G_LOAD_DATE, 'YYYY.MM.DD HH24:MI:SS'),
'YYYY.MM.DD')
To support this change, the mapping needs to be changed to:
to_date(to_char($G_LOAD_DATE, 'YYYY.MM.DD'),
'YYYY.MM.DD')
Similarly, the SELECT list of the ConvertTimes query transform in the DF_Capacity_WorkCenter data flow of the Profitability Rapid Mart, contains the following function call(s):
to_date((to_char($G_LOAD_DATE, 'YYYY.MM.DD') ||
'00:00:00'), 'YYYY.MM.DD HH.MI.SS')
The function calls need to be changed to:
to_date((to_char($G_LOAD_DATE, 'YYYY.MM.DD') ||
'<space>00:00:00'), 'YYYY.MM.DD HH24:MI:SS')
where <space> represents a single space character.

Changes to Designer licensing

Data Integrator Designer versions 6.5 and above do not support floating licenses. If you are currently using floating Designer licenses, obtain a new license key before upgrading. Send your request to licensing@businessobjects.com with Data Integration License Keys as the subject line. The new key will be added to your existing License Authorization Code.
For more information, visit the Customer Support Web site at http://www.techsupport.businessobjects.com, search for “License Keys,” and click the link for Article # 2674.

License files and remote access software

In release 6.0 and higher, Data Integrator includes a specific change to license support when activating Data Integrator components across a remote connection, such as PC Anywhere or Terminal Services Client. You must request new license files to use release 6.x or above with these tools. Specifically, you require:
New Designer licenses to initiate a Designer session across a terminal server connection (when using evaluation, emergency, or node-locked licenses)
New Job Server and Access Server licenses to open the Service Manager Utility across a terminal server connection (when using any type of license)
If you do not make the license change, you may see the error Terminal Server remote client not allowed. If you require terminal services, contact licensing@businessobjects.com.

Administrator repository login

In release 6.0 and higher, in the Data Integrator Administrator, all repository connectivity occurs using JDBC, so the previous connectivity information is either invalid or incomplete. When you use the Administrator, you must re-register all of your repositories.
Use the following connection information guidelines:
All types
Repository name: Any logical name.
Machine name: The name of the machine on which the database server is running.
Port: The port on which the database server is listening. The Administrator provides default values; if you do not know the port number, use the default.

Oracle
Service name/SID: The database instance name to connect to. Depending on whether you are using an 8.0.2 or 8i database, it is either the SID or the Service name.

DB2
Datasource: The datasource name as configured in the DB2 client configuration utility. This is the same as in previous Administrator or Designer connections.

Sybase ASE
Database name: Database instance name.

Microsoft SQL Server
Database name: Database instance name.

Informix
Server name: The name of the machine on which the server is running.
Database name: The name of the database instance running on the server.
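For example, a completed Oracle registration might look like this (all values here are hypothetical except the port, which is the common Oracle default):
Repository name: di_repo
Machine name: dbhost01
Port: 1521
Service name/SID: orcl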

Administrator users

Administrator user configuration information in ActaWorks version 5.x was saved in a primary repository. When you upgrade to Data Integrator version
6.x or above, this information is lost. You must reconfigure your Administrator
users manually.

Web Services support

When using Data Integrator with Web Services, the following limitations exist:
SAP R/3 IDoc message sources in real-time jobs are not supported
The Data Integrator Web Services server uses the first Job Server in the
list of those available for a batch job. The first Job Server might be a server group.
If you are using Web Services and upgrading to versions 6.5.0.1 or
above, Web Services server (call-in) functionality that you created with previous versions of Data Integrator is not supported. Perform the following changes to manually upgrade the Web Services server:
Regenerate a WSDL file
Prior to release 6.5.0.1, every batch job and real-time service configured in Data Integrator was published in the WSDL file. In release 6.5.0.1 and above, Data Integrator only publishes batch jobs and real-time services that you select. Clients that call Data Integrator using Web Services will fail until you authorize the jobs and services being invoked.
Change SOAP calls to match those in the 6.5.0.1 and above
versions. A session ID is not included in SOAP messages unless you enable
security for the Web Services server. Prior to release 6.5.0.1, the sessionID message part was published in every SOAP message.
To fix this problem, change SOAP calls to match those in the 6.5.0.1 or above version of the WSDL file.
If security is off, remove the sessionID line from the SOAP input
message.
If security is on, copy the sessionID from the SOAP body to the
SOAP header as a session ID now appears in the SOAP header instead of the body.
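As an illustration only (the envelope prefix below is standard SOAP 1.1, but the exact element names and message layout come from your generated WSDL, so treat this as a sketch), a secured call after the upgrade carries the session ID in the header rather than the body:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <sessionID>...</sessionID>
  </soap:Header>
  <soap:Body>
    ...
  </soap:Body>
</soap:Envelope>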
Note: When you install 6.5.0.1 or above, the installer automatically
upgrades the Web Services Adapter and your existing Web Services clients will function properly.

Sybase ASE bulk loader library on UNIX

The installed Sybase ASE client on UNIX platforms does not include the Sybase bulk loader dynamic shared library that Data Integrator requires. Please refer to the README in either the CDROM Location/unix/PLATFORM/AWSybase directory or the $LINK_DIR/AWSybase directory for instructions about how to:
Confirm that the library is available in the Sybase client installation
Build the library

Informix native client support

If you created an Informix repository and datastores with a previous Data Integrator release using DataDirect Technologies Connect ODBC driver, they are not compatible with Data Integrator 6.5.1 and above. You must recreate your repository, datastores, and jobs if you want to use the Informix native driver.

Statistics repository tables

The repository tables AL_FLOW_STAT and AL_JOB_STAT are no longer populated for job, work flow, or data flow statistics. These two tables no longer exist in newly created Data Integrator version 11.0.1 repositories, but they could still exist in repositories that are upgraded to version 11.0.1.
Beginning with Data Integrator version 11.0.1, all job, work flow, and data flow statistics are stored in the AL_HISTORY, AL_HISTORY_INFO, and AL_STATISTICS tables. In particular, job statistics, previously stored in AL_JOB_STAT, are now stored in the AL_HISTORY and AL_HISTORY_INFO tables. Work flow and data flow statistics, previously stored in AL_FLOW_STAT, are now stored in the AL_STATISTICS table. Use the metadata reporting tool to retrieve all job, work flow, and data flow statistics.
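If you prefer to inspect the new tables directly rather than through the metadata reporting tool, a minimal sketch follows; direct querying is not described in the manuals, and column layouts vary by repository version, so treat these statements as illustrative only:
SELECT * FROM AL_HISTORY;    -- job statistics (details in AL_HISTORY_INFO)
SELECT * FROM AL_STATISTICS; -- work flow and data flow statistics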

Teradata named pipe support

Beginning with Data Integrator version 11.0.2.5, you can use named pipes to bulk load data into Teradata databases. When Data Integrator tries to connect to the pipes, Teradata Warehouse Builder might not yet have created them. Data Integrator waits one second and retries the connection, for up to 30 seconds. If you need to change the 30-second wait time, in the Designer select Tools > Options > Job Server > General, enter al_engine for Section, enter NamedPipeWaitTime for Key, and enter a different value.
If you use Warehouse Builder and Teradata Tools and Utilities version 7.0 or 7.1, go to the Data Integrator Designer Tools > Options > Job Server > General window, enter al_engine for Section, enter PreTWB5_Syntax for Key, and set the value to TRUE.
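Conceptually, these two overrides are section/key/value entries that the Designer records for the Job Server. Rendered as a sketch (the file form shown here is an assumption, and 60 is only an example value; the section and key names are the ones given above):
[al_engine]
NamedPipeWaitTime=60
PreTWB5_Syntax=TRUE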

Documentation corrections

Please note the following last-minute corrections to the Data Integrator version 11.0.1 and later documents. These changes will be incorporated into a future release of the manual set. The numbers refer to Business Objects tracking numbers.

Help in HTML format

Note that the first chapter of several Data Integrator manuals provides product documentation access information. The Data Integrator manual set no longer includes information about context-sensitive online Help in HTML format. As of version 11.0.0.0, the context-sensitive Help feature is no longer supported.

“More Data Integrator product documentation” sections

In the first chapter of each book, in the “More Data Integrator product documentation” or “Business Objects information resources” sections, after the first three bullets, replace the rest of the text with the following:
After you install Data Integrator (with associated documentation), you can view the technical documentation from several locations. To view documentation in PDF format:
Click Start > Programs > BusinessObjects Data Integrator version >
Data Integrator Documentation and select:
Release Notes—Opens this document, which includes known and
fixed bugs, migration considerations, and last-minute documentation corrections
Release Summary—Opens the Release Summary PDF, which describes the latest Data Integrator features
Technical Manuals—Opens a “master” PDF document that has
been compiled so you can search across the Data Integrator documentation suite
Tutorial—Opens the Data Integrator Tutorial PDF, which you can
use for basic stand-alone training purposes
Select one of the following from the Designer’s Help menu:
Release Notes
Release Summary
Technical Manuals
Tutorial
Other links from this menu include:
DIZone—Opens a browser window to the DI Zone (an online resource for the Data Integrator user community)
Knowledge Base—Opens a browser window to Business Objects’ Technical Support Knowledge Exchange forum (access requires registration)
Select Help from the Data Integrator Administrator to open Technical Manuals.
To review Data Integrator documentation for previous releases (including Release Summaries and Release Notes), visit the Business Objects Customer Support page at http://www.techsupport.businessobjects.com/ (access requires registration), select the version from the top menus, and click Documentation.

Designer Guide

31445:
Chapter 14, Design and Debug “Defining audit points, rules, and action on failure” section “To define auditing in a data flow” procedure
Step 3 (Define audit rules) incorrectly refers to the source table editor. Therefore, replace the steps 3a and 3b with the following single step:
3. If you want to compare audit statistics for one object against another object, use the expression editor, which consists of three text boxes with drop-down lists.
a. Select the label of the first audit point in the first drop-down list.
b. Choose a Boolean operator from the second drop-down list. The options in the editor provide common Boolean operators. If you require a Boolean operator that is not in this list, use the Custom expression box with its function and smart editors to type in the operator.
c. Select the label for the second audit point from the third drop-down list. If you want to compare the first audit value to a constant instead of a second audit value, use the Custom expression box.
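For example (the audit label names here are hypothetical; the Designer generates labels from your own object names), a completed rule comparing a source count to a target count might read:
$Count_ORDERS = $Count_ORDERS_TARGET
and a Custom expression comparing the first audit value to a constant might read:
$Count_ORDERS > 0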
31274:
Chapter 19, Techniques for Capturing Changed Data “Configuring a (mainframe or SQL Server) CDC source” sections for
Oracle, mainframe (Attunity), and Microsoft SQL Server
For each CDC description (Oracle, Attunity, and Microsoft SQL Server), replace steps 2 and 3 with the following text:
2. Click the name of this source object to open its Source Table Editor.
3. Click the CDC Options tab.
The CDC Options tab in the Source Table Editor shows the following three options.
Note: You must specify a value for the first option, CDC Subscription
name.
Add the following note to the end of the description for the CDC subscription name option in the options table:
Note: This CDC Subscription name option is required.
30468:
Chapter 6, File Formats
“Creating COBOL copybook file formats” section, “To create a new COBOL copybook file format” and “To create a new COBOL copybook file format and a data file” procedures
After the Click OK step, add the following steps to both procedures:
1. The COBOL Copybook schema name(s) dialog box displays. If desired, select or double-click a schema name to rename it.
2. Click OK.
Add the following procedure to the end of the “Creating COBOL copybook file formats” section:
The Field ID tab allows you to create rules for identifying which records represent which schemas.
To create rules to identify which records represent which schemas
1. In the local object library, click the Formats tab, right-click COBOL
copybooks, and click Edit.
The Edit COBOL Copybook window opens.
2. In the top pane, select a field to represent the schema.
3. Click the Field ID tab.
4. On the Field ID tab, select the check box Use field <schema name.field
name> as ID.
5. Click Insert below to add an editable value to the Values list.
6. Type a value for the field.
7. Continue inserting values as necessary.
8. Select additional fields and insert values as necessary.
9. Click OK.
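As a sketch (this copybook and its values are hypothetical, not taken from the manual), suppose a copybook defines a record whose first two characters distinguish header records from detail records:
01  ORDER-RECORD.
    05  RECORD-TYPE    PIC XX.
    05  RECORD-BODY    PIC X(60).
You would select RECORD-TYPE in the top pane, select the check box Use field ORDER-RECORD.RECORD-TYPE as ID, and insert the values '01' and '02' to map each record type to its schema.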

Performance Optimization Guide

31696:
Chapter 4, Using Bulk Loading “Bulk loading in Teradata” section
Replace this entire section with the following text and subsections:
Data Integrator supports bulk loading with the Teradata Warehouse Builder application as well as with Teradata load utilities. For detailed information about Teradata bulk loader options and their behavior in the Teradata DBMS environment, see the relevant Teradata product documentation.
Data Integrator supports multiple bulk loading methods for the Teradata database. From the Bulk Loader Options tab of your Teradata target table editor, select one of these methods depending on your Teradata environment:
Warehouse Builder method
Load Utilities method
None (use ODBC to load Teradata)

When to use each Teradata bulk load method

Data Integrator supports multiple bulk loading methods for Teradata on Windows and UNIX. The following table lists the methods and file options that you can select, depending on your requirements.
Bulk loader method: Warehouse Builder
File option: File — Loads a large volume of data by writing to a data file that it passes to the Teradata server.
Advantages:
Can use Data Integrator parallel processing.
Data Integrator creates the loading script.
Restrictions:
The Teradata Server, Tools and Utilities must be Version 7.0 or later.
If you use TTU 7.0 or 7.1, see “Migration considerations” on page 27.

Bulk loader method: Warehouse Builder
File option: Generic named pipe — Loads a large volume of data by writing to a pipe from which Teradata reads.
Advantages:
Provides a fast way to bulk load because as soon as Data Integrator writes to a pipe, Teradata can read from the pipe; Data Integrator parallel processing can be used; and on Windows, no I/O to an intermediate data file occurs because a pipe is in memory.
Data Integrator creates the loading script.
Restrictions:
The Teradata Server, Tools and Utilities must be Version 7.0 or later.
If you use TTU 7.0 or 7.1, see “Migration considerations” on page 27.
A job that uses a generic pipe is not restartable.

Bulk loader method: Warehouse Builder
File option: Named pipe access module — Loads a large volume of data by writing to a pipe from which Teradata reads.
Advantages:
The job is restartable. For details, see “Automatically recovering jobs” in the Data Integrator Designer Guide.
Provides a fast way to bulk load because as soon as Data Integrator writes to a pipe, Teradata can read from the pipe; Data Integrator parallel processing can be used; and on Windows, no I/O to an intermediate data file occurs because a pipe is in memory.
Data Integrator creates the loading script.
Restrictions:
The Teradata Server, Tools and Utilities must be Version 7.0 or later.
If you use TTU 7.0 or 7.1, see “Migration considerations” on page 27.

Bulk loader method: Load utilities
File option: File — Loads data by writing to a data file that it passes to the Teradata server.
Advantages:
Load utilities are faster than INSERT statements through the ODBC driver.
Restrictions:
User must create the staging data file.
User must provide the loading script.
Cannot use Data Integrator parallel processing.

Bulk loader method: Load utilities
File option: Generic named pipe — Loads a large volume of data by writing to a pipe from which Teradata reads.
Advantages:
Load utilities are faster than INSERT statements through the ODBC driver.
Named pipes are faster than data files because as soon as Data Integrator writes to a pipe, Teradata can read from the pipe, and on Windows, no I/O to an intermediate data file occurs because a pipe is in memory.
Restrictions:
User must provide the loading script.
Cannot use Data Integrator parallel processing.
A job that uses a generic pipe is not restartable.

Bulk loader method: Load utilities
File option: Named pipe access module — Loads a large volume of data by writing to a pipe from which Teradata reads.
Advantages:
Load utilities are faster than INSERT statements through the ODBC driver.
Named pipes should be faster than data files because as soon as Data Integrator writes to a pipe, Teradata can read from the pipe, and on Windows, no I/O to an intermediate data file occurs because a pipe is in memory.
The job is restartable. For details, see “Automatically recovering jobs” in the Data Integrator Designer Guide.
Restrictions:
User must provide the loading script.
Cannot use Data Integrator parallel processing.

Bulk loader method: None (use ODBC)
File option: Not applicable. Uses the Teradata ODBC driver to send separate SQL INSERT statements to load data.
Advantages:
INSERT statements through the ODBC driver are simpler to use than a data file or pipe.
Restrictions:
This method does not bulk load data.

How Data Integrator and Teradata use the file options to load

For both Warehouse Builder and Load Utilities methods, you can choose to use either named pipes or staging data files. Choose from the following file options:
Data file
Generic named pipe
Named pipe access module
Data file
Data Integrator runs bulk loading jobs using a staging data file as follows:
1. Data Integrator generates staging data file(s) containing data to be
loaded into a Teradata table.
2. Data Integrator generates a loading script to be used by Warehouse
Builder. The script defines read and load operators.
Note: If you use non-Warehouse Builder load utilities, you must provide the loading script.
3. If you use Teradata Warehouse Builder, the read operator reads the
staging data file, then passes the data to the load operator, which loads data into the Teradata table.
Generic named pipe
Data Integrator runs bulk loading jobs using a generic named pipe as follows:
1. Data Integrator generates a script that Teradata Warehouse Builder uses to load the database.
Note: If you use non-Warehouse Builder load utilities, you must provide
the loading script.
2. Data Integrator creates a pipe to contain the data to load into a Teradata table.
On UNIX, the pipe is a FIFO (first in, first out) file that has a name of this format:
/temp/filename.dat
On Windows, the name has this format:
\\.\pipe\datastorename_ownername_tablename_loadernum.dat
3. Data Integrator executes the loading script. If you use Teradata
Warehouse Builder, the script starts Teradata Warehouse Builder and defines read and load operators. If you use one of the non-Warehouse Builder load utilities, you provide the loading script.
4. Data Integrator writes data to the pipes.
5. Teradata Warehouse Builder connects to the pipes. Then the read
operator reads the named pipe and passes the data to the load operator, which loads the data into the Teradata table.
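As a concrete illustration of the Windows pipe-name format (the datastore, owner, and table names here are hypothetical), loader 1 of table SALES in datastore TD_DS with owner dbo would write to:
\\.\pipe\TD_DS_dbo_SALES_1.dat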
Named pipe access module
Data Integrator runs bulk loading jobs using a named pipe access module as follows:
1. Data Integrator generates a script that Teradata Warehouse Builder uses to load the database. The script starts Teradata Warehouse Builder and defines read and load operators.
Note: If you use non-Warehouse Builder load utilities, you must provide
the loading script.
2. Teradata (Warehouse Builder or a non-Warehouse Builder utility) creates named pipes to contain the data to load into a Teradata table.
On UNIX, the pipe is a FIFO (first in, first out) file that has a name of this format:
/temp/filename.dat
On Windows, the name has this format:
\\.\pipe\datastorename_ownername_tablename_loadernum.dat
3. Data Integrator connects to the pipes and writes data to them.
When Data Integrator tries to connect to the pipes, Teradata Warehouse Builder might not yet have created them. Data Integrator waits one second and retries the connection, for up to 30 seconds. If you need to change the 30-second wait time, see the “Migration considerations” section of the Release Notes.
4. The Teradata Warehouse Builder read operator reads the named pipe
and passes the data to the load operator, which loads the data into the Teradata table.

Warehouse Builder method

Data Integrator supports Teradata’s Warehouse Builder, an ETL tool that consolidates bulk loading utilities into a single interface.
When you use the Warehouse Builder method, you can leverage Data Integrator’s powerful parallel processing capabilities to specify a maximum number of files or pipes for Data Integrator to use in processing large quantities of data. For details, see “Parallel processing with Teradata Warehouse Builder” on page 48.
With Teradata Warehouse Builder, you can choose from four types of load operators. Each operator processes your data in a slightly different way.
Operator: Load
Loads a large amount of data at high speed into an empty table on the Teradata RDBMS. Use this operator when initially loading tables in the data warehouse.

Operator: SQL Inserter
Inserts data into a specified existing table on the Teradata RDBMS. A single SQL session can insert data, while other operators require multiple sessions. Because it uses the fewest RDBMS resources, this is the least intrusive and slowest method of loading data.

Operator: Stream
Allows parallel inserts, updates, and deletes to new or pre-existing Teradata tables. Uses multiple sessions to load data into one or more new or existing tables. Use this operator to maintain tables in the Teradata RDBMS when the system is busy and can provide minimal resources. Unlike Load and Update, which assemble large volumes of rows into blocks and then move those blocks to the Teradata RDBMS, Stream loads data one row at a time.

Operator: Update
Allows highly scalable parallel inserts, updates, and deletes of vast amounts of data to new or existing Teradata tables. Use this operator to maintain tables in the data warehouse.

Attributes are different for each operator selection. Teradata Warehouse Builder predefines attribute default values. Note that:
You can select and modify attribute values.
Some attribute values are optional and some are required. You must specify a value for each attribute name shown in bold.
Only the Stream and Update operators provide the “Ignore duplicate rows” option.
With Teradata Warehouse Builder, you can choose to accept the defaults for the following tbuild options:
Log directory
Debug all tasks
Trace all tasks
Latency interval (sec)
Checkpoint interval (sec)
Parallel processing with Teradata Warehouse Builder
Data Integrator provides the option for parallel processing when you bulk load data using the Warehouse Builder method.
Using a combination of choices from the Options tab and Bulk Loader Options tab, you specify the number of data files or named pipes as well as the number of Read and Load Operator Instances. The Number of Loaders option distributes the workload while Read/Load Operators perform parallel processing.
In the target table Options tab, specify the Number Of Loaders to control the number of data files or named pipes that Data Integrator or Warehouse Builder generates. Data Integrator writes data to these files in batches of 999 rows. For example, if you set Number Of Loaders to 2, Data Integrator would generate two data files, writing 999 rows to the first file, then writing the next 999 rows to the second file. If there are more rows to process, Data Integrator continues, writing to the first file again, then the second, and so forth.
In the Bulk Loader Options tab, specify the number of Read and Load Operator Instances in the loading scripts. If you set Read Operator Instances to 2 and Load Operator Instances to 2, Warehouse Builder will assign the first read operator instance to read one data file and the other instance to read another data file in parallel. The read operator instances then pass the data to the load operator instances for parallel loading into Teradata.
The Warehouse Builder uses a control file to read staging files or pipes and load data.
Note: Product performance during this type of parallel loading depends on a number of factors such as distribution of incoming data and underlying DBMS capabilities. Under some circumstances, specifying parallel loaders can be detrimental to performance. Always test the parallel loading process before moving to production.
To use Warehouse Builder bulk loading with Data Integrator parallel
processing
1. In your target table Options tab, specify the Number of loaders to control
the number of data files or named pipes. Data Integrator will write data to these files in batches of 999 rows.
2. In the Bulk Loader Options tab, choose Warehouse Builder as your bulk
loader.
3. In File Option, choose the type of file (Data File, Generic named pipe, or Named pipe access module) to contain the data to bulk load.
4. If you chose Data File or Generic named pipe in File Option, specify the number of read and load instances in the loading scripts.
If you set Number of instances to 2 (load operators) and Number of DataConnector instances to 2 (read operators), Warehouse Builder will assign the first read operator instance to read one data file and the other instance to read another data file in parallel. The read operator instances then pass the data to the load operator instances for parallel loading into Teradata.
Note: If you chose Data File, the value you specify for DataConnector
instances (read operators) should be less than or equal to the number of
data files.
5. If you chose Named Pipe Access Module in File Option, specify
Number of instances (load operators) in the loading scripts.
Teradata uses the value you specify in Number of loaders to determine the number of read operator instances, as well as the number of named pipes. The DataConnector instances option is not applicable when you use Named Pipe Access Module.
For example, if you set Number of loaders to 2, Warehouse Builder generates two named pipes and assigns one read operator instance to read from one pipe and the other instance to read the other pipe in parallel. If you set Number of instances to 2 (load operators), the read operator instances pass the data to the load operator instances for parallel loading into Teradata.
6. If you specified Named pipe access module in File Option, you can
override the default settings for the following Teradata Access Module parameters:
Log Directory
Log Level
Block Size
Fallback Data File Name
Fallback Data Directory Path
Fallback Data File Deletion
Signature Checking
The Teradata Access Module creates a log file to record the load status and writes information to fallback data files. If the job fails, the Teradata Access Module uses the fallback data files to restart the load. The Access Module log file differs from the tbuild log that you specify in the
Log directory option.
Note: Data Integrator sets the bulk loader directory as the default value
for both Log Directory and Fallback Data Directory Path. For more information about these parameters, see the relevant Teradata
tools and utilities documentation.

Load Utilities method

In addition to the Warehouse Builder interface, Data Integrator supports several Teradata utilities that load to and extract from the Teradata database. Each load utility is a separate executable designed to move data into a Teradata database. Choose from the following bulk loader utilities:

Utility: MultiLoad
Loads large quantities of data into populated tables. MultiLoad also supports bulk inserts, updates, and deletions against populated tables.

Utility: FastLoad
Loads unpopulated tables only. Both the client and server environments support FastLoad. Provides a high-performance load (inserts only) to one empty table each session.

Utility: TPump
Uses standard SQL/DML to maintain data in tables. It also contains a method that you can use to specify the percentage of system resources necessary for operations on tables. Allows background maintenance for insert, delete, and update operations to take place at any time you specify. Used with small data volumes.

Note: To use these utilities, you must write your own scripts for loading the data; however, Data Integrator generates your data file or writes data to the named pipe.

To load using a Teradata non-Warehouse Builder load utility
1. In the Bulk Loader Options tab of your target table editor, choose a Load Utility method.
2. In File Option, choose the type of file (Data File, Generic named pipe, or Named pipe access module) to contain the data to bulk load.
3. Enter a command to be invoked by Data Integrator in the Command line text box. For example:
fastload < C:\tera_script\float.ctl
4. If you chose Data File in File Option, enter (or browse to) the directory path where you want Data Integrator to place your data file.
5. If you chose Generic named pipe or Named pipe access module in File Option, enter the pipe name.

Reference Guide

31147:
Chapter 2, Data Integrator Objects “Batch job” entry
Replace the table in the Trace properties section with this table:
Trace Description
Row: Writes a message when a transform imports or exports a row.
Session: Writes a message when the job description is read from the repository, when the job is optimized, and when the job runs.
Work Flow: Writes a message when the work flow description is read from the repository, when the work flow is optimized, when the work flow runs, and when the work flow ends.
Data Flow: Writes a message when the data flow starts, when the data flow successfully finishes, or when the data flow terminates due to error. This trace also reports when the bulk loader starts, any bulk loader warnings occur, and when the bulk loader successfully completes.
Transform: Writes a message when a transform starts, completes, or terminates.
Custom Transform: Writes a message when a custom transform starts and completes successfully.
Custom Function: Writes a message for all user invocations of the AE_LogMessage function from custom C code.
SQL Functions: Writes data retrieved before SQL functions:
Every row retrieved by the named query before the SQL is submitted in the key_generation function.
Every row retrieved by the named query before the SQL is submitted in the lookup function (but only if PRE_LOAD_CACHE is not specified).
When mail is sent using the mail_to function.
SQL Transforms: Writes a message (using the Table_Comparison transform) about whether a row exists in the target table that corresponds to an input row from the source table. The trace message occurs before submitting the query against the target and for every row retrieved when the named query is submitted (but only if caching is not turned on).
SQL Readers: Writes the SQL query block that a script, query transform, or SQL function submits to the system. Also writes the SQL results.
SQL Loaders: Writes a message when the bulk loader:
Starts
Submits a warning message
Completes successfully
Completes unsuccessfully, if the Clean up bulk loader directory after load option is selected
Additionally, for Microsoft SQL Server and Sybase ASE, writes when the SQL Server bulk loader:
Completes a successful row submission
Encounters an error
This trace reports all SQL that Data Integrator submits to the target database, including:
When a truncate table command executes if the Delete data from table before loading option is selected
Any parameters included in PRE-LOAD SQL commands
Before a batch of SQL statements is submitted
When a template table is created (and also dropped, if the Drop/Create option is turned on)
When a delete from table command executes if auto correct is turned on (Informix environment only)
This trace also writes all rows that Data Integrator loads into the target.
Memory Source: Writes a message for every row retrieved from the memory table.
Memory Target: Writes a message for every row inserted into the memory table.
Optimized Dataflow: For Business Objects consulting and technical support use.
Tables: Writes a message when a table is created or dropped. The message indicates the datastore to which the created table belongs and the SQL statement used to create the table.
Scripts and Script Functions: Writes a message when Data Integrator runs a script or invokes a script function. Specifically, this trace writes a message when:
The script is called. Scripts can be started at any level from the job level down to the data flow level. Additional (and separate) notation is made when a script is called from within another script.
A function is called by the script.
The script successfully completes.
Trace Parallel Execution: Writes messages describing how data in a data flow is processed in parallel.
Access Server Communication: Writes messages exchanged between the Access Server and a service provider, including:
The registration message, which tells the Access Server that the service provider is ready
The request the Access Server sends to the service to execute
The response from the service to the Access Server
Any request from the Access Server to shut down
Stored Procedure: Writes a message when Data Integrator invokes a stored procedure. This trace reports:
When the stored procedure starts
The SQL query submitted for the stored procedure call
The value (or values) of the input parameter (or parameters)
The value (or values) of the output parameter (or parameters)
The return value (if the stored procedure is a stored function)
When the stored procedure finishes
Audit Data: Writes a message when auditing:
Collects a statistic at an audit point
Determines if an audit rule passes or fails
Note: For descriptions of the traces for ABAP Query, IDoc File Reader, RFC Function, and SAP Table Reader, see the Data Integrator Supplement for SAP.
30468:
Chapter 2, Data Integrator Objects
“COBOL Copybook file format” entry, “Import or Edit COBOL copybook format options” section
After the Data Access tab description, add the following section, which describes the new Field ID tab:
Field ID
The Field ID tab allows you to create rules for identifying which records represent which schemas.
Table 1-1: Field ID tab
Field ID option Description
Use field <FIELD NAME> as ID: Select to set a value for the field selected in the top pane. Clear to not set a value for that field.
Edit: Changes the selected value in the Values pane to editable text.
Delete: Deletes the selected value in the Values pane.
Insert above: Inserts a new value in the Values pane above the selected value.
Insert below: Inserts a new value in the Values pane below the selected value.
Chapter 2, Data Integrator Objects “Datastore” entry “R/3” options table
Add the following text to the end of the description of the R/3 options table in the Datastore Editor:
You can enter a customized R/3 language in this option. For example, you can type an S for Spanish or an I for Italian.
Chapter 2, Data Integrator Objects “Source” entry “Table source” heading
For the second paragraph after the options table, add Microsoft SQL Server as follows:
In addition to the options shown previously, if the tables use a CDC datastore, click the CDC Options tab and complete the following to set the Oracle, mainframe, and Microsoft SQL Server changed-data capture options.
In the table following that paragraph, add the following note to the end of the description for the CDC subscription name option:
Note: You must specify a value for CDC Subscription name.
Chapter 2, Data Integrator Objects “Target” entry “Target tables” heading “Table 2-27 Informix target table options”
The second paragraph in the description for the Generate files only option (for an Informix target table) indicates the incorrect default bulk loader directory. The correct directory is $LINK_DIR\log\bulkloader.
31696:
Chapter 2, Data Integrator Objects “Target” entry “Target tables” section
Replace the table “Teradata target table options” with the following table.
Note: The hyperlinks in this table are not functional in this document.
Table 1-2: Teradata target table options

Tab: Target
This tab lists the table name and other read-only values from the associated datastore. It also contains the option to specify the table as an embedded data flow port (see “Make port” on page 143) and the Database type option (see “Database type” on page 144).
Table name: Read-only.
Database owner: Read-only.
Datastore name: Read-only.
Database type: Read-only.
Make port: See “Make port” on page 143.

Tab: Options
Column comparison: Choose from comparing by name or comparing by position. See “Column comparison” on page 145.
Delete data from table before loading: See “Delete data from table before loading” on page 145.
Number Of Loaders: Number of data files or named pipes for parallel loading. See “Number of loaders” on page 146.
Use overflow file: See “Use overflow file” on page 146.
Overflow file name: The full path is determined by the datastore overflow directory setting. See “Use overflow file” on page 146.
Overflow file format: See “Use overflow file” on page 146.
Ignore columns with value: See “Ignore columns with value” on page 146.
Ignore columns with NULL: See “Ignore columns with null” on page 147.
Use input keys: See “Use input keys” on page 147.
Auto correct load: See “Auto correct load” on page 148.
Include in transaction: See “Include in transaction” on page 149.
Transaction order: See “Include in transaction” on page 149.

Tab: Bulk Loader Options
Bulk loader: Select a method to load data. Select from:
Warehouse Builder
Load Utilities (non-Teradata Warehouse Builder methods)
None
Note that your remaining bulk loader options will vary depending on the method you choose.

If you choose Warehouse Builder, your options include:
File Option: Choose the type of file to contain the data to bulk load. Select from:
Data File
Generic Named Pipe
Named Pipe Access Module
Operators: Warehouse Builder operator values include:
Load
SQL Inserter
Stream
Update
Attribute values: A different set of attributes displays depending on the Warehouse Builder method and operator you choose. Attribute names in bold indicate that the value cannot be left blank and requires a value. You can accept default values or select attribute values and modify them. For more information on specific attributes, consult your Teradata Warehouse Builder documentation.
Number of instances (load operators): Available only with the Warehouse Builder method. Specify the number of instances for the load operator. This information is included in the Warehouse Builder script that Data Integrator generates.
Number of DataConnector instances (read operators): Available only with the Warehouse Builder method when File Option is Data File. Specify the number of DataConnector instances for the read operator to read data files generated by Data Integrator. This information is included in the Warehouse Builder script that Data Integrator generates.
Note: When you choose Data File, the value you specify for DataConnector instances (read operators) should be less than or equal to the number of data files.
Log directory: Available only with the Warehouse Builder method. Specify the tbuild log directory. As a default, Data Integrator automatically populates this field with information provided when the Teradata datastore was created.
Ignore duplicate rows: Available only with the Stream or Update Warehouse Builder operators. Corresponds to the “Ignore Duplicate Insert Rows” Warehouse Builder statement. When selected, duplicate rows are not placed in the error table. See the Teradata Warehouse Builder documentation for more information.
Debug all tasks: Available only with the Warehouse Builder method. Corresponds to the tbuild “-d” option. For more information, consult your Teradata Warehouse Builder documentation.
Trace all tasks: Available only with the Warehouse Builder method. Corresponds to the tbuild “-t” option. For more information, consult your Teradata Warehouse Builder documentation.
Latency interval (sec): Available only with the Warehouse Builder method. Corresponds to the tbuild -l option. For more information, consult your Teradata Warehouse Builder documentation.
Checkpoint interval (sec): Available only with the Warehouse Builder method. Corresponds to the tbuild -z option and specifies a time interval, in seconds, between checkpoints. The Data Integrator default for this parameter is 10 seconds. For more information, consult your Teradata Warehouse Builder documentation.
Named pipe parameters: Available only with the Warehouse Builder Named pipes access module file option. You can override the default settings for the following Teradata Access Module parameters:
Log Directory
Log Level
Block Size
Fallback Data File Name
Fallback Data Directory Path
Fallback Data File Deletion
Signature Checking
The Teradata Access Module creates a log file to record the load status. The Access Module log file differs from the tbuild log that you specify in the Log directory option. The Teradata Access Module writes information to fallback data files. If the job fails, the Teradata Access Module uses its log file and fallback data files to restart the load.
Note: Data Integrator sets the bulk loader directory as the default value for both Log Directory and Fallback Data Directory Path.
For more information about these Access Module parameters, see the relevant Teradata tools and utilities documentation.

If you choose Load Utilities, your File Option values include:
Data file
Generic named pipe
Named pipe access module
Command line: Load Utilities include:
Mload
FastLoad
TPump
You can enter the command line for the specified utility. For example, for FastLoad you could enter:
fastload <C:\Teradata\FLScripts\myScript.ctl>
Named pipe name: For a Load Utility, if you choose either the Named Pipes Access Module or Generic Named Pipes file option, enter the pipe name.
On UNIX, the pipe is a FIFO (first in, first out) file that has a name of this format:
/temp/filename.dat
On Windows, the pipe name has this format:
\\.\pipe\datastorename_ownername_tablename_loadernum.dat

Regardless of which bulk load method you choose, Teradata options also include:
Generate files only: When selected, Data Integrator generates a data file and a script file, and ignores the Number of loaders option (in the Options tab). Rather than loading data into the target shown in the data flow, Data Integrator generates a control file and a data file that you can later load using Teradata bulk loading. This option is often useful when the target database is located on a system running a different operating system than the Data Integrator Job Server.
Data Integrator writes the data and control files in the bulk loader directory (default value is $LINK_DIR\log\bulkloader) specified in the datastore definition. You must copy the files to the remote system manually. The Data Integrator naming convention for these files is:
<DatastoreName_OwnerName_TableName_n>.ctl and
<DatastoreName_OwnerName_TableName_n>.dat
where:
OwnerName is the table owner
TableName is the target table
n is a positive integer, optionally used to guarantee that Data Integrator does not overwrite a pre-existing file
Clean up bulk loader directory after load: Select this check box and Data Integrator deletes all bulk loader-related files (script, data files, temporary file) after the load is complete. If an error occurs during bulk load, Data Integrator does not delete script and data files. Errors usually occur when:
There is a syntax error in the script.
Error tables are not empty. Error tables contain rows that cannot be inserted into the target table due to data conversion or constraint violation.
Mode: Specify the mode for loading data in the target table:
append — Adds new records to the table
replace — Deletes all existing records in the table, then inserts the loaded data as new records
NOTE: Regardless of which bulk loading method you choose:
In append mode, Data Integrator will not truncate the target table.
In replace mode, Data Integrator will explicitly truncate the target table prior to executing the Teradata load script. Data Integrator truncates the table by issuing a “truncate table” SQL statement.
When you supply custom loading scripts (FastLoad, TPump, or MultiLoad), Data Integrator will not parse or modify those scripts.
Field delimiter: Specify a single-character field delimiter. Default value is /127 (non-printable character).

Tab: Load Triggers
Load Triggers: See “Load triggers” on page 150.
Pre Load Commands: See “Pre Load Commands” on page 152.
Post Load Commands: See “Pre Load Commands” on page 152.
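To make the replace-mode behavior in the Mode row concrete (the target table name here is hypothetical), the statement Data Integrator issues before running the Teradata load script has this form:
truncate table SALES_FACT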
31297:
Chapter 2, Data Integrator Objects “Template table” entry Change the two references to the option name Drop Create Table to Drop
and re-create table.
24406:
Chapter 5, Transforms “Case” entry
Delete the note:
Note: To enable View Data for a case transform, you must change the name
from Case to another name.
31317:
Chapter 5, Transforms “Case” entry
For the Case description, change the two references to the option name
Produce output when all expressions are false to Produce default output when all expressions are false.
31446:
Chapter 5, Transforms “Validation” entry
In the To create a validation rule procedure, step 3 (Enable the validation rule) should be first because you cannot perform steps 1b and 2 without enabling validation first. Therefore, replace the steps in that procedure with the following steps:
1. Enable the validation rule.
a. Select an input column in the Validation transform editor.
b. Select the Enable validation option.
When you enable a validation rule for a column, a check mark appears next to the column in the input schema.
2. Define a condition.
All conditions must be Boolean expressions. The first five options in the editor provide common types of Boolean expression templates. You can also use the Custom condition box with its attached function and smart editors.
3. Define an action on failure.
Choose one of three actions on failure:
Send row to Fail output
Send row to Pass output
Send row to both Pass and Fail outputs
Chapter 6, Functions and Procedures “is_valid_decimal” entry
Add the following explanation and example to the description of the decimal_format parameter of the is_valid_decimal function:
To specify a negative number, use a negative sign. For example, to test whether the stock price difference can be converted to decimal format, use the following function:
is_valid_decimal (Stocks.Price_difference, '-###.##')
28497:
Chapter 6, Functions and Procedures “lookup” entry
Add the following note to the lookup function section after the first paragraph:
Note: You cannot use this function with a J. D. Edwards datastore. Use the
lookup_ext function instead.
31259:
Chapter 6, Functions and Procedures “min” entry
Change the last phrase (at the end of the second bullet) from “...Data Integrator calculates the maximum salary” to “...Data Integrator calculates the minimum salary.”
31310:
Chapter 6, Functions and Procedures “to_char” entry
Change two references of “HR24” to “HH24.”
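For context (this call is illustrative only; it assumes the Data Integrator sysdate function as used elsewhere in the manual set), HH24 is the token for hours on a 24-hour clock:
to_char(sysdate(), 'YYYY.MM.DD HH24:MI:SS')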

Supplement for J.D. Edwards

28497: Chapter 1, J.D. Edwards interface “Functions” section
Add the following note to the end of the Functions section:
Note: You cannot use the lookup function with a J. D. Edwards datastore.
Use the lookup_ext function instead.

Supplement for SAP

28775: Chapter 3, SAP Datastores “Defining SAP R/3 datastores” section
Replace steps 5, 6, and 7 in the procedure “To define an SAP R/3 datastore” with the following steps and new subsection:
5. Enter the appropriate R/3 application server, User name, and Password information.
As a development option, you can connect to an application server from your GUI through a routing string, using the specified syntax in the R/3 application server field to ensure the connection.
The syntax for an SAP routing string is:
/H/IP address of local SAP router/H/IP address of target SAP router/H/IP address of target application server
For example, if your SAP routing string (local and target) is /H/10.10.1.7/H/204.79.199.5 and your application server IP address is 147.204.76.41, your routing string would be:
/H/10.10.1.7/H/204.79.199.5/H/147.204.76.41
6. You can either save the datastore or add more information to it:
To save the datastore and close the Datastore Editor, click OK.
To add more information, click Advanced.
To enter values for each configuration option, click the cells under each configuration name. See “Typical Advanced options to change” on page 2.
See “Datastore” on page 54 of the Data Integrator Reference Guide for a description of the options in the grid for different databases.
7. Click OK to save the datastore.
Typical Advanced options to change
This section describes some Advanced options that you might want to change when you first create your SAP R/3 datastore.
R/3 language—If you want to choose a language that is not listed in the drop-down list for R/3 language, type the letter defined in your R/3 application. For example, you can type an S for Spanish or an I for Italian.
For initial development, use the execution and transfer method options listed in the following table.

ABAP execution option: Generate and Execute
The ABAP created by the Data Integrator job resides on the same computer as the Data Integrator Job Server. It is submitted to R/3 using SAP’s RFC_ABAP_INSTALL_AND_RUN.

Data transfer method: Shared directory
Both Data Integrator and SAP R/3 have direct access to the directory where data is stored. This method works best when both systems are on Windows NT and security is not an issue.

Working directory on SAP server: A directory on the SAP R/3 application server where Data Integrator can write intermediate files. For example:
C:\temp\bodi\ds_development

Data Integrator path to the shared directory: The path to the working directory as accessible through the computer on which the Data Integrator Job Server is running. This path may indicate a mapped network drive. For example:
V:\temp\bodi\ds_development

Generated ABAP directory: A directory into which Data Integrator writes ABAP files. The path is relative to the Data Integrator Job Server. For example:
C:\bodi\ABAP_programs
31319:
Chapter 8, Executing Batch Jobs that Contain R/3 Data Flows “Data transport methods” section
Change a total of six references of “Data Server” to “Job Server” (four in the graphics and two in the “Shared directory requirements” section).


Release Summary

Delete the section “SAP Unicode.”
Business Objects information resources
Business Objects offers a full documentation set covering its products and their deployment. Additional support and services are also available to help maximize the return on your business intelligence investment. The following sections detail where to get Business Objects documentation and how to use the resources at Business Objects to meet your needs for technical support, education, and consulting.

Documentation

View or download the Business Objects Product Document Map, available with the product documentation at http://support.businessobjects.com/documentation/.
The Product Document Map references all Business Objects guides and lets you see at a glance what information is available, from where, and in what format.
You can access electronic documentation at any time from the product interface, the web, or from your product CD.

Documentation on the web

The full electronic documentation set is available to customers on the web from the Customer Support web site at: http://support.businessobjects.com/documentation/.

Documentation on the product CD

Look in the Doc directory of your product CD or installation directory for guides in Adobe Acrobat PDF format.

Send us your feedback

Do you have a suggestion on how we can improve our documentation? Is there something you particularly like or have found useful? Drop us a line, and we will do our best to ensure that your suggestion is included in the next release of our documentation: documentation@businessobjects.com.
Note: If your issue concerns a Business Objects product and not the documentation, please contact our Customer Support experts at http://support.businessobjects.com.

Customer support, consulting, and training

A global network of Business Objects technology experts provides customer support, education, and consulting to ensure maximum business intelligence benefit to your business.

Data Integrator (DI) Zone

DI Zone provides technical information such as white papers, tips and tricks, and listings of training opportunities and other events. This site is available to all Data Integrator users.
http://www.businessobjects.com/products/dev_zone/di/default.asp

Customer support centers

Business Objects offers customer support plans to best suit the size and requirements of your deployment. We operate customer support centers in the following countries:
USA
Australia
Canada
United Kingdom
Japan

Online Customer Support

The Business Objects Customer Support web site contains information about Customer Support programs and services. It also has links to a wide range of technical information including Knowledge Base articles, downloads, and support forums.
http://support.businessobjects.com/

Consultation

Business Objects consultants can accompany you from the initial analysis stage to the delivery of your deployment project. Expertise is available in relational and multidimensional databases, connectivity, database design tools, customized embedding technology, and more.
For more information, contact your local sales office, or contact us at:
http://www.businessobjects.com/services/consulting/

Training

From traditional classroom learning to targeted e-learning seminars, we can offer a training package to suit your learning needs and preferred learning style. Find more information on the Business Objects Education web site:
http://www.businessobjects.com/services/training

Useful addresses at a glance

Business Objects product information
http://www.businessobjects.com
Information about the full range of Business Objects products.

Product documentation
http://support.businessobjects.com/documentation
Business Objects product documentation, including the Business Objects Product Document Map.

Business Objects Documentation mailbox
documentation@businessobjects.com
Send us feedback or questions about documentation.

Online Customer Support
http://support.businessobjects.com
Information on Customer Support programs, as well as links to technical articles, downloads, and online forums.

Business Objects Consulting Services
http://www.businessobjects.com/services/consulting/
Information on how Business Objects can help maximize your business intelligence investment.

Business Objects Education Services
http://www.businessobjects.com/services/training
Information on Business Objects training options and modules.

Copyright information

SNMP copyright information

Portions Copyright 1989, 1991, 1992 by Carnegie Mellon University
Portions Derivative Work - 1996, 1998-2000
Portions Copyright 1996, 1998-2000 The Regents of the University of California
All Rights Reserved

Permission to use, copy, modify and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appears in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of CMU and The Regents of the University of California not be used in advertising or publicity pertaining to distribution of the software without specific written permission.
CMU AND THE REGENTS OF THE UNIVERSITY OF CALIFORNIA DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL CMU OR THE REGENTS OF THE UNIVERSITY OF CALIFORNIA BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM THE LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Portions Copyright (c) 2001, Networks Associates Technology, Inc. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Neither the name of the NAI Labs nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Portions of this code are copyright (c) 2001, Cambridge Broadband Ltd. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
The name of Cambridge Broadband Ltd. may not be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Specifications subject to change without notice. Not responsible for errors or omissions.

IBM ICU copyright information

Copyright (c) 1995-2003 International Business Machines Corporation and others
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, provided that the above copyright notice(s) and this permission notice appear in all copies of the Software and that both the above copyright notice(s) and this permission notice appear in supporting documentation.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS NOTICE BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Except as contained in this notice, the name of a copyright holder shall not be used in advertising or otherwise to promote the sale, use or other dealings in this Software without prior written authorization of the copyright holder.