Business Objects products in this release may contain redistributions of software
licensed from third-party contributors. Some of these individual components may
also be available under alternative licenses. A partial listing of third-party
contributors that have requested or permitted acknowledgments, as well as required
notices, can be found at: http://www.businessobjects.com/thirdparty
About these Release Notes
Welcome to BusinessObjects Data Integrator XI Release 2 (XI R2)
Accelerated version 11.7.3.0. This version is a functional release that includes
the following new features:
•HP Neoview database support
•Inxight integration for reading unstructured text
•Job server enhancements that now allow up to 50 Designer clients with
no compromise in response time
See the Documentation updates section for descriptions of how to use these
new features.
Please read this entire document before installing your Business Objects
software. It contains important information about this product release including
installation notes, details regarding the latest resolved and known issues,
and important information for existing customers.
To obtain the latest version of Data Integrator documentation including the
most up-to-date version of these Release Notes, visit the Customer Support
documentation download site (http://support.businessobjects.com/documentation/)
and follow the appropriate product guide links.
Supported platforms and versions
Compatibility update

This section describes changes in compatibility between Data Integrator and
other applications and summarizes limitations associated with this version
of Data Integrator. It covers:
• Supported products
• Unsupported products
For complete compatibility and availability details, see the Supported
Platforms documentation on the Business Objects Customer Assurance Web
site at:
http://support.businessobjects.com/documentation
Supported products
Data Integrator now supports the following products as of this release.
•SAP BI 7
•Data Quality 11.7 and 11.7.1
•Microsoft Internet Explorer 7
•Sybase ASE 12.5 on Solaris and AIX
•HP Neoview 2.2 native support on Windows and Linux
•IBM DB2 9.1
•Netezza 3.1.4
•Oracle E-Business Suite Release 12
•Sybase IQ 12.7 (source and target)
•Teradata V2R6.2 on Windows and Linux
•Impact analysis will interoperate with BusinessObjects XI R2 SP3
universes, Web Intelligence documents, Desktop Intelligence documents,
and Business Views
•Citrix Presentation Server version 4.0. For more information, see
documentation for Citrix support on the Business Objects developer
community Web site at http://diamond.businessobjects.com/EIM/DataIntegrator.
•BusinessObjects Text Analysis for Data Integrator version 11.5. Check
the Business Objects Customer Assurance Web site at
http://support.businessobjects.com/documentation/ for the availability of
BusinessObjects Text Analysis for Data Integrator version 11.5.
Data Integrator also supports the following products as of version 11.7.
•Support is available for the following platforms and configurations:
•DataDirect ODBC Driver version 5.2
•Data Federator XI R2 version 11.5
•Data Federator XI R2 Accelerated versions 11.6.2 and 11.7.0
•Salesforce.com AppExchange API for Enterprise version 7.0
•.NET support for Web services
•SAP ECC 6.0 via ABAP, BAPI and IDOC
Please note that this support is only available on the latest versions
of Rapid Marts. Check your Rapid Marts documentation to get the
latest support information for that product.
•Excel version 97, XP, 2000 and 2003 as a source (please review the
Limitations section for Excel support on UNIX)
•Impact analysis interoperates with:
•BusinessObjects universes 6.5.1, XI, and XI R2 SP2
•Web Intelligence documents versions 6.5.1, XI, and XI R2 SP2
•Desktop Intelligence documents versions 6.5.1, XI, and XI R2 SP2
•Business Views XI and XI R2 SP2
•The scheduling functionality is supported with BusinessObjects version
XI R2 or later versions of the BusinessObjects Enterprise scheduler.
•The following Universe Builder versions are compatible with Data
Integrator 11.7:
•The stand-alone Universe Builder version 11.5 bundled with Data
Integrator 11.7. This stand-alone version of Universe Builder will
interoperate with Business Objects version 6.5.1 and XI. Find Universe
Builder documentation in the "Building Universes from Data Integrator
metadata Sources" chapter of the Universe Designer Guide, which
you can download from the Business Objects Customer Assurance
Web site at http://support.businessobjects.com/documentation/. Select
BusinessObjects Enterprise as the product and Universe Designer
Guide as the document.
•The Universe Builder version bundled with BusinessObjects Enterprise
XI R2, XI R2 SP1, XI R2 SP2, and XI R2 SP3 (see the required
BusinessObjects Enterprise XI R2 SP1 CHF-13 patch information in
the Resolved issues section of this document for problem report
ADAPT00534412).
•ODBC to generic database support has been validated with the following
database servers and ODBC drivers. For complete compatibility details,
see the Business Objects Supported Platforms documentation on the
Business Objects Customer Assurance Web site at:
http://support.businessobjects.com/documentation
•Microsoft SQL Server 2000 via DataDirect Connect for ODBC 5.2.
•MySQL version 5.0 via ODBC driver version 3.51.12 on Windows and
UNIX.
•MySQL version 4.1 via ODBC driver version 3.51.10 on Windows and
UNIX.
Note: Driver version 3.51.12 is required in the multibyte environment.
•Red Brick version 6.3 via ODBC driver version IBM 6.3
•SQLAnywhere version 9.0.1 via ODBC driver version Adaptive Server
Anywhere 9.0
•Sybase IQ version 12.6 (requires ESD 4)
•Log-based changed-data capture (CDC) works with the following database
versions:
•Oracle version 9.2 and above compatible versions for synchronous
CDC, and Oracle version 10G and above compatible versions for
asynchronous CDC
Note: Changed-data capture for Oracle 10G R2 does not support
multibyte table and column names due to a limitation in Oracle.
•Microsoft SQL Server 2000 and 2005
•IBM DB2 UDB for Windows version 8.2 using DB2 Information
Integrator for Replication Edition version 8.2 (DB2 II Replication) and
IBM WebSphere Message Queue version 5.3
•IBM DB2 UDB for z/OS using DB2 Information Integrator Event
Publisher for DB2 UDB for z/OS and IBM WebSphere Message Queue
version 5.3.1
•IBM IMS/DB using DB2 Information Integrator Classic Event Publisher
for IMS and IBM WebSphere Message Queue version 5.3.1
•IBM VSAM under CICS using DB2 Information Integrator Classic Event
Publisher for VSAM and IBM WebSphere Message Queue version
5.3.1
•Attunity for mainframe sources using Attunity Connect version 4.6.1
•NCR Teradata with the following server versions, client versions, and
ODBC drivers:

Teradata server version   Client version   ODBC driver
V2R6.2                    TTU 8.2          03.06.00
V2R6.1                    TTU 8.1          03.05.00
V2R6                      TTU 8.0          03.04.00
V2R5.1                    TTU 7.1          03.03.00
This support requires the following TTU patches:
•On Windows platforms:
•cliv2.04.08.00.01.exe
•tbld5000.05.00.00.01.exe
•psel5000.05.00.00.01.exe
•pdtc5000.05.00.00.01.exe
•npaxsmod.01.03.01.00.exe
•npaxsmod.01.03.00.02.exe
•npaxsmod.01.03.00.01.exe
•On Linux platforms:
•cliv2.04.08.00.01.exe
•mqaxsmod.01.03.00.00.exe
•npaxsmod.01.03.01.00.exe
•npaxsmod.01.03.00.01.exe
With Teradata V2R6, the named pipes mechanism for Teradata bulk
loader is supported.
Note: Teradata does not recommend using Teradata Parallel Transporter
in TTU 7.0 and 7.1.
•Data Integrator uses Apache Axis version 1.1 for its Web Services support.
Axis version 1.1 provides support for WSDL version 1.1 and SOAP version
1.1 with the exception of the Axis servlet engine.
•Data Integrator connectivity to Informix servers is now only supported via
the native Informix ODBC driver. Version 2.90 or higher compatible version
of the Informix client SDK is required.
•Designers on Japanese operating systems working with Job Servers on
Japanese Windows computers.
When using the Designer on a Japanese operating system and working
with either an English Job Server on UNIX or a Job Server on Windows
on a non-Japanese operating system:
•Messages from the back end display in English.
•Business Objects recommends using a Japanese Administrator on
Windows so that metadata Reports and Impact Analysis correctly
display object descriptions in Japanese. If you use a Japanese
Administrator on UNIX, "junk" characters display if any of the object
descriptions have Japanese characters.
•Java Virtual Machine: version compatibility with Data Federator and
Data Integrator.
To access Data Federator data sources with Data Integrator, you must
modify your OpenAccess environment to use the Java Virtual Machine
that ships with Data Integrator. Data Federator installs a component called
OpenAccess ODBC to JDBC Bridge that requires these modifications.
Please make the following modifications (where Data Federator is installed
in D:\Program Files\DataFederatorXIR2, Data Integrator is installed in
D:\Program Files\DataIntegrator, and OpenAccess is installed in
D:\Program Files\OaJdbcBridge):
1. In the OpenAccess configuration file (D:\Program
Files\OaJdbcBridge\bin\iwinnt\openrda.ini), change the path for the JVM
so that it points to the JVM that ships with Data Integrator.
Limitations update

•Data Integrator handles unknown data types as VARCHAR for SQL Server,
Oracle, Teradata, ODBC, DB2, Informix, Sybase ASE, and Sybase IQ,
assuming these database servers can convert from VARCHAR to the
native (unknown) data type and from the native (unknown) data type to
VARCHAR.
•Some load operations could fail when a target table contains a CLOB
column (for example, bulk loading or auto-correct load could fail).
Note: Use the VARCHAR column in the physical schema for loading.
•PeopleSoft 8 support is implemented for Oracle only.
Data Integrator jobs that ran against previous versions of PeopleSoft are
not guaranteed to work with PeopleSoft 8. You must update the jobs to
reflect metadata or schema differences between PeopleSoft 8 and
previous versions.
•Stored procedure support is implemented for DB2, Oracle, Microsoft SQL
Server, Sybase ASE, Sybase IQ, and ODBC only.
•Teradata support is only implemented for Windows and Linux.
•On Teradata, the named pipe implementation for Teradata Parallel
Transporter is supported with Teradata Tools and Utilities version 8 or
later compatible version. Teradata Tools and Utilities version 7.0 and 7.1
are not supported with named pipes.
•Bulk loading data to DB2 databases running on AS/400 or MVS systems
is not supported.
•Data Integrator Management Console can be used on Microsoft Internet
Explorer version 6.0 SP1, 6.0 SP2, or 7.0 only. Earlier browser versions
may not support all of the Administrator functionality.
•Data Integrator's View Data feature is not supported for SAP R/3 IDocs.
For SAP R/3 and PeopleSoft, the Table Profile tab and Column Profile
tab options are not supported for hierarchies.
•Data Integrator now supports multibyte metadata for table names, column
names, file names, and file paths. The following table lists which sources
support multibyte and single-byte metadata and which support single-byte
only:
Multibyte and single-byte metadata supported*:
BusinessObjects Enterprise, Data Federator, DB2, Informix, MySQL, ODBC,
Oracle, Siebel, Microsoft SQL Server, Sybase ASE, Sybase IQ, Teradata,
XML

Single-byte metadata supported:
Attunity connector for mainframe databases, Data Quality, HP Neoview,
JD Edwards, Netezza, Oracle Applications, PeopleTools, SAP R/3,
SAP BW Server

* Support for multibyte metadata is dependent on comparable support in
the applications, databases, and technologies with which Data Integrator
interoperates.
•Support for the Data Profiler is provided for sources as follows:
Supported:
Attunity Connector for mainframe databases, DB2, Data Federator, Flat file,
HP Neoview, Informix, Microsoft SQL Server, MySQL, Netezza, ODBC,
Oracle, Oracle Applications, PeopleSoft, SAP R/3, Siebel, Sybase ASE,
Sybase IQ, Teradata

Not supported:
COBOL copybooks, Excel, IDOC, JDE, Memory Datastore, SAP BW, XML
•Support for the LOOKUP_EXT function is provided for sources as follows:
Supported:
DB2, Data Federator, Flat file, JDE, HP Neoview, Memory Datastore,
Microsoft SQL Server, MySQL, Netezza, ODBC, Oracle, Oracle Applications,
PeopleSoft, Siebel, Sybase ASE, Sybase IQ, Teradata

Not supported:
COBOL copybook, Excel, IDOC, SAP BW, SAP R/3, XML
•DB2 Java Library limitations
None of the Web applications in the Management Console will work with a
DB2 repository under any of the following conditions:
•db2java library is incompatible with DB2 client. For example, DB2
Client is version 8.1 and db2java library is version 8.2, or
•db2java library is not generated for JDBC 2 driver, or
•the java class path variable is not properly set to the corresponding
java library, or
•the DB2 JDBC shared library (e.g. libdb2jdbc.so on AIX) is not
compatible with the Java JDBC 2 driver
Under these conditions, you might see the following behavior:
•The Administrator stops working or crashes after you configure the
DB2 repository. Find any error and warning messages related to the
DB2 repository configuration in the log file.
•When testing your DB2 connection from the Administrator, the following
errors appear in the Administrator log:
•BODI-3016409: fail to connect to repository.
•BODI-3013014: didn't load DB2 database driver.
In this release, the db2java JDBC 2 driver versions for DB2 8.1 and DB2
8.2 are provided. On Windows, find the different versions of these libraries
in LINK_DIR/ext/lib:
•db2java.zip (by default): DB2 8.2 version
•db2java_8.1.zip: DB2 8.1 version
By default, the db2java.zip of version 8.2 will be used. If you run with a
DB2 8.1 Client, you must:
•replace it with version 8.1 of db2java.zip
•make sure the compatible DB2 JDBC shared library is in the shared
library path
•restart the Web server service on Windows or restart the job service
on UNIX
If you run with other DB2 Client versions, you must obtain the
corresponding java library with the JDBC 2 driver. Please refer to IBM
DB2 documentation for how to obtain the correct java library.
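The 8.1 library swap described above can be sketched as a short script. This is a sketch, not an official procedure: the scratch-directory scaffolding and stand-in file contents exist only so the snippet can run outside a real installation. Point LINK_DIR at your actual install directory to apply the swap for real, and remember to restart the Web server service (Windows) or job service (UNIX) afterward.

```python
# Sketch: make the DB2 8.1 version of db2java the active library in
# LINK_DIR/ext/lib, per the steps above. Runs against a scratch directory
# by default so it can execute outside a real Data Integrator install.
import os
import shutil
import tempfile

link_dir = os.environ.get("LINK_DIR") or tempfile.mkdtemp()
lib_dir = os.path.join(link_dir, "ext", "lib")
os.makedirs(lib_dir, exist_ok=True)

# Stand-in files for the demo; a real install already ships both zips.
for name, content in [("db2java.zip", b"db2java 8.2"),
                      ("db2java_8.1.zip", b"db2java 8.1")]:
    path = os.path.join(lib_dir, name)
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(content)

# Keep a copy of the default 8.2 library, then make the 8.1 version active.
shutil.copyfile(os.path.join(lib_dir, "db2java.zip"),
                os.path.join(lib_dir, "db2java_8.2.zip.bak"))
shutil.copyfile(os.path.join(lib_dir, "db2java_8.1.zip"),
                os.path.join(lib_dir, "db2java.zip"))
```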
•Informix native-driver support on UNIX requires a fix in the Informix Client
SDK for IBM bug number 167964. Please contact IBM for a patch that
includes this bug fix.
Note: IBM is targeting inclusion of this bug fix in Client SDK version 2.90.
•Data Integrator supports files greater than 2 GB only in the following
cases:
•Reading and loading large data files of type delimited and positional.
•Generating large files for bulk loader staging files for subsequent bulk
loading by a native bulk loader utility (such as SQL Loader).
•Previewing data files from the file format page in Designer when the
large files are on a UNIX Job Server.
•Reading COBOL copybook data files.
All other files generated by Data Integrator, such as log files and
configuration files, are limited to 2 GB.
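If you stage files for the unsupported cases, a quick pre-flight check can flag those that cross the 2 GB boundary before a job fails on them. The helper below is hypothetical (not part of Data Integrator), using only the 2 GB figure stated above.

```python
import os

# 2 GB boundary from the release notes. This standalone helper (an
# illustration, not a product API) flags files that exceed the limit.
TWO_GB = 2 * 1024**3

def exceeds_2gb(path: str) -> bool:
    """Return True if the file at `path` is larger than 2 GB."""
    return os.path.getsize(path) > TWO_GB
```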
•For the Microsoft Excel as a data source feature, the following limitations
apply:
•Concurrent access to the same Excel file might not work. For example,
View Data might not display if the file is currently open in Excel.
•Because an Excel column can contain mixed data types, some data
type conversions could produce unexpected results; for example,
dates might convert to integers.
•Boolean formulas are not supported.
•Workbooks with AutoFilter applied are not supported. Remove the
filter before importing the workbook.
•Workbooks with hidden rows and/or columns are not supported.
•Stored procedures for MySQL are not supported due to limitations in
the MySQL ODBC library.
•The Data Integrator Salesforce.com adapter can connect to the
Salesforce.com version 7 API; however, new functionality added in API
version 7 and above is not supported, with one exception: the custom
lookup field (new in API version 7) is supported in the Salesforce.com
adapter. To ensure that your existing Salesforce.com adapter works
properly, you must update the URL in the datastore so that it points to
the Salesforce.com 7.0 API. The new URL is:
https://www.salesforce.com/services/Soap/u/7.0
Migration considerations
Note: To use this version of Data Integrator, upgrade all existing Data
Integrator repositories to version 11.7.0.0.
This section lists and briefly describes all migration-specific behavior changes
associated with this version of Data Integrator. To view migration-specific
behavior changes among previous releases of Data Integrator, see the Data
Integrator Migration Behavior Changes guide available from the Business
Objects Customer Support documentation Web site
(http://support.businessobjects.com/documentation/).
Logs in the Designer
In Data Integrator 11.7.3, you will only see the logs (trace, error, monitor) for
jobs that started from the Designer, not for jobs started via other methods
(command line, real-time, scheduled jobs, or Web services). To access these
other log files, use the Administrator in the Data Integrator Management
Console.
Data flow cache type
When upgrading your repository from versions earlier than 11.7 to an 11.7
repository using version 11.7.3.0, all of the data flows will have a default
Cache type value of pageable. This is different from the behavior in 11.7.2.0,
where the upgraded data flows have a default Cache type value of
in-memory.
Pageable cache for memory-intensive data flows
As a result of multibyte metadata support, Data Integrator might consume
more memory when processing and running jobs. If some of your jobs were
running near the 2-gigabyte virtual memory limit in a prior version, there is
a chance that the same jobs could run out of virtual memory. If your jobs
run out of memory, take the following actions:
•Set the data flow Cache type value to pageable.
•Specify a pageable cache directory that:
•Contains enough disk space for your data. To estimate the amount of
space required for pageable cache, consider factors such as:
•Number of concurrently running jobs or data flows
•Amount of pageable cache required for each concurrent data flow
•Exists on a separate disk or file system from the Data Integrator system
and operating system (such as the C: drive on Windows or the root
file system on UNIX).
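To turn the two factors above into a disk-space figure, a back-of-the-envelope calculation is enough. The helper below is a sketch, not a Data Integrator utility; the per-flow cache size is an assumption you must measure for your own jobs (for example, from monitoring statistics).

```python
def pageable_cache_bytes(concurrent_data_flows: int,
                         cache_bytes_per_flow: int) -> int:
    """Disk space to reserve for the pageable cache directory: assume every
    concurrently running data flow may page its full working set at once."""
    return concurrent_data_flows * cache_bytes_per_flow

# Example: 4 concurrent data flows, each expected to page about 1.5 GB.
needed = pageable_cache_bytes(4, int(1.5 * 1024**3))
print(f"Reserve at least {needed / 1024**3:.1f} GB")  # Reserve at least 6.0 GB
```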
Embedded data flows
In this version of Data Integrator, you cannot create embedded data flows
which have both an input port and an output port. You can create a new
embedded data flow only at the beginning or at the end of a data flow with
at most one port, which can be either an input or an output port.
However, after upgrading to Data Integrator version 11.7.2 or later, embedded
data flows created in previous versions will continue to run.
Oracle repository upgrade
If you previously upgraded your repository to Data Integrator 11.7.0.0 and
you open the Object State Report on the Central repository from the
Administrator, you might see the error ORA04063 view ALVW_OBJ_CINOUT
has errors. This occurs if you had an Oracle central repository prior to
version 11.7.0.0 and you upgraded the central repository to 11.7.0.0.
Note: If you upgraded to 11.7.0.0 from a prior version of Data Integrator and
you are now upgrading to version 11.7.3 (this release), this issue might occur
and you must follow the instructions below. Alternatively, if you upgraded
from a version prior to 11.7.0.0 to 11.7.3 without upgrading to version 11.7.0.0,
this issue will not occur because it has been fixed in 11.7.2.0.
To fix this error, you must manually drop and recreate the view
ALVW_OBJ_CINOUT using an Oracle SQL editor, such as SQLPlus.
Use the following SQL statements to perform the upgrade:
DROP VIEW ALVW_OBJ_CINOUT;
CREATE VIEW ALVW_OBJ_CINOUT (OBJECT_TYPE, NAME, TYPE, NORMNAME,
Solaris and AIX platforms

Data Integrator 11.7.3 on Solaris and AIX platforms is a 64-bit application
and requires 64-bit versions of the middleware client software (such as Oracle
and SAP) for effective connectivity. If you are upgrading to Data Integrator
11.7.3 from a previous version, you must also upgrade all associated
middleware client software to the 64-bit version of that client. You must also
update all library paths to ensure that Data Integrator uses the correct 64-bit
library paths.
Data quality
Data Integrator now integrates the BusinessObjects Data Quality XI (formerly
known as Data Cleansing) application for your data quality needs, which
replaces Firstlogic's RAPID technology.
Note the following changes to data cleansing in Data Integrator:
•Depending on the Firstlogic products you owned, you previously had up
to three separate transforms that represented data quality functionality:
Address_Enhancement, Match_Merge, and Name_Parsing.
Now, the data quality process takes place through a Data Quality Project.
To upgrade existing data cleansing data flows in Data Integrator, replace
each of the cleansing transforms with an imported Data Quality Project
using the Designer.
You will need to identify all of the data flows that contain any data
cleansing transforms and replace them with a new Data Quality Project
that connects to a Data Quality blueprint or custom project.
•Data Quality includes many example blueprints that are sample projects
that can serve as a starting point when creating your own customized
projects. (You may need to modify the blueprints to work with your
installation. Please see your Data Quality documentation, including the
Release Notes, for updated blueprint information.) If none of these
blueprints work to your satisfaction, you can either save these blueprints
as a project and edit them, or you can create a project from scratch.
•You must use the Project Architect, Data Quality's graphical user interface,
to edit projects or create new ones. Business Objects strongly
recommends that you do not attempt to manually edit the XML of a project
or blueprint.
•Each imported Data Quality project in Data Integrator represents a
reference to a project or blueprint on the data quality server. The Data
Integrator Data Quality projects allow field mapping.
To migrate your data flow to use the new Data Quality transforms
1. Install Data Quality XI, configure the server, and make sure it is started
before you can use it in Data Integrator. Please refer to the Data Quality
XI documentation for installation instructions.
2. In the Data Integrator Designer, create a new datastore of type Business
Objects Data Quality and connect to your Data Quality server.
3. Import the Data Quality projects that represent the data quality
transformations you want to use. Each project will appear as a Data
Quality project in your datastore. For the most common data quality
transformations, you can use existing blueprints (sample projects) in the
Data Quality repository.
4. Replace each occurrence of the old data cleansing transforms in your
data flows with one of the imported Data Quality transforms. You will also
need to reconnect the input and output schemas with the sources and
targets used in the data flow.
When opening a data flow containing one of the old data cleansing transforms
(address_enhancement, name_parsing, match_merge), you will still be able
to see the old transforms in this release (although they are not available
anymore in the object library). You can even open the properties and see
the details for each transform.
When validating a data flow that uses one of the old data cleansing
transforms, you will get an error such as:
[Custom Transform:Address_Enhancement] BODI-1116074: First
Logic support is obsolete. Please use the new Data Quality
feature.
It is not possible to execute a job that contains data flows using the old data
cleansing transforms (you will get the same error).
Contact Business Objects Customer Support at http://support.businessob
jects.com/ if you need help migrating your data cleansing data flows to the
new Data Quality transforms.
Note: Data Integrator 11.7.3 allows you to reconcile metadata for Data
Quality datastores.
Distributed data flows

After upgrading to this version of Data Integrator, existing jobs have the
following default values and behaviors:
•Job Distribution level: Job. All data flows within a job will be run on the
same Job Server.
•The default for Collect statistics for optimization and Collect statistics
for monitoring is cleared.
•The default for Use collected statistics is selected. Since no statistics
are initially collected, Data Integrator will not initially use statistics.
•Every data flow runs as a process (not as a sub data flow process).
New jobs and data flows you create using this version of Data Integrator
have the following default values and behaviors:
•Job Distribution level: Job
•The Cache type for all data flows: pageable
•The default for Collect statistics for optimization and Collect statistics
for monitoring is cleared.
•The default for Use collected statistics is selected. If you want Data
Integrator to use statistics, you must collect statistics for optimization
first.
•Every data flow is run as a single process. To run a data flow as multiple
sub data flow processes, you must use the Data_Transfer transform or
select the Run as a separate process option in transforms or functions.
•All temporary cache files are created under the LINK_DIR/log/PCache
directory. This option can be changed from the Server Manager.
XML Schema enhancement
Data Integrator 11.7 adds the new Include schema location option for XML
target objects. This option is selected by default.
In the Designer option Tools > Options > Job Server > General, Data
Integrator 11.5.2 provided the key XML_Namespace_No_SchemaLocation
for the section AL_Engine. The default value, FALSE, indicates that the
schema location is included. If you upgrade from 11.5.2 and had set
XML_Namespace_No_SchemaLocation to TRUE (indicates that the schema
location is NOT included), you must open the XML target in all data flows
and clear the Include schema location option to keep the old behavior for
your XML target objects.
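For reference, the relationship between the old Job Server key and the new option can be summarized as follows (a schematic summary of the setting described above, not a literal excerpt from any configuration file):

```
Section: AL_Engine
Key:     XML_Namespace_No_SchemaLocation
  FALSE (default) -> schema location IS included
                     (equivalent to Include schema location selected)
  TRUE            -> schema location is NOT included
                     (clear Include schema location to keep this behavior)
```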
Password management
•All password fields are encrypted using the Twofish algorithm starting in
this release of Data Integrator.
•To simplify the process of updating new passwords for the repository
database, this version of Data Integrator introduces a new password file
feature. If you have no requirement to change the password to the
database hosting your repository, you may not need to use this optional
feature.
If you must change the password (for example, security requirements
stipulate that you must change your password every 90 days), then
Business Objects recommends that you migrate your scheduled or
external job command files to use this feature.
Migration requires that you regenerate every job command file to use the
password file. After migrating, when you update the repository password,
you need only regenerate the password file. If you do not migrate using
the password file feature, then you must regenerate every job command
file every time you change the associated password.
Web applications
•The Data Integrator Administrator (formerly called the Web Administrator)
and metadata Reports interfaces have been combined into the new
Management Console in Data Integrator 11.7. Now, you can start any
Data Integrator Web application from the Management Console launch
pad (home page). If you have created a bookmark or favorite that points
to the previous Administrator URL, you must update the bookmark to
point to http://computername:port/diAdmin.
•If in a previous version of Data Integrator you generated WSDL for Web
service calls, you must regenerate the WSDL because the URL to the
Administrator has changed in Data Integrator 11.7.
Web services
Data Integrator now uses the Xerces2 library. If, when upgrading to 11.7 or
above, you had configured the Web Services adapter to use the xsdPath
parameter in the Web Service configuration file, delete the old Web Services
adapter and create a new one. It is no longer necessary to configure the
xsdPath parameter.
Deployment on UNIX
Please observe the following requirements for UNIX systems.
•Install JDK version 1.4.2 as described in the vendor's documentation.
(If the Daylight Savings Time update affects you, install JDK patch
version 1.4.2_13 or a later 1.4.x-compatible version.)
•The AIX maintenance level must be at least 5200-05. For all versions of
AIX, the following file sets must be installed:

File set            Level      State      Description
xlC.aix50.rte       6.0.0.13   COMMITTED  C Set ++ Runtime for AIX 5.0
xlC.rte             6.0.0.0    COMMITTED  C Set ++ Runtime
xlC.msg.en_US.rte   6.0.0.0    COMMITTED  C Set ++ Runtime Messages,
                                          U.S. English

To find these xlC file sets and their levels on your AIX system, use the
command lslpp -l xlC*
Refer to the $LINK_DIR/log/DITC.log file for browser-related errors.
•For Red Hat Linux 4, install the following patches:
•glibc-2.3.4-2
•libgcc-3.4.3-9.EL4
•compat-libstdc++-296-2.96-132.7.2
•compat-libstdc++-33-3.2.3-47.3
•For Solaris 9, install the appropriate patch:
•111721-04 (for both 32-bit and 64-bit SPARC; see the vendor's latest
patch updates)
•For Solaris 10 SPARC, install the following patch:
•120470-01
Adapter interfaces
This section contains notes related to installation, configuration, and use of
the adapter interfaces provided with this release.
For installation details, see the Data Integrator Getting Started Guide, JMS
and Salesforce.com interface integration subsection.
JMS adapter interface
Find the full technical documentation for the Data Integrator Adapter for JMS
in the same directory as your other Data Integrator documentation.
• The JMS adapter is generic and can work with the JMS libraries of any JMS provider.
• This version of the JMS adapter has been tested using Weblogic JMS libraries (from BEA Systems) as the JMS provider.
• If you are running the JMS adapter with any other JMS provider, you should include the location of the third-party jar files associated with the specific JMS provider.
• On the Adapter Instance Configuration page, the classpath field contains the list of Data Integrator-provided jar files. Append the location of the JMS jar files to the classpath field.
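The classpath edit in the last bullet is simple string concatenation. As an illustration only (the jar paths below are hypothetical; use ':' as the separator on UNIX and ';' on Windows):

```python
import os

def extend_classpath(existing, extra_jars, sep=os.pathsep):
    """Append third-party JMS provider jar paths to an existing classpath string."""
    return sep.join([existing] + list(extra_jars))

# Hypothetical paths, for illustration only.
print(extend_classpath("/opt/di/lib/adapter_sdk.jar",
                       ["/opt/bea/weblogic/server/lib/weblogic.jar"], sep=":"))
```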
Salesforce.com adapter interface
Find the full technical documentation for the Data Integrator Salesforce.com
Adapter Interface in the same directory as your other Data Integrator
documentation.
The Salesforce.com adapter interface is compatible with Data Integrator version 11.6 and later.
Documentation updates
Please note the following updates to the Data Integrator 11.7.2 technical
documentation. These changes will be incorporated into a future release of
the manual set.
Note: The numbers are ADAPT system Problem Report tracking numbers.
New features in version 11.7.3
HP Neoview database support
This version supports the HP Neoview database as a source or target via a
new datastore option in the Designer. Ensure the HP Neoview ODBC driver
is installed and configured on the client computer (where the Data Integrator
Designer and the Job Server are located).
For information on creating datastores, see the Data Integrator Designer Guide, Datastores section and the Data Integrator Reference Guide, Data Integrator Objects section, Datastore subsection.
For details on the ODBC options available for the HP Neoview datastore,
see the Data Integrator Reference Guide, Data Integrator Objects section,
Datastore subsection, ODBC subsection.
Inxight integration for reading unstructured text
Data Integrator version 11.7.3 provides support for extracting and
transforming contents from unstructured text by calling the Inxight SDX
(SmartDiscovery Extraction Server) via Web services. Two new functions
have been added to Data Integrator that enable you to pass data to Inxight
using the required base64 encoding.
The following procedure describes how to use Data Integrator with Inxight
SDX Web services.
1. In the Designer, create an adapter datastore that connects to the Inxight
SDX Web service.
2. From the Inxight adapter datastore, import an operation, which becomes
the adapter Web service function.
3. Open the adapter function and edit the schema as necessary, for example increase the column length of the base64 text column.
4. In the data flow query, use the base64_encode function to encode the input text to base64.
5. Use another query transform to call the Inxight Web service function and pass the encoded text as input.
6. Use the base64_decode function to decode the returned result as plain text again.
Related Topics
•base64_encode on page 39
•base64_decode on page 40
base64_encode
Returns the base64-encoded data in the engine locale character set.
Syntax
base64_encode(input data, 'UTF-8')
Return Value
varchar
Returns base64-encoded data. If the input data is NULL or the size is 0, Data
Integrator returns NULL. Otherwise, it returns the base64-encoded data that
conforms to RFC 2045.
Where
input data    The input data that needs to be encoded to base64. Does not support long or BLOB data types.
UTF-8         The code page of the input data. UTF-8 is required for Data Integrator version 11.7.3.
Example:
You want to extract home address, city, state, and zip code information
from email content using the Inxight SDX Web service. One of the
requirements for the Inxight SDX Web service is to send the content in
base64-encoded format and to specify the character set of the content.
Inxight SDX Web service returns the extract type and name in
base64-encoded format.
For example, create a data flow with a flat file source that contains the data.
The column name of the content data is CONTENT.
Map the source in a query transform as follows. In the column mapping
editor for the CONTENT column, specify:
base64_encode(CONTENT, 'UTF-8')
Map this query to another query where you invoke the Inxight SDX Web
service and it gets a response in an NRDM schema called Extract Entities
Response. The extract fields have values in base64 encoding.
Map to another query transform and decode the base64-encoded data as in the following example:
base64_decode(extract_name, 'UTF-8')
Related Topics
•base64_decode on page 40
base64_decode
Returns the source data after decoding the base64-encoded input.
Syntax
base64_decode(base64-encoded input, 'UTF-8')
Return Value
varchar
Returns the source data after decoding the base64-encoded input. If the input is NULL or the size of the data is 0, Data Integrator returns NULL. Otherwise, it returns the base64-decoded data that conforms to RFC 2045.
Where
base64-encoded input    The base64-encoded input data. Does not support long or BLOB data types.
UTF-8                   The code page of the output data. UTF-8 is required for Data Integrator version 11.7.3.
Related Topics
•base64_encode on page 39
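Outside Data Integrator, the documented round-trip behavior of these two functions (NULL or empty input yields NULL; otherwise RFC 2045 base64) can be sketched in Python. This illustrates the semantics only, not the Data Integrator engine implementation:

```python
import base64

def b64_encode_utf8(text):
    """Illustrate base64_encode(input, 'UTF-8'): NULL/empty input yields NULL."""
    if not text:
        return None
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def b64_decode_utf8(encoded):
    """Illustrate base64_decode(input, 'UTF-8'): the inverse operation."""
    if not encoded:
        return None
    return base64.b64decode(encoded).decode("utf-8")

content = "123 Main St, Springfield"
assert b64_decode_utf8(b64_encode_utf8(content)) == content  # round trip
assert b64_encode_utf8(None) is None                         # NULL in, NULL out
```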
Job Server enhancements
Using multithreaded processing for incoming requests, each Data Integrator
Job Server can now accommodate up to 50 Designer clients simultaneously
with no compromise in response time. (To accommodate more than 50
Designers at a time, create more Job Servers.)
In addition, the Job Server now generates a Job Server log file for each day.
You can retain the Job Server logs for a fixed number of days using a new
setting on the Administrator Log retention period page.
Finally, each Designer client only displays the logs for jobs executed from
that Designer, not from jobs executed using the Management Console,
command line, or Web services.
Getting Started Guide
Citrix Presentation Server version 4.0
The documentation that describes Citrix was removed from the Data Integrator Getting Started Guide in a prior version. Data Integrator now supports Citrix Presentation Server version 4.0. For more information, see the documentation for Citrix support on the Business Objects developer community Web site.
In the section (chapter) Installing Data Integrator in UNIX Systems, in the AIX user resource limits subsection, change the following line in the User resource limits table:
From:
User resource limit    Value      Comments
data (kbytes)          2097151    At least 2GB

To:
User resource limit    Value      Comments
data (kbytes)          unlimited  (blank)
Designer Guide
Changed-data capture for Microsoft SQL Server 2005
Data Integrator now supports changed-data capture (CDC) for Microsoft SQL
Server 2005.
In chapter 19, Techniques for Capturing Changed Data, the section "Setting
Up SQL Replication Server for CDC" was written for Microsoft SQL Server
2000.
The following procedure applies to Microsoft SQL Server 2005.
To configure publications for Microsoft SQL Server 2005 CDC
1. Start the Microsoft SQL Server Management Studio.
2. Select the SQL Server, right-click the Replication menu, then select New > Publication. The New Publication Wizard opens.
3. In the New Publication Wizard click Next.
4. Select the database that you want to publish and click Next.
5. Under Publication type, select Transactional publication, and then click
Next to continue.
6. Click to select the tables and columns to publish as articles. Then click to open Article Properties.
7. Set the following options to False:
   • Copy clustered index
   • Copy INSERT, UPDATE and DELETE
   • Create schemas at subscriber
8. Set the "Action if name is in use" option to "keep the existing table unchanged".
9. Set Update delivery format and Delete delivery format to XCALL <stored procedure>. Click OK to save the article properties.
10. Configure Agent Security and specify the account connection setting. Click Security Settings to set the Snapshot agent.
11. Configure the Agent Security account with system administration privileges and click OK.
12. Enter the login password for the Log Reader Agent by clicking Security Settings. Note that it must be a login granting system administration privileges.
13. In the Log Reader Agent Security window, enter and confirm password information.
14. Click to select Create the publication, then click Finish to create a new publication.
15. To complete the wizard, enter a Publication name and click Finish to create your publication.
Creating Microsoft Excel workbook file formats on
UNIX platforms
Note: The following new section will be added to Chapter 6: File Formats
in the Data Integrator Designer Guide.
This section describes how to use a Microsoft Excel workbook as a source
with a Job Server on a UNIX platform.
To create Microsoft Excel workbook file formats on Windows, see “Excel
workbook format” in the Data Integrator Reference Guide.
To access the workbook, you must create and configure an adapter instance
in the Administrator. The following procedure provides an overview of the
configuration process. For details about creating adapters, see Chapter 10,
“Adapters,” in the Data Integrator Management Console: Administrator Guide.
To create a Microsoft Excel workbook file format on UNIX
1. Using the Server Manager ($LINK_DIR/bin/svrcfg), ensure the UNIX Job
Server can support adapters. See “Configuring Job Servers and Access
Servers” in the Data Integrator Getting Started Guide.
2. Ensure a repository associated with the Job Server has been added to the Administrator. To add a repository to the Administrator, see “Adding repositories” in the Data Integrator Management Console: Administrator Guide.
3. In the Administrator, add an adapter to access Excel workbooks. See
“Adding and configuring adapter instances” in the Data Integrator
Management Console: Administrator Guide.
You can only configure one Excel adapter per Job Server. Use the
following options:
•On the Installed Adapters tab, select MSExcelAdapter.
•On the Adapter Configuration tab for the Adapter instance name,
type BOExcelAdapter (required and case sensitive).
You may leave all other options at their default values except when processing files larger than 1 MB. In that case, change the Additional Java Launcher Options value to -Xms64m -Xmx512m or -Xms128m -Xmx1024m (the default is -Xms64m -Xmx256m). Note that Java memory management can prevent processing very large files (or many smaller files).
4. Start the adapter.
5. In the Designer on the Formats tab of the object library, create the file
format by importing the Excel workbook. For details, see “Excel format”
in the Data Integrator Reference Guide.
Note:
•To import the workbook, it must be available on a Windows file system.
You can later change the location of the actual file to use for processing
in the format source editor. See “Excel workbook source options” in the
Data Integrator Reference Guide.
•To reimport or view data in the Designer, the file must be available on
Windows.
•Entries in the error log file might be represented numerically for the date
and time fields.
Additionally, Data Integrator writes the records with errors to the output
(in Windows these records are ignored).
Reference Guide
The following updates apply to the Data Integrator Reference Guide.
Maximum number of loaders
ADAPT00667119
Data Integrator now supports a maximum number of loaders of 10.
The following sections describe the option Number of Loaders:
• Chapter 2, Data Integrator Objects, section "Target," in Table 2-30: Target table options available in all datastores
• Chapter 5, Transforms, section "Data_Transfer," in subsection "Target table options"
Replace the first paragraph in these descriptions of Number of Loaders
with the following paragraph:
Loading with one loader is known as "single loader loading." Loading when
the number of loaders is greater than one is known as "parallel loading." The
default number of loaders is 1. The maximum number of loaders is 10. If you
specify a number greater than this maximum value, Data Integrator uses 10
loaders.
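The single-loader versus parallel-loader rule above, including the clamp at 10, can be expressed as a small sketch. This is conceptual only — it mimics dealing rows round-robin to loaders, not Data Integrator's internal implementation:

```python
def effective_loaders(requested):
    """Apply the documented bounds: the default is 1, the maximum is 10."""
    return max(1, min(requested, 10))

def partition_rows(rows, requested_loaders):
    """Deal rows round-robin across the effective number of loaders."""
    n = effective_loaders(requested_loaders)
    batches = [[] for _ in range(n)]
    for i, row in enumerate(rows):
        batches[i % n].append(row)
    return batches

# Requesting 25 loaders still yields 10 batches: parallel loading is capped.
assert len(partition_rows(range(100), 25)) == 10
```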
Reserved words
ADAPT00621691
Reserved words should not be used as the user name when you create a
Data Integrator repository.
In chapter 10, Reserved Words, the section "About Reserved Words" lists
the reserved words that should not be used as names for design elements.
Replace the first paragraph with the following paragraph:
The following words have special meanings in Data Integrator and therefore
should not be used as names for work flows, data flows, transforms, or other
design elements that you create. They should also not be used as user
names when you create a Data Integrator repository. They are reserved with
any combination of upper- and lower-case letters.
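A repository-creation pre-check for the case-insensitive rule above could look like the following sketch. The reserved-word list here is a small illustrative subset, not the full list from chapter 10:

```python
# Illustrative subset only; chapter 10, "About Reserved Words," has the full list.
RESERVED_WORDS = {"select", "table", "begin", "end", "transform"}

def is_reserved(name):
    """Reserved words match in any combination of upper- and lower-case letters."""
    return name.lower() in RESERVED_WORDS
```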
Microsoft Excel workbook format for UNIX
In chapter 2, Objects, "Descriptions of objects," the "Excel workbook format"
entry, in the Notes section replace the fifth bulleted item with the following.
• For workbook-specific (global) named ranges, Data Integrator would name a range called range as range. However, for worksheet-specific (local) named ranges, Data Integrator would name a range called range that belongs to the worksheet Sheet1 as range!Sheet1.
  In UNIX, you must also include the worksheet name when defining a workbook-specific (global) named range.
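The naming convention described above reduces to a two-case rule. A sketch with a hypothetical helper name (Data Integrator itself applies this rule during import):

```python
def di_range_name(range_name, worksheet=None):
    """Hypothetical helper mirroring the convention above: workbook-specific
    (global) ranges keep their name; worksheet-specific (local) ranges are
    reported as name!sheet."""
    if worksheet is None:
        return range_name
    return "%s!%s" % (range_name, worksheet)
```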
Salesforce.com Adapter Interface Guide
When you create a new Custom table in Salesforce.com and attempt to fetch
CDC data from that table from the Data Integrator Designer, if the CDC table
does not specify a starting date, Data Integrator will throw an error stating:
Error reading from <custom table name>: <There was an unexpected error. Salesforce.com message is startDate before or replication enabled date>.
Therefore, the "Using the CDC table source default start date" section of the
Salesforce.com Adapter Interface Guide should state the following:
When you do not specify a value for the start date:
• Data Integrator uses the beginning of the Salesforce.com retention period as the start date if a check-point is not available (during initial execution).
• Data Integrator uses the check-point as the start date if a check-point is available and occurs within the Salesforce.com retention period. If the check-point occurs before the retention period, Data Integrator uses the beginning of the retention period as the start date.
• Data Integrator may throw an error message. Business Objects recommends that you specify a default start date for your CDC tables.
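The start-date rules above reduce to a small decision function. A sketch under the stated rules (Python; dates as datetime.date, function name hypothetical):

```python
from datetime import date

def cdc_start_date(retention_start, checkpoint=None):
    """Pick the CDC start date per the rules above: no check-point, or a
    check-point before the retention period, falls back to the beginning of
    the retention period; otherwise the check-point itself is used."""
    if checkpoint is None or checkpoint < retention_start:
        return retention_start
    return checkpoint

retention = date(2007, 1, 1)
assert cdc_start_date(retention) == retention                     # initial execution
assert cdc_start_date(retention, date(2007, 3, 1)) == date(2007, 3, 1)
assert cdc_start_date(retention, date(2006, 12, 1)) == retention  # before retention
```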
Please note the following correction to the Salesforce.com Adapter Interface
Guide version 11.7.0.0. These changes will be incorporated into a future
release of the manual set.
In Chapter 3, page 29, change the text under "Using the CDC table source
default start date" as follows:
When you do not specify a value for the start date:
• Data Integrator uses the beginning of the Salesforce.com retention period as the start date if a check-point is not available (during initial execution). However, if a table is created within the Salesforce.com retention period and a check-point is not available, the execution will return an error message. To work around this problem, drill into the source object and enter a value for the CDC table source default start date; the value must be a date that occurs after the date the table was created.
Resolved issues
Please refer to http://support.businessobjects.com/documentation for the
latest version of these release notes, which includes resolved issues specific
to version 11.7.3.
The numbers in the following table are sorted in ascending order and refer
to the Business Objects ADAPT system Problem Report tracking number.
ADAPT00534412
When creating a BusinessObjects Universe from Data Integrator using either Microsoft SQL Server or Sybase datastores, the table names were not qualified with the proper database name and owner name in the BusinessObjects Universe. This issue is resolved. Customers using Universe Builder must apply the BusinessObjects Enterprise CHF-13 patch level or later for this fix to take effect. Contact Customer Support for information on how to download CHF patches. See also ADAPT00534412 in the CHF-13 documentation for additional information regarding this fix.

ADAPT00613822
When a DTD schema contains a standard XML header, adapters could not import the schema. This issue has been resolved.

ADAPT00619275
In this release, Data Integrator uses a new version of the Xerces2 library that, when using Data Integrator Web services, allows you to import a WSDL file that previously caused the following error: XML parser failed: Error <An exception occurred! Type: NetAccessorException. This issue has been resolved.

ADAPT00619336
When editing column and table descriptions in a datastore, the descriptions sometimes got lost. This issue has been addressed.

ADAPT00619350
Data Integrator always works in binary collation for the Table Comparison transform. If the comparison table sort order is not binary, Data Integrator will generate inconsistent results between cached, row-by-row, and sorted comparison methods.
ADAPT00619352
When using regular loading and bulk loading for Microsoft SQL Server targets, decimal data rounding was not the same with the two loading methods. This problem has been addressed.

ADAPT00619355
In metadata Reports, the start timestamp for data flows displayed incorrectly. This problem has been fixed.

ADAPT00619357
In a data flow with an XML source containing a join with a nested query, the WHERE clause and ORDER BY clause now compile properly without getting an Unknown type error.

ADAPT00619387
The push-down capability of Informix sources has been improved. Data Integrator now pushes down more queries to the Informix database for evaluation, and you can expect better performance in those scenarios.

ADAPT00619393
Metadata Reports was not showing column-mapping information. This is no longer an issue.

ADAPT00619403
When importing Microsoft SQL Server tables from the Designer, locks would not release on SQL Server's tempdb database. This problem has been resolved.

ADAPT00619413
When working with the Query transform in the Data Integrator Designer, outer join entries sometimes disappeared from the Query transform when modifying other parts of the Query transform. This problem has been fixed.
ADAPT00619414
The Profiler can now profile Microsoft SQL Server database tables that use NT authentication.

ADAPT00619421
When importing a Microsoft SQL Server table that contained an index with an index name longer than 64 characters, the table import operation failed. This problem has been resolved.
ADAPT00619427
For Microsoft SQL Server datastores, if the table owner name is DBO and the CMS connection is also Microsoft SQL Server with a login user name of sa, the updated Universe will determine the new table schema to be new because it compares DBO to sa as the owner name. Therefore, do not use Microsoft SQL Server as a CMS connection with sa as the login name.

ADAPT00619438
Data Integrator generated incorrect outer join syntax when accessing DB2 on AS/400. This issue has been fixed.

ADAPT00619479
Due to a known Informix issue, two-phase commit (XA) transactions in IBM® Informix® Dynamic Server could cause Data Integrator to fail when loading the data to an IBM Informix Dynamic Server or profiling data with Informix. The typical Informix errors are: Could not position within a table and Could not do a physical-order read to fetch next row. The reference number from the IBM site is Reference# 1172548.

An issue with post-load commands in the table loader getting corrupted has been fixed.

After upgrading Data Integrator from 11.0 to 11.5 or 11.7, metadata Reports failed with a Java error. This issue has been fixed.

When using the Profiler to profile a Microsoft SQL Server table containing columns of text data type, the profiling server now works properly.
ADAPT00619835
When running metadata Integrator with a clustered CMS environment, CMS computer names were used when gathering metadata. This caused the CMS computers to appear as separate systems in Impact and Lineage reports. In this release, the metadata Integrator gathers the name of the cluster. Because of this change in behavior, if you are using a clustered environment and have already run the metadata Integrator against the CMS system, you need to delete the data previously collected. Please contact Business Objects Customer Support for details on how to perform this task.

ADAPT00619871
When using an Informix version 7.3 datastore and client software 2.90, the Informix datastore now correctly imports tables into Data Integrator.

ADAPT00619893
If the NLS_LENGTH_SEMANTICS flag was set to char in a UTF-8 encoded repository, the profiler did not work properly for Oracle databases. This problem has been fixed.

ADAPT00619934
The Data Integrator is_valid_datetime() function returned true if the hour contained a value greater than 24. This problem has been fixed.

ADAPT00619969
Data Integrator jobs sometimes exited abnormally on the Linux platform if there was a lookup_ext function call to an Oracle database with a datetime column in the condition list. This fix applies to all UNIX platforms.
ADAPT00620002
On Windows XP operating systems, columns in the query editor now appear correctly highlighted.

ADAPT00620025
In the Administrator, you can now sort schedules by job name.

ADAPT00620158
The Difference Viewer no longer shows that a column's data type and nullable attribute change after changing the owner name of the table.

ADAPT00620250
Data Integrator did not support any non-ASCII characters in which the eighth bit was being used in the metadata (for example the umlaut in German). This is no longer an issue.
ADAPT00620297
Fixed-width file formats were not being created correctly from existing flat files. This problem has been fixed.

ADAPT00620308
When the Data Integrator-imported table schema contains a long column that does not match its associated data type in the database, it could cause an access violation error in the Data Integrator engine. This problem has been resolved.

ADAPT00620314
When a real-time job terminated, two more instances of the jobs started. This problem has been fixed. Real-time jobs now start only once if terminated abnormally.

ADAPT00620336
The documentation for Catch error types and groups was unclear. The Reference Guide and Designer Guide have been corrected to clarify that you specify exception groups, instead of individual error numbers, in a Try/Catch block.

ADAPT00620338
The Designer could not import materialized views on an Oracle 10g database. This issue has been addressed.

ADAPT00620345
When validating a large job, the Designer sometimes aborted because of depleted memory available on the system. This problem has been fixed.

ADAPT00620352
When monitoring job execution, on the Monitor tab, the columns Row count and Elapsed Time were not sorted in ascending order. This problem has been fixed.
ADAPT00620375
Data Integrator license keys sometimes did not work with dual Ethernet cards. This problem has been resolved.

ADAPT00620377
When a DTD schema contained a standard XML header, adapters could not import the schema. This issue has been resolved.

ADAPT00620665
When data type conversion was required in a Query mapping (for example when mapping a varchar column to an integer column), the Designer displayed warnings during validation, but an error occurred at runtime. This issue has been addressed.
ADAPT00620689
The Data Integrator index() function was not pushed down to Oracle. The fix addresses this problem. Please note that there are slight differences between executing the index function within Oracle and executing the index function within Data Integrator. See the instr() function in the Oracle documentation for details.

ADAPT00620882
Using the functions lpad and rpad sometimes led to crashes such as an access violation. This problem has been fixed.

ADAPT00620958
Jobs that were running did not get displayed in the Administrator if the log retention period had been reached. This problem has been fixed.

ADAPT00621069
The following error was encountered when working with ABAP programs: ABAP SYNTAX ERROR Without the addition "CLIENT SPECIFIED", you cannot specify... This problem has been resolved.

ADAPT00621143
When using a variable of type datetime in the Reverse Pivot transform in the Pivot Axis, the following warning is no longer thrown: "Unexpected axis value <2003-08-15> found in transform". This issue is resolved in this release by providing variable support in the Reverse Pivot transform.
ADAPT00621184
Passwords on some pages of the Administrator are now encrypted properly. The text of the password can no longer be viewed.

ADAPT00621198
When using multiple datastore configurations for SAP datastores, the folder names between the configurations are no longer switched.

ADAPT00621229
WSDLs generated by Data Integrator can be imported into a .NET environment.

ADAPT00621278
In this release, Data Integrator has been enhanced to generate DTDs that can handle multiple rows.
ADAPT00621334
Data Integrator can now import an XML file as a large varchar column. Please note that when loading this data into a target database, most databases have a maximum size restriction for their character data columns. Note the restrictions for your target database and design the Data Integrator jobs accordingly.

ADAPT00621340
When a job has multiple validation transforms, the labels that appear in the DI_ERRORCOLUMN target table are now correct.

ADAPT00621996
When entering Japanese characters in the Case Transform labels, you will no longer get an "Internal Application Error" when saving the data flow.

ADAPT00622074
If you entered an invalid user name or password in the Administrator, the values remained in the fields. The password field now clears.

ADAPT00622121
When using a variable in the Error file name field in the flat file editor, a runtime error occurred when running the job. This problem has been resolved.

ADAPT00622137
LONG and TEXT data type columns do not get profiled.

ADAPT00622188
It is now possible to enter a text delimiter in the flat file format. The format rules and limitations for the text delimiter are the same as the rules applicable to the field and column delimiters.

ADAPT00622190
If load triggers were set up in a table loader, sometimes Data Integrator did not execute the regular load. This problem has been resolved.

ADAPT00625180
Sometimes Data Integrator wrote incorrect data to the target loader overflow file if using the data type char. This issue has been resolved.

ADAPT00627739
The following error sometimes occurred when Data Integrator processed a complex nested COBOL copybook: Wrong sequence number for field ... This problem has been addressed.
ADAPT00629178
In a Case transform, when entering a tab character in the expression field and saving the data flow, the text following the tab character disappeared. This issue has been fixed.

ADAPT00631055
While loading data from a source database to a target database, Data Integrator generated the ORA-01461 error when using an Oracle database as repository. The NLS_CHARACTERSET of the Oracle database is AL32UTF8. This problem has been fixed.

ADAPT00632204
During the export of a datastore, you could not change the owner of a table or database stored procedure. This problem has been fixed.

ADAPT00633094
If a SQL transform was used in a data flow and the transform did not define an output schema, the data flow was corrupted. This problem has been resolved.

ADAPT00633774
The JMS interface is not supported with IBM MQ Series 6.0.

ADAPT00633872
The Data Integrator job failed when it contained large COBOL copybook schemas. This issue was addressed by modifying the internal Data Integrator language generation to eliminate unnecessary fields and attributes.

ADAPT00633885
The Designer crashed when performing View data with a filter condition on file formats such as COBOL copybook. This issue has been fixed.
ADAPT00633919
If a COBOL copybook contained a REDEFINES clause and it was used together with the OCCURS clause, Data Integrator interpreted the data incorrectly.

ADAPT00634484
If a WSDL from a third party already contains xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\", the Data Integrator Web Service could not import it. This problem was caused by Data Integrator automatically adding the string while getting the XSD for the message. This problem is fixed.

ADAPT00635201
In the Data Integrator Designer, the trace logs are now sorted properly by date when clicking on the Date column header.
ADAPT00635698
In the DB2 table loader, if the Use input key option is selected and any generated UPDATE statements updated more than one row, the load failed. This problem has been fixed.

ADAPT00635475
A new option has been added for the Sybase IQ bulk loader that allows you to specify the row delimiter in the data file generated by Data Integrator.

ADAPT00637736
In a real-time job, using the same data flow more than once caused an incorrect SQL INSERT statement to be generated. This issue has been addressed.

ADAPT00638268
When running Data Integrator with SAP R/3 version 3.x, the following error could occur: BODI-1112339:Error:Cannot import the metadata table <name=*-CCSS-KOSTL>. R/3CallReceive error <function Z_AW_TREE_IMPORT40: Call (PERFORM) to a non-existent routine. This problem has been fixed.

ADAPT00641650
When using the Ignore row marker option in the flat file format, data truncation could occur. This problem has been fixed.

ADAPT00642513
When importing a database link created in Microsoft SQL Server 2005, the Designer did not recognize the database link as a valid link. This issue has been addressed.
ADAPT00642531
When a COBOL copybook contained REDEFINES sections, the data for those fields was not processed properly by Data Integrator. This issue has been resolved.

ADAPT00642556
When a data flow or a work flow runs in an infinite loop, the Data Integrator engine process (al_engine) leaked memory. This problem has been fixed. Data Integrator also now submits a slightly different SQL statement to Microsoft SQL Server to improve performance when retrieving changed data from the SQL Server replication server distribution database.
ADAPT00644740
ADAPT00645348
ADAPT00645757
ADAPT00646862
When defining a custom function in the Designer, the order
of the input parameters was reversed in the function. This
problem has been fixed.
In Auto Documentation, the Case Transform did not display
correctly. This problem has been fixed.
The Repository Manager failed if you upgraded a repository
from a computer with a Designer-only Data Integrator installation. This problem has been fixed.
For Microsoft SQL Server CDC readers, there is a new option in the source editor called Automatically delete rows after reading. This option lets you delete rows after reading them from the replication server to manage server volume. For example, if many rows collect on the replication server during a given retention period, performance could suffer.
If you select this option, Data Integrator deletes only the rows that have already been read by all Data Integrator CDC readers and other named subscriptions (and also observes the retention period setting). However, if there are additional anonymous subscribers, Data Integrator cannot identify them and could delete rows regardless of whether they have been read by those subscribers.
A CDC reader subscription is not established until the first
time a job with such a subscription is successfully executed.
If the Automatically delete rows after reading option is selected in the CDC reader, and this reader with this subscription has not been executed before in any other data flow, Data Integrator assumes that there is only one subscription and purges the data after reading. This behavior stops after the job has been executed at least once. If that is not the desired behavior, try these workarounds:
•Do not enable Automatically delete rows after reading for the first run of the Data Integrator job, to establish all subscriptions.
•If a CDC table has more than one subscription, check the Delete after reading option in one CDC source only and execute the corresponding job last.
ADAPT00647533
ADAPT00647923
ADAPT00648523
ADAPT00648631
ADAPT00652463
ADAPT00655407
The function get_domain_description() did not return a value
when used with PeopleSoft 8.9 domains. This problem has
been fixed.
The Performance Optimization Guide incorrectly stated that
if a data flow was not valid or saved, Data Integrator alerted
you when you tried to view optimized SQL for the data flow.
The guide has been corrected to state "If a data flow is not
valid, Data Integrator alerts you."
Fixed-width file formats were not generated correctly if the
data file was located on a remote server. This problem has
been fixed.
When you log off from the Windows computer where the Job Server is running, Data Integrator jobs launched against this Job Server now continue to run.
The following error:
Unknown exception in rdr::open()
occurred when reading an XML file using the Designer. This
problem has been fixed.
The combination of UTF8 engine code page and MAP_CDC
transform no longer generates the following warning: invalid
Value <> found for row operation column. Previously, this
problem only appeared on UNIX platforms.
ADAPT00655691
ADAPT00655863
ADAPT00656102
Data Integrator sometimes generated virtual memory errors
while executing jobs containing Oracle sources with varchar
columns. This issue has been addressed.
Data Integrator Web Services only supports the UTF-8 code
page.
When reading a COBOL copybook that contains a REDEFINES clause, the data was not aligned properly when viewed in the Designer. This issue has been addressed.
ADAPT00656803
ADAPT00657456
ADAPT00658163
ADAPT00658378
ADAPT00658811
ADAPT00659178
Data flows now work correctly when a column in the Map Transform is designated as "optional" and the column is not mapped.
Validation conditions were not displayed in Auto Documentation. This issue has been addressed.
If there were a large number of logs in a Data Integrator instance, the Designer might have appeared to freeze when
launched. If there were real-time services configured for this
instance, the services could fail due to the delay. This issue
has been resolved in this release.
Data Quality transforms were not supported for real-time
jobs.
When creating a Data Integrator repository on IBM DB2,
only the UTF-8 code page is supported.
When executing the Table Comparison transform in row-by-row mode with the History Preserving transform, Data Integrator added an extra current row to the output when encountering a certain type of input row. This situation occurred when the input of the Table Comparison contained rows where the primary key data and the compare column data were the same as the current row in the target. This issue has been addressed.
ADAPT00659451
In Auto Documentation, if a data flow contained a While
loop, the While condition and the data flow did not display
correctly. This issue has been addressed.
ADAPT00660212
ADAPT00661720
ADAPT00667007
ADAPT00668842
ADAPT00671056
A new utility, FixupDIMapText, is included in this release to allow you to clean up a repository that could have corrupted mappings in its data flows. If you notice that changes to mapping values are not reflected after saving the data flows, run this utility.
The Salesforce.com adapter occasionally encountered a Java OutOfMemory error. This problem has been fixed.
When using the bulk loader option in a Netezza target table,
Data Integrator does not truncate spaces to null for character
columns.
When performing Impact and Lineage analysis with Business
Objects 6.5, the Universe impact does not contain WEBI
document information.
The Metadata Integrator generated an error when the configuration instance name was set to something other than the default. This problem has been fixed.
ADAPT00672346
ADAPT00673438
ADAPT00673927
ADAPT00674212
When using the smtp_to function to send e-mail, the send
time and receive time were not synchronized. This problem
has been fixed.
Data Quality Project descriptions are not imported into Data
Integrator Object Library.
In some cases, Data Integrator did not release connections
to the Job Server and depleted system resources. This
problem has been resolved.
When using the View Data functionality with a SQL Server
2005 CDC table, the following error no longer occurs if the
last column in the table is a numeric type: "cannot convert
data <0> into type <DECIMAL> Context".
ADAPT00674474
ADAPT00675249
ADAPT00674212
ADAPT00678466
ADAPT00681355
ADAPT00682563
ADAPT00682686
When editing Job Server information using the Server
Manager, all of the Job Servers in the Server Groups appear.
On Linux, completed exec() function processes in a job
stayed in a defunct state. This issue has been fixed.
When using the View Data functionality with a SQL Server
2005 CDC table, the following error no longer occurs if the
last column in the table is a numeric type: Cannot convert
data <0> into type <DECIMAL> Context Column.
When using the Oracle char size semantic to define a column data type as char or varchar, Data Integrator no longer
invokes an Oracle error at runtime when performing the
binding process for the column.
In a Salesforce.com adapter, a CDC table can now handle changes that are equal to or greater than 2000 records.
Importing an existing Data Quality project does not invalidate
the data flow.
When inserting a string of blank characters to an Oracle
column of data type char or nchar, blanks are inserted.
ADAPT00683039
ADAPT00683167
ADAPT00685077
ADAPT00685406
When a job creates hundreds of DB2 connections to evaluate lookup function calls, Data Integrator no longer unnecessarily replicates them.
When creating sources or targets from tables in a datastore that uses persistent cache, the View Data operation is now available.
Changes to the Data Quality datastore parameters were
not reflected in the properties page of the Data Quality
transform. This issue has been resolved.
When exporting a data flow containing lookup_ext functions
to a flat file, validation of the data flow sometimes failed
when importing the data flow back into the Designer. This
problem has been fixed.
ADAPT00685442
ADAPT00686674
ADAPT00687185
ADAPT00687873
ADAPT00688040
ADAPT00688861
ADAPT00688976
If you use a non-US time zone with the Salesforce.com
adapter, you will no longer encounter a problem when inserting/updating records.
In the Windows environment, Data Integrator Web services
no longer shut down when users log off.
A Java error is no longer encountered when using the print functionality in Auto Documentation.
When performing View Data on a column of type CHAR,
the data is no longer double-quoted to the full length of the
column. In this release, the double quotes are not displayed
when viewing the CHAR data.
When you view monitor logs in the Administrator, the
memory usage of the job server process no longer increases.
If the Job Server died unexpectedly, all the adapters kept running. This problem is resolved: the adapters now exit gracefully in this situation.
If the Web Services adapter does not work due to an incorrect keystore path, specify the correct keystore path then
restart the Tomcat Web server.
ADAPT00689618
ADAPT00689651
ADAPT00691549
ADAPT00691896
ADAPT00692414
You must have the same schema name as the MySQL datastore user name to make data transfer transforms work with the automatic option.
When using a function inside the decode function, the parameter list of the nested function is now processed properly; the ordering of the parameters is no longer reversed.
If Data Integrator functions are used in the WHERE clause
in the Query transform, outer joins now evaluate properly.
When calling a SQL Server 2000 stored procedure from Data Integrator, the varchar parameter value is now passed correctly.
When viewing complex jobs in Auto Documentation, the
print functionality no longer fails.
ADAPT00694262
ADAPT00695487
ADAPT00695492
ADAPT00695837
ADAPT00696321
ADAPT00696333
ADAPT00696434
Auto Documentation for the Data Quality transform now shows the description for the Data Integrator input fields. Unused Data Quality output fields are no longer displayed without showing the data type.
Both Windows and UNIX server configurations now allow central repositories to be associated with a Job Server.
A warning message is no longer thrown when you attempt to log in to the central repository.
When running a real-time job that contains an XML message format as a source as well as an XML format in the transform, you might encounter an error with WSDL generation if the real-time job is published via the Administrator. This problem has been addressed in this release. Open the job in the Designer and re-save it for this fix to take effect.
In German regional settings, Teradata bulk loading now loads the correct decimal data to the target.
Data Integrator correctly reads data from a COBOL copybook on an operating system with German regional settings.
When using the validation transform on Solaris, the job no
longer exits incorrectly.
ADAPT00700221
ADAPT00700254
ADAPT00700991
ADAPT00701078
You can now use a lookup_ext function as part of the input
schema to the pivot transform as a non-pivot column.
Table comparison now provides the correct results when
the input row and/or target table row have null values.
This release of Data Integrator includes fixes to address the new U.S. Daylight Saving Time rule change. For this fix to take effect, apply a DST patch from the operating system vendor to the computers that host Data Integrator components.
Data Integrator correctly imports XMI files using the Metadata Exchange functionality.
ADAPT00702076
ADAPT00702312
ADAPT00702353
ADAPT00702817
ADAPT00704503
ADAPT00706671
Data Integrator is now compatible with the Salesforce.com version 7.0 API. With this API upgrade, Data Integrator can now support custom lookup functions. Any other new functionality specific to Salesforce.com 7.0 is not supported. To ensure that your existing Salesforce.com adapter works properly, update the URL in the datastore so that it points to the Salesforce.com 7.0 API. The new URL is https://www.salesforce.com/services/Soap/u/7.0.
Data Integrator can now use SQL Server to load tables with an owner name that is different from the one used in the datastore.
The Metadata Integrator no longer fails when collecting Universes stored in a subfolder.
When using Auto Documentation to generate a PDF file, a Java NullPointerException error no longer occurs.
When you specify the row delimiter '/10' (line feed character) in a flat file, Data Integrator no longer interprets this character differently on the Windows and UNIX platforms.
When reading a flat file containing varchar data that does not match the schema of the file format, the job no longer exits abnormally.
ADAPT00709026
ADAPT00709500
ADAPT00711853
ADAPT00712157
When comparing a local repository object to a central
repository object in a multiuser environment, the Designer
no longer exits abnormally if you are using custom SQL in
the column mapping.
When Data Integrator read COBOL copybooks containing Occurs Depending On items with multiple counters for a single record, Data Integrator recognized only the first counter item. In this release, Data Integrator is able to handle multiple Occurs Depending On clauses when reading COBOL copybooks.
In chapter 4 of the Reference Guide, the information for
varchar data type is now correct.
The "view from end" functionality in the Web Administrator
is restored.
ADAPT00712275
ADAPT00713935
ADAPT00716264
ADAPT00716293
ADAPT00716886
ADAPT00717175
ADAPT00717659
For a Microsoft SQL Server repository created in Data Integrator version 11.0, upgrading to 11.6.0 or 11.7.0.0 would fail if the table AL_CMS_REPORTS was not empty. This problem has been fixed.
Oracle repository upgrade now works with SQL when upgrading from 11.5.1.5 to 11.7.0.0.
MySQL version 5.0 can now be used as a repository for
Data Integrator. Creation of repositories is now supported.
In the Management Console under Impact and Lineage,
Universes and reports belonging to a user now appear automatically in the "Objects to analyze" window.
When using the Oracle CDC subscriber user as the login user of the Oracle CDC datastore, you can now access CDC data. The owner name for the internal views is now correctly generated.
The Data Integrator Designer issued cursors with hold on DB2 system catalog tables. This issue has been resolved by submitting a commit statement after read access.
The Metadata Integrator no longer fails when collecting Desktop Intelligence report information.
ADAPT00718127
ADAPT00720453
ADAPT00720989
When there was a mismatch between the Sybase server version and the Sybase client version (such as 12.5.3 for the Sybase ASE server and 12.0.0 for the Sybase client), the Data Integrator job could fail. This problem resulted from incorrect processing of error messages sent from Sybase to Data Integrator. This issue has been resolved.
When using a file in the lookup_ext function, Data Integrator
now cleans up the file handle properly so files are correctly
locked.
When viewing data in a Designer running in a Japanese locale, error messages from the View Data module were not legible. This issue has been resolved.
ADAPT00721616
ADAPT00722498
ADAPT00723254
ADAPT00724995
ADAPT00726764
ADAPT00729550
Repository upgrades from repository version 6.1 to 11.x now work if the repository contains a datastore with the ms874 code page.
The to_date function was not pushed down to the Oracle database. This issue could happen if you specified FF instead of ff in the format string of the to_date function to indicate fractional seconds. This issue has been resolved.
After upgrading an Oracle pre-11.7.0 central repository to
11.7.0, you can now access the Object State Report via the
Data Integrator Management Console.
On Windows 2003, the Data Integrator engine no longer
fails when launched from the Designer and the Management
Console.
When using the table comparison transform with duplicate
input rows and sorted input options together, UPDATE rows
are now detected.
The following warning message no longer appears when
the Rows per commit option has been set to 1:
Warning: Rows per commit for table loader has
been reset to 1 because table <T_LONG_TEST>
contains a LONG column
This issue has been resolved.
ADAPT00729957
ADAPT00730595
ADAPT00730703
ADAPT00731785
Web services now handle outbound calls properly if the message name is the same as the referenced element name.
When Data Integrator is running in UTF-8 mode, the Lookup
function now returns the correct result when comparing a
decimal number with a database that is also in UTF-8 mode.
When Japanese characters are used in function raise_exception(), the messages displayed when the function is
called are now correct.
The Data Integrator Profiler now correctly detects distinct
rows if there is a large number of distinct rows in the table.
ADAPT00733805
ADAPT00736500
ADAPT00738912
ADAPT00738936
ADAPT00740642
If the input to the Data Quality transform is not mapped and the output of the transform is mapped, Data Integrator freezes when executing the job. Ensure the transform is properly mapped before running the job.
Under the following circumstances, the Profiler could stay
in the pending state indefinitely:
•The repository is not connected to a Job Server.
•The associated Job Server is not running.
When performing an outer join with the "IN" keyword in the WHERE clause, Data Integrator now generates the correct SQL to push down to the database.
There is a known issue with Data Quality projects that contain periods in the field names. Although the projects can be imported from the Data Quality server, Data Integrator cannot correctly map these fields. The project can be used in a data flow as long as the fields containing periods are not used. A workaround for this problem is to rename any Data Quality project fields to use another character instead of the period. Business Objects suggests using the underscore character (_).
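The suggested rename can be sketched as follows (a hypothetical Python helper for illustration only; sanitize_field_name is not part of Data Integrator or Data Quality):

```python
# Hypothetical helper illustrating the suggested workaround: replace the
# period in Data Quality field names with the underscore character (_).
def sanitize_field_name(name: str) -> str:
    """Return the field name with each period replaced by an underscore."""
    return name.replace(".", "_")

fields = ["Customer.Name", "Customer.Address.City", "Phone"]
print([sanitize_field_name(f) for f in fields])
# → ['Customer_Name', 'Customer_Address_City', 'Phone']
```

Fields without periods pass through unchanged, so the rename can be applied uniformly before the fields are used in a data flow.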
When using the Sybase IQ bulk loader with a target table
schema that does not have the same number of columns
as the source table schema, Data Integrator jobs could fail
with an Access Violation error. This problem has been
resolved.
ADAPT00741303
ADAPT00741570
The Message Client sample (client_sample) program in the
AccessServer directory has not been updated for this release. Please check the Business Objects Knowledge Base
for any updates.
When an object such as a work flow was renamed in the local repository, Data Integrator threw an error indicating that the dependent objects did not exist. This issue has been resolved.
ADAPT00742231
ADAPT00742242
ADAPT00743816
ADAPT00743879
ADAPT00746019
When the order of the columns in the query transform is
different from the input columns of a Microsoft SQL Server
bulk loader, the load operation now works correctly.
If you specify the bulk loader in the target, change the order of columns in the query, and execute the job without trace enabled, the job execution no longer fails.
When a SQL Server NCHAR column is used in the primary key and compared column, the correct SELECT statement is now generated for the Table Comparison transform row-by-row option.
Data Integrator did not always behave correctly when using the Table Comparison transform with the row-by-row option. This could happen if the compared table was in an Oracle UTF8 database and an NCHAR column was used as a primary key or compared column. In that case, Data Integrator generated an incorrect SQL SELECT statement. This issue has been resolved.
When publishing a job as a Web service with the Enable job attributes option, you might have encountered the following error when importing the WSDL:
DIBODI-1112015: adapter metadata import failed: error parsing
This issue has been resolved.
ADAPT00749771
ADAPT00750372
When the escape character in a flat file appears as part of
the data, Data Integrator now escapes the character properly.
A new flag has been introduced to provide read-uncommitted capability for Informix datastores. To use read-uncommitted data for Informix datastores, in the Designer choose Tools > Options > Job Server > General. Enter al_engine in the section field, InformixReadUncommitted for the key, and TRUE for the value.
ADAPT00751136
ADAPT00751141
ADAPT00751731
ADAPT00752299
ADAPT00755702
ADAPT00755958
When exporting a data flow to another repository and the
owner of the datastore changed in the process, the new
owner name was not reflected in the lookup function in the
exported data flow. This issue has been resolved.
Creating embedded data flows failed when the content contained a validation transform. This issue has been resolved.
When loading to a flat file field defined as a DECIMAL(12,0)
or DECIMAL(15,0) with formatting #,##0.0 enabled, Data
Integrator truncated the digits to the left of the decimal point.
This issue has been resolved.
Data Integrator generated the incorrect PL/SQL for an Oracle loader if the column name was quoted. This problem
has been resolved.
In the Validation transform editor, dragging a column into
the editor from a datastore and adding a condition invoked
the BODI-1112468 error. This issue has been resolved.
When using Auto Documentation to view a column mapping
that contained a Key Generation function, the mapping text
did not display properly in the HTML format. This issue has
been resolved.
ADAPT00755960
ADAPT00761033
ADAPT00761605
ADAPT00763665
When launching multiple jobs at the same time, some jobs might not have run, producing the error Could not find the GUID. This issue has been resolved.
When using SFDC Boolean data in the WHERE clause of
a Data Integrator query transform, Data Integrator did not
interpret the Boolean values properly. This issue has been
resolved in this release.
When exporting a COBOL copybook, the ATL file did not show the output schema. This problem has been resolved.
When Data Integrator pushed down a query to DB2 that
contained the to_char function, the DB2 syntax generated
by Data Integrator was incorrect. This issue has been resolved.
ADAPT00764861
ADAPT00765394
ADAPT00765441
ADAPT00765502
ADAPT00767564
ADAPT00770720
In this release, Data Integrator supports Sybase IQ 12.7 as
a source and target. This support is available on all platforms. Note that in 11.7.2.2, support for this version of
Sybase IQ was available on Windows only.
In this release, Data Integrator supports SAP BI 7.0 (formerly known as SAP BW). This connectivity is available on all platforms. Note that previous releases of Data Integrator already provided support on some of the platforms.
In this release, Oracle E-Business Suite Release 12 support
has been added on all platforms. Note that in 11.7.2.3 this
support was already available on Windows and Linux.
When turning on the trace for the loader from the Designer, the row-level data did not display. This issue has been addressed.
When using Metadata Exchange functionality to export
metadata to XML ERWin 4.x format, the NULL option values
of the columns were reversed. This issue has been resolved.
Data Integrator did not allow spaces between column names
and the ! character. This issue has been resolved in this
release.
ADAPT00770958
After modifying the input or output parameter of a stored
procedure, Data Integrator did not retain the change after
restarting the Designer. This issue has been addressed.
ADAPT00773839
When using table sources from a DB2 datastore or ODBC datastore with a WHERE clause in the Query transform, the WHERE clause for that source might not be pushed down to the database. This situation could happen if the WHERE clause contains a concatenation operation; the WHERE clause will then be evaluated inside Data Integrator. Note that in Data Integrator, the evaluation of the concatenation operator with NULL operands might differ from the DB2 or ODBC data source evaluation rule. In Data Integrator, a value concatenated with a NULL value results in the value itself. This might not be the case with DB2 or generic ODBC sources.
This release includes a flag to allow the WHERE clause in the Query transform to be pushed down to DB2 or ODBC when possible. Note that depending on the data flow construction, Data Integrator may or may not push down the WHERE clause. If this flag is turned on, the concatenation of NULL values in the WHERE clause may behave differently in different parts of the data flow, depending on whether Data Integrator is evaluating the expression. To avoid this discrepancy in evaluation, Business Objects recommends that you use this flag only when no concatenation operator is used with DB2 sources and ODBC sources in your repository.
To enable the WHERE clause containing a concatenation operator for DB2 and ODBC sources to be pushed down, in the Designer choose Tools > Options > Job Server > General. Enter al_engine in the section field, enter PushdownODBCDB2Concat for the key, and enter TRUE for the value.
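The difference between the two evaluation rules described above can be modeled as follows (an illustrative Python sketch, not Data Integrator code; the function names are hypothetical, with None standing in for NULL):

```python
# Illustrative model of the two NULL-concatenation rules described above.

def concat_di(a, b):
    """Data Integrator rule: a value concatenated with NULL yields the value itself."""
    if a is None and b is None:
        return None
    return (a or "") + (b or "")

def concat_sql(a, b):
    """Typical DB2/ODBC rule: any NULL operand makes the whole result NULL."""
    if a is None or b is None:
        return None
    return a + b

# The same WHERE-clause expression can therefore evaluate differently
# depending on whether it runs inside Data Integrator or is pushed down:
print(concat_di("ABC", None))   # 'ABC'  -- evaluated inside Data Integrator
print(concat_sql("ABC", None))  # None   -- evaluated after push-down
```

This is why the flag is recommended only for repositories where no concatenation operator is used with DB2 or ODBC sources: the same expression could otherwise yield different results in different parts of a data flow.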
ADAPT00774266
ADAPT00774804
When saving a data flow in the Designer with the Calculate column mapping option enabled, the memory consumption of the Designer could increase to the point that all available virtual memory was consumed and the Designer exited abnormally. This problem has been resolved by eliminating unnecessary memory copy operations.
In this release, Data Integrator is certified to work with Microsoft Internet Explorer version 7. This certification is available on Windows only.
ADAPT00774812
ADAPT00774892
ADAPT00774914
ADAPT00774940
ADAPT00777105
ADAPT00777107
In this release, Data Integrator supports Sybase ASE version
12.5 on Solaris 64-bit and AIX 64-bit platforms as source,
target, and repository. Note that on other platforms Sybase
ASE version 12.5 was already supported in prior versions
of Data Integrator.
In this release, Data Integrator supports DB2 version 9.1 as
a source, target, and repository. This support is available
on all platforms. Note that in 11.7.2.3, this certification was
already available on Windows and Linux platforms.
In this release, Data Integrator has been certified to be
compatible with BusinessObjects Enterprise version XI R2
SP3.
In this release, Data Integrator has been certified on Microsoft Windows Server 2003 R2 SP2.
When using Metadata Exchange to generate XML metadata in ERWin 4.x format, the size of CHAR data type columns did not reflect the actual length of the column definition. This issue has been resolved.
In the XML file reader, you could not specify a network path
in the file name. This issue has been resolved.
ADAPT00777109
ADAPT00779045
ADAPT00784324
When running a Data Integrator job that contained nested
schemas, the job sometimes consumed all of the available
memory and stopped execution abnormally. This issue has
been resolved.
When compacting a Data Integrator repository, you could
get a Unique constraint error from the database. This
issue has been resolved.
Using Auto Documentation to display a file format that contained non-printable characters in the row marker invoked the java.lang.NullPointerException error. This issue has been resolved by printing a space in place of the non-printable character.
ADAPT00786004
ADAPT00788762
ADAPT00793560
ADAPT00796163
ADAPT00797003
In this release, Data Integrator supports Teradata V2R6.2
via TTU8.2 clients. This support is provided on the Windows
and Linux platforms only.
When using aliases in Data Integrator datastores and creating a Business Objects Universe with Data Integrator
metadata, information on Data Integrator objects did not
display properly in the Business Objects Designer. This issue has been resolved.
When using the float data type, Data Integrator rounded the data at approximately one-tenth precision. This problem has been addressed in this release. Note that float is an imprecise data type, which means rounding occurs if the value has a large precision. Business Objects suggests using a different data type, such as decimal, if your application requires a consistent numeric result.
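The imprecision noted above can be illustrated with a short sketch (plain Python, not Data Integrator; Python's float is a 64-bit double and the decimal module stands in for the decimal data type):

```python
# Illustrative sketch: a high-precision value survives in a decimal type but
# is rounded when stored in an imprecise 64-bit float.
from decimal import Decimal

value = "1234567890.123456789"   # 19 significant digits

as_decimal = Decimal(value)      # preserves every digit
as_float = float(value)          # rounded to the nearest representable double

print(as_decimal)                # 1234567890.123456789
print(str(as_float) == value)    # False -- trailing digits were rounded away
```

A double holds only about 15-16 significant decimal digits, so any value with a larger precision is silently rounded; a decimal type avoids this for applications that need a consistent numeric result.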
As of this release, Data Integrator Metadata Integrator can
retrieve metadata from CMS systems in SSL mode.
When loading into an Oracle table that contained multiple
CLOB columns, if the data being loaded to the column
contained data larger than 4,000 bytes and if another CLOB
column contained NULL, Data Integrator invoked the error
String literal too long. This issue has been resolved.
ADAPT00799347
ADAPT00802253
When using the Table Comparison transform in sorted input
mode, if the input data set is considerably smaller than the
compare table data set, the transform could take extra time
to complete after the input rows had been exhausted. This
issue has been resolved.
When reading level 3 of an XML file source, an Access
Violation error occurs. This issue has been resolved.
ADAPT00804492
ADAPT00804532
ADAPT00806765
ADAPT00816263
ADAPT00816382
The Display Optimized SQL functionality sometimes caused the Designer to crash. This bug was due to an implicit data type conversion in the WHERE clause of the Query transform when the conversion applied to a non-column reference. The ordering of the WHERE clause could also be a factor. Running a job with this scenario could terminate due to error <170101>. This issue has been resolved in this release.
In a data flow that joins multiple R/3 data flows, the job could
fail. This problem has been resolved.
When using the Salesforce.com adapter with the DI_PICKLIST_VALUE virtual table and a pick list of 100 or more values, the job failed with the error message Error reading from <AIView1>: <Unknown object type. Salesforce.com message is Object type cannot be null>. This problem has been resolved, and Data Integrator can now handle batches of any size.
The database reader leaked memory if the reader contained
varchar columns and the native data type of the column was
non-varchar (such as nvarchar, nchar, graphic, etc.). This
issue has been addressed.
Data Integrator did not recognize the computational field value correctly for some COBOL compilers. This issue has been fixed.
ADAPT00818697
ADAPT00819587
When running Data Integrator jobs on the Solaris platform,
the parent engine froze intermittently in some environments.
This issue was caused by a problem in the process forking
mechanism from the operating system. This issue has been
resolved by using a different process forking mechanism
on the Solaris platform.
Using multithreaded processing for incoming requests, each
Data Integrator Job Server can now accommodate up to 50
Designer clients simultaneously with no compromise in response time. For more information about this feature, see
Job Server enhancements on page 41.
ADAPT00820475
When calling an SAP function, the AL_RFC_RETCODE
parameter did not return correctly if the Data Integrator engine was running in UTF-8 mode. This issue has been resolved.
ADAPT00820896
As of Data Integrator 11.7.2.3 there are two changes in behavior from prior versions:
•If you upgrade a repository from a version prior to
11.7.0.0, all of the data flows will have a cache type of
pageable. Previously the cache type was in-memory.
•When running jobs in Collect statistics for optimization
mode, the cache type for the data flows was always
pageable. Previously, Data Integrator honored the cache
type setting at the individual data flow level. This change
in behavior was made to ensure that the job will run to
completion regardless of the amount of data being processed.
To ensure that Data Integrator uses the optimal cache type
when running the jobs in a production environment, Business
Objects strongly suggests that you run jobs with the option
Collect statistics for optimization set at least once with
a data volume similar to the production environment. Once
Data Integrator has the statistics on the job memory usage,
it will use the collected information to determine the best
cache type to use (pageable or in-memory).
To collect statistics, you can enable the Collect statistics
for optimization option when running individual jobs. Alternatively, you can enable a global flag to always collect
statistics for optimization for all jobs regardless of the job
level setting, which is a new feature in 11.7.2.3. After the
statistics have been collected for all jobs, disabling the same
global flag ensures that the job level settings are again
honored.
To enable this global flag, in the Designer, select Tools >
Options > Job Server > General. Enter engine in the
section field, enter collect_statistics_for_optimization
for the key, and enter TRUE for the value.
To disable the global flag in order to honor the individual
job level settings, set the value of
collect_statistics_for_optimization to FALSE.
For proper maintenance of your jobs, Business Objects
recommends that you collect statistics for your jobs whenever there is a significant change in data volume to ensure
that Data Integrator continues to make the right cache decision when running jobs.
ADAPT00823175
ADAPT00823263
ADAPT00823272
ADAPT00823545
ADAPT00823899
ADAPT00823904
When dragging Data Quality transforms into Data Integrator
Designer and the Job Server is on the HP-UX platform, you
might get the error BODI-1112436. This issue has been
resolved.
The Job Server could not start when Data Integrator was
installed with a Spanish code page and the Windows
regional setting was also Spanish. This issue has been addressed.
The sql() function call used with DB2 datastore resulted in
a small memory leak. Depending on where the function call
was used, the memory leak could compound and cause the
memory usage of the job to increase. This issue has been
resolved.
When Japanese characters were used in an Annotation and
the data flow was exported and imported in XML mode, the
Japanese characters did not display properly. This issue
has been resolved.
Creating a Data Integrator repository on a MySQL database
sometimes resulted in a syntax error. This issue has been
resolved.
When using table names containing the $ character in a
nested lookup_ext function, Data Integrator could not validate the function after restarting the Designer. This issue
has been addressed.
ADAPT00823958
Under certain circumstances, the repository displayed twice
in the Server Manager. This issue has been resolved.
ADAPT00824010
ADAPT00829000
ADAPT00829117
ADAPT00829859
This release of Data Integrator supports the Netezza ODBC
driver 3.1.4 on all platforms. Note that this feature was
available on Windows and Linux platforms as of 11.7.2.3.
Note that due to a change in Netezza's ODBC driver, by
default Netezza's internal log files for bulk loading will be
written to Data Integrator's root directory. You may delete
these temporary log files upon job completion. If you want
to store the temporary Netezza log files in a different location
for easier management, specify the desired log file directory
location in the Netezza ODBC driver.
In addition, grant the CREATE EXTERNAL TABLE privilege
to the Netezza user, which is necessary to use Netezza table
loaders.
The Convert R/3 null to null option in the reader did not
treat NULLs or spaces properly. This issue has been resolved.
The error Job server not part of server group
displayed if you changed the repository password through
the Management Console. This issue has been resolved.
Loading performance for MySQL tables has been improved
in this release. The insert/update/delete statements
of the MySQL loader are now parameterized, which allows
Data Integrator to submit batches of data to the database
for faster processing on the database side. This enhancement applies to straight INSERT/UPDATE/DELETE rows
to the MySQL target table.
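The benefit of parameterized statements can be pictured with a small sketch. Python's sqlite3 module stands in here for a MySQL connection (an assumption made for a self-contained example); the batching idea is the same with any driver:

```python
import sqlite3

# sqlite3 stands in for a MySQL connection; the batching idea is identical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, name TEXT)")

rows = [(1, "a"), (2, "b"), (3, "c")]

# One parameterized INSERT submitted with a whole batch of rows,
# rather than one literal SQL string built and parsed per row.
conn.executemany("INSERT INTO target (id, name) VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
```

Because the statement text never changes, the database parses it once and only the bound values travel with each batch.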
ADAPT00832129
ADAPT00843602
ADAPT00847284
When importing Sybase ASE tables containing date or time
data type columns, those columns were not imported into
Data Integrator. This issue has been resolved.
Two functions have been added to this release of Data Integrator: base64_encode and base64_decode. For details on
these functions, see Inxight integration for reading unstructured text on page 38.
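These functions perform standard Base64 encoding and decoding. As a rough illustration of the round trip, Python's base64 module is used here as a stand-in for the Data Integrator functions (an assumption; the actual functions are called inside a data flow):

```python
import base64

# Encode a string to Base64 and decode it back -- the round trip
# that base64_encode and base64_decode provide in a data flow.
original = "unstructured text payload"
encoded = base64.b64encode(original.encode("utf-8")).decode("ascii")
decoded = base64.b64decode(encoded).decode("utf-8")
```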
When joining two different data sources and the function
gen_row_num is used in the Query transform, the row
generation might not generate the correct result. This issue
has been addressed.
ADAPT00847565
ADAPT00852830
ADAPT00853579
ADAPT00853583
When loading to an Oracle partitioned table using the Oracle
bulk loader in API mode with partitioning enabled, the index
of the table is now automatically maintained by Data Integrator within a loading transaction by default. If you specified
not to maintain the index via the OracleBlkldrAPIParallelLoadEnableIndexMaint parameter, your index maintenance
preference will continue to be honored. However, if the
option has not been set, the behavior of this release has
been changed to automatically maintain the index. This new
behavior could have a performance implication; however,
it safeguards the Oracle table indexes from being corrupted.
This release includes the new database type HP Neoview
version 2.2. This support is available on Windows and Linux
platforms only. In future releases, HP Itanium 64-bit
support will also be added.
When logging in to an instance of Data Integrator that has
connections to multiple repositories and each is associated
with a different central repository, the central repositories
did not get automatically activated when logging into the
local repositories. This issue has been addressed.
When installing Data Integrator, if the Management Console
was installed without the Data Integrator Designer component, Auto Documentation did not work properly. This issue
has been addressed.
ADAPT00853591
ADAPT00855478
ADAPT00855856
In this release, Data Integrator has been certified to work
with Data Quality 11.7. This certification is available on all
platforms.
Data Integrator sometimes extracted the data from Oracle
CDC datastore tables in the incorrect sequence. This problem has been resolved.
When two repositories are associated with a Job Server,
the default repository option was not editable from the
Server Manager. This issue has been resolved.
ADAPT00856745
ADAPT00858554
ADAPT00866776
ADAPT00867379
ADAPT00868134
ADAPT00871130
When defining a WHERE clause in a Query transform using
the LIKE operator with a constant such as %MONDAY%, rows
containing only blank characters were incorrectly matched.
This issue has been resolved.
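The corrected behavior follows standard SQL LIKE semantics, which can be sketched by translating the pattern into a regular expression. The helper below is hypothetical, written only to illustrate why a blank-only row must not match %MONDAY%:

```python
import re

def like_match(value: str, pattern: str) -> bool:
    # Build a regex from a SQL LIKE pattern: % matches any run of
    # characters, _ matches exactly one; everything else is literal.
    regex = "".join(
        ".*" if c == "%" else "." if c == "_" else re.escape(c)
        for c in pattern
    )
    return re.fullmatch(regex, value) is not None

# A row containing only blank characters must not match %MONDAY%.
blank_row_matches = like_match("   ", "%MONDAY%")
real_match = like_match("BLUE MONDAY SALE", "%MONDAY%")
```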
When loading to a Microsoft SQLServer table containing a
column of text data type, the Data Integrator job failed. This
issue has been resolved.
In the XML pipeline transform, the UNC path name can now
be used.
The Data Integrator Web Service message was missing the
/localtypes: tag in the response SOAP message body
for batch jobs. This issue has been resolved.
When using the Netezza bulk-loading option on any UNIX
or Linux platform, the bulk loader could fail with a SQL
syntax error if the schema of the target table was large.
Data Integrator generates an INSERT...SELECT SQL
statement as part of the bulk-loading process. If the length
of this statement was larger than 1024 bytes, the generated
SQL might be corrupted with extra spaces at every 1024
bytes. This problem has been resolved.
When using the wildcard character to read multiple files
from the same file reader, Data Integrator failed to read the
files that have a header row only. This problem has been
resolved.
ADAPT00871857
ADAPT00874472
If you checked out an object such as a data flow from the
central repository and one of its dependent objects was already checked out by another user to a different local
repository, the check-out operation of the object failed. This issue has been resolved.
When using Table Comparison with the Sorted input option,
Data Integrator compared the sort ordering of UTF-16
characters incorrectly, which sometimes resulted in incorrect
output. This problem has been resolved.
Known issues
This section lists known issues in ascending order by Business Objects
ADAPT system Problem Report tracking number.
ADAPT00619350
ADAPT00619427
ADAPT00619479
Data Integrator always works in binary collation for the Table
Comparison transform. If the comparison table sort order is
not binary, Data Integrator will generate inconsistent results
between cached, row-by-row, and sorted comparison methods.
For Microsoft SQL Server datastores, if the table owner name
is DBO and the CMS connection is also Microsoft SQL
Server with a login user name of sa, the updated Universe
treats the table schema as new because DBO is compared
against sa as the owner name.
To avoid this issue, do not use Microsoft SQL Server as the
CMS connection with sa as the login name.
Due to a known Informix issue, two-phase commit (XA)
transactions in IBM® Informix® Dynamic Server could cause
Data Integrator to fail when loading data to an IBM Informix
Dynamic Server or profiling data with Informix. The
typical Informix errors are:
Could not position within a table
and
Could not do a physical-order read to fetch
next row
The reference number from the IBM site is Reference#
1172548.
To correctly configure Metadata Integrator for a BusinessObjects Enterprise XI Release 2 CMS instance that is not on
the same computer as Data Integrator, refer to the Business
Objects Customer Support Web site and locate Knowledge
Base article c2019073.
ADAPT00620063
ADAPT00620112
ADAPT00620168
ADAPT00620210
When comparison tables are on Microsoft SQL Server, the
Table Comparison transform's Sorted input option generates
incorrect results. The Data Integrator engine runs in binary
collation while the Microsoft SQL Server database runs in a
Latin1 collation sequence. As a result, Microsoft SQL Server
generates the data in a different sort order. This issue will
be addressed in a later release.
Use the Row-by-row select or Cached comparison table
option in the Table Comparison transform.
When you import a table by name and this table is the target
table of a Table Comparison transform, the table comparison
does not refresh the database changes.
Open the Table Comparison transform to refresh the output
schema.
When adding a very large project to a central repository, a
large memory leak occurs. After adding large projects to a
central repository, restart the Designer.
Due to an issue in Attunity software, when using Attunity
Connect to load into a DB2 table on MVS that contains varchar columns larger than 255, the data inserted into those
columns is always an empty string regardless of what you
actually insert.
ADAPT00620227
ADAPT00620654
ADAPT00620815
When using Attunity CDC, import from browser and import
by name behave differently. For example, when you type
either CDC_DSNX4612:PUBLIC or CDC_DSNX4612 as the
owner name, the same table will be imported into the
repository.
When you import a table by name, always use the full owner
name such as CDC_DSNX4612:PUBLIC and also define the
alias for this full name.
The Import By Name functionality for the Web Services
adapter is not working. The workaround is to use the Metadata
Explorer (browser) to import any metadata from the Web
Services adapter.
Changed-data capture for Oracle 10g R2 does not support
multibyte table or column names due to an Oracle limitation.
ADAPT00620821
ADAPT00621063
ADAPT00644390
ADAPT00655863
ADAPT00658811
ADAPT00661876
Windows NT authentication for Microsoft SQL Server does not
work for the Administrator and Metadata Reports. Use Microsoft SQL Server authentication instead.
When using the Get by label option of the multiuser development interface with a non-existent label, Data Integrator
returns the latest version of the objects.
Auto-correct loading might fail for Microsoft SQL Server,
ODBC, DB2, Informix, Sybase IQ, or Teradata databases if
the data contains NULL termination characters at the end of
varchar data. The auto-correct SQL does not generate
properly. To work around this issue, use the rtrim_blanks_ext
function on columns that you know will contain NULL termination characters to trim the blanks.
This issue also occurs with Sybase and Sybase IQ targets
in regular load mode. Use rtrim_blanks_ext to work
around this issue for those databases as well.
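The workaround trims trailing blanks and NUL terminators before the value reaches the generated auto-correct SQL. A rough Python picture of what rtrim_blanks_ext does to such a value (the exact function semantics are documented in the Data Integrator Reference Guide):

```python
def rtrim_blanks_ext(value: str) -> str:
    # Remove trailing blanks and NUL (0x00) characters -- the bytes
    # that prevent the auto-correct SQL from generating properly.
    return value.rstrip(" \x00")

cleaned = rtrim_blanks_ext("ACME Corp \x00\x00")
```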
Data Integrator Web Services only supports the UTF-8 code
page.
When creating a Data Integrator repository on IBM DB2,
only the UTF-8 code page is supported.
A Data Quality project cannot be imported if the project path
name is longer than 230 characters.
ADAPT00673438
ADAPT00688056
ADAPT00688210
ADAPT00688976
Data Quality Project descriptions are not imported into Data
Integrator Object Library.
Stopping a Data Integrator job will not stop the Data Quality
process.
Data Integrator should not import a project with more than
one socket writer.
If the Web Service Adapter does not work due to an incorrect
keystore path, you must restart the Tomcat server after you
specify the correct keystore path.
ADAPT00695056
ADAPT00733805
ADAPT00736500
The Adapter SDK has the following syntax limitation for DTDs
that can be imported for use as an adapter source or target.
Conditional syntax cannot be included in the DTD file.
Sample syntax that fails:
<!-- Common ebXML header-->
<!ENTITY % INCLUDE_EBHEADER "INCLUDE">
<![ %INCLUDE_EBHEADER; [
<!ENTITY % eb:MessageHeader SYSTEM "ebHeader.dtd">
%eb:MessageHeader;
]]
The following syntax is supported and produces the same
result:
<!ENTITY % eb:MessageHeader SYSTEM "ebHeader.dtd">
%eb:MessageHeader;
This limitation only applies to DTD files read by an adapter,
not those read by the Designer. It is important to note that
the only file the adapter reads is the one configured in the
adapter operation. Adapters do not read embedded DTD
files.
If the input to the Data Quality transform is not mapped and
the output of the transform is mapped, Data Integrator will
hang when the Job is executed. Please ensure that the
proper input mapping is done before running the Job.
Under the following circumstances, the Profiler may stay in
the pending state indefinitely:
•The repository is not connected to a job server.
•The associated job server is not running.
Please check the above before running the Profiler again.
ADAPT00738936
There is a known issue with Data Quality projects that contain
periods (".") in the field names. Although the projects can be
imported from the Data Quality server, Data Integrator cannot
correctly map these fields. The project can be used in a data
flow as long as the fields containing periods are not used. A
workaround for this problem is to rename any Data Quality
project fields to use another character instead of the period.
Business Objects suggests using the underscore character
("_").
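The rename can be applied mechanically before import. The helper below is hypothetical, sketching the suggested substitution for field names that contain periods:

```python
def sanitize_field_name(name: str) -> str:
    # Replace periods, which Data Integrator cannot map, with the
    # suggested underscore character.
    return name.replace(".", "_")

renamed = sanitize_field_name("address.postal.code")
```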
ADAPT00741303
ADAPT00743879
ADAPT00794024
The Message Client sample (client_sample) program in
the AccessServer directory has not been updated for this
release. Please check the Business Objects Knowledge
Base for any updates.
Data Integrator does not correctly handle the NCHAR data
type in an Oracle UTF8 database when the Table Comparison
transform uses the Row-by-row select option and the column
is used as a primary key or compared column. In such a case,
Data Integrator generates an incorrect SQL SELECT statement,
so the table comparison result is not correct.
When using Data Integrator to load to a Teradata V2R6.2
target table via the ODBC driver, you will receive the following
error:
error message for operation <SQLExecute>:
<[NCR][ODBC Teradata][Teradata Database] The source
parcel length does not match data that was defined.
This issue is due to a bug in the Teradata ODBC driver. To
work around this issue, either set the commit size of the
target table to 1, or use one of the bulk-loader methods to
load your data.
Copyright information
This section includes third-party copyright information.
SNMP copyright information
Portions Copyright 1989, 1991, 1992 by Carnegie Mellon University
Portions Derivative Work - 1996, 1998-2000
Portions Copyright 1996, 1998-2000 The Regents of the University of
California
All Rights Reserved
Permission to use, copy, modify and distribute this software and its
documentation for any purpose and without fee is hereby granted, provided
that the above copyright notice appears in all copies and that both that
copyright notice and this permission notice appear in supporting
documentation, and that the name of CMU and The Regents of the University
of California not be used in advertising or publicity pertaining to distribution
of the software without specific written permission.
CMU AND THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS. IN NO EVENT SHALL CMU OR THE REGENTS OF THE
UNIVERSITY OF CALIFORNIA BE LIABLE FOR ANY SPECIAL, INDIRECT
OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM THE LOSS OF USE, DATA OR
PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
Neither the name of the NAI Labs nor the names of its contributors may be
used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
Portions of this code are copyright (c) 2001, Cambridge Broadband Ltd.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
The name of Cambridge Broadband Ltd. may not be used to endorse or
promote products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER ``AS IS''
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Specifications subject to change without notice. Not responsible for errors
or omissions.
IBM ICU copyright information
Copyright (c) 1995-2003 International Business Machines Corporation and
others
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, and/or sell copies of the
Software, and to permit persons to whom the Software is furnished to do so,
provided that the above copyright notice(s) and this permission notice appear
in all copies of the Software and that both the above copyright notice(s) and
this permission notice appear in supporting documentation.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN NO
EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN
THIS NOTICE BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL INDIRECT
OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER
RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN
AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
Except as contained in this notice, the name of a copyright holder shall not
be used in advertising or otherwise to promote the sale, use or other dealings
in this Software without prior written authorization of the copyright holder.