BusinessObjects™ Data Integrator version 6.5.0.1 is the all-platform release of Data Integrator version 6.5. The 6.5.0.1 release also includes new features.
For an overview of new features included in version 6.5.0.1, see the Data Integrator Release Summary. For details about using 6.5 features (released January 30, 2004), see the Data Integrator Technical Manuals.
These release notes contain the following sections:
• Compatibility and limitations update
• Resolved issues
• Known issues
• Migration considerations
• Documentation corrections
• Product end-of-life
• Documentation bill of materials
• Copyright information
To review Data Integrator documentation for previous releases
(including Release Summaries and Release Notes), visit the
documentation page at Business Objects Customer Support
Online.
To access this Web site, you must have a valid user name and password. To obtain them, go to Business Objects Customer Support Online and register.
Business Objects, Americas
3030 Orchard Parkway, San Jose, California 95134, USA
1-408-230-4200 fax: 1-650-230-4201 info@businessobjects.com www.businessobjects.com
Document Number DI-65-1100-002
March 31, 2004
Compatibility and limitations update
This section describes changes in compatibility between Data Integrator and other applications. It also summarizes limitations in this version of Data Integrator.
Data Integrator version 6.5.0.1 adds support for:
• Data Integrator database interface for Teradata V2R5.1 (as a source or target) with Teradata Tools and Utilities version 7.1
• Data Integrator batch interface for PeopleSoft PeopleTools versions 8.1 and 8.4 (in addition to versions 6.0, 7.5, and 8.0)
• Data Integrator batch interface for Siebel 7.5.2
• Data Integrator mainframe interface using DETAIL 5.1 Service Patch 02
• Windows 2003 (check with your database provider for compatible database versions)
• Informix Dynamic Server version 9.4 on UNIX (requires DataDirect Technologies ODBC Connect version 4.2 Service Patch 1)
• Data Integrator now uses Apache Axis 1.1 for its Web services support. Axis 1.1 provides support for WSDL 1.1 and SOAP 1.1, with the exception of the Axis servlet engine.
Data Integrator version 6.5.0.1 does not support:
• AIX 4.3.3
• DataDirect mainframe interface using DETAIL 5.0
• DB2 7.1
• Netscape 6.2x
• Oracle 9.0.1
• Solaris 2.7
• Sybase ASE 11.9
Limitations
These limitations apply to Data Integrator version 6.5.0.1:
• Multiple datastore profiles are not supported with:
- LiveLoad
- Adapters
- J.D. Edwards OneWorld and World
- DETAIL
- PeopleSoft sources on Microsoft SQL Server
• PeopleSoft 8 support is implemented for Oracle only.
Data Integrator jobs that ran against previous versions of
PeopleSoft are not guaranteed to work with PeopleSoft 8.
You must update the jobs to reflect metadata or schema
differences between PeopleSoft 8 and previous versions.
• Stored procedure support is implemented for DB2, Oracle,
MS SQL Server, Sybase, and ODBC only.
• Teradata support is implemented for Windows, Solaris, and AIX. HP-UX is not supported due to limitations in Teradata’s HP-UX product.
• Bulk loading data to DB2 databases running on AS/400 or
MVS systems is not supported.
• Data Integrator Administrator can be used on Microsoft
Internet Explorer version 5.5 SP2 or 6.0 SP1 and Netscape
Navigator version 7.0.2. Earlier browser versions may not
support all of the Administrator functionality.
• On UNIX, Data Integrator version 6.5.0.1 requires JDK 1.3 or a higher compatible version.
• DETAIL support of NRDB2 is no longer available for new
datastores. While existing NRDB2 datastores and tables are
still supported, new NRDB2 datastores cannot be created.
It is recommended that existing NRDB2 jobs be migrated to
NRDB, which provides the same functionality. To do this,
create an NRDB datastore and import the tables you have
been accessing using the NRDB2 datastore. Add new sources
to your current NRDB2 jobs and delete the sources from
those jobs that use the old NRDB2 datastore.
• Data Integrator’s View Data feature is not supported for JD Edwards or SAP R/3 IDocs. For SAP R/3 and PeopleSoft, the Table Profile tab and Column Profile tab options are not supported for hierarchies.
• The LiveLoad feature no longer includes the ability to save
data from a master database to then be applied (reconciled)
to a mirror database. Only the switching capability from
"live" to "load" is supported. Detecting and applying
differences in content between master and mirror databases
is no longer supported. To obtain the same results achieved
in previous releases, load each database starting from the last
time it was loaded. For example, with LiveLoad databases that are loaded and switched daily, every load would move two days’ worth of data in each run.
• Data Integrator’s internal Diff utility (used to compare local
repositories with the central repository) is being re-designed
to address some open issues.
• Support for the LOOKUP_EXT function is provided for the following sources:
- Detail DB2
- Detail NRDB
- IDOC
- JDE
- PeopleSoft
- SAP BW
- SAP R/3
- Siebel
- XML
• DB2 Java Library limitations
The Web administrator will not work with a DB2 repository under any of the following conditions:
- the db2java library is incompatible with the DB2 client (for example, DB2 Client is version 8.1 and the db2java library is version 7.2), or
- the db2java library is not generated for the JDBC 2 driver, or
- the java class path variable is not properly set to the corresponding java library, or
- the DB2 JDBC shared library (e.g. libdb2jdbc.so on AIX) is not compatible with the Java JDBC 2 driver
Under these conditions, you will see the following behavior:
- The Administrator stops working or crashes after you configure the DB2 repository. Find any error and warning messages related to the DB2 repository configuration in the log file.
- When testing your DB2 connection from Administration > Repository > Specify DB2 database type and clicking [Test], the following errors appear:
BODI-3016409: fail to connect to repository.
BODI-3013014: didn't load DB2 database driver.
In this release, the db2java.zip JDBC 2 driver versions for DB2 7.2 and DB2 8.1 are provided. On Windows, these libraries can be found in $LINK_DIR/ext/lib:
- db2java.zip (used by default) is the DB2 7.2 version
- db2java_8.1.zip is the DB2 8.1 version
By default, the version 7.2 db2java.zip is used. If you run with a DB2 8.1 Client, you must:
- replace db2java.zip with the version 8.1 library
- make sure the compatible DB2 JDBC shared library is in the shared library path
- restart the web server service on Windows or restart the Job Service on UNIX
If you run with other DB2 Client versions, you must obtain
the corresponding java library with the JDBC 2 driver. Please
refer to IBM DB2 documentation for how to obtain the
correct java library.
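For example, a minimal sketch of the driver swap on Windows, assuming a default installation where the LINK_DIR environment variable points to the Data Integrator directory (the backup file name is only illustrative):
rem Switch the bundled JDBC driver to the DB2 8.1 version
cd %LINK_DIR%\ext\lib
copy db2java.zip db2java_72_backup.zip
copy db2java_8.1.zip db2java.zip
rem Then restart the web server service so the new library is loaded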
Resolved issues
These notes list resolved issues in descending order by Business
Objects tracking number.
NOTE: For information on issues resolved in prior releases, refer
to the documentation at Business Objects Customer Support
Online.
27919 If a variable of type datetime or time was used in the Post Load Commands tab of a Teradata target table editor, Data Integrator printed out 9 (default) digits for the sub-second value. Teradata, however, can only handle 6 digits. With this fix, Data Integrator writes out 6 digits for the sub-second value.
27906 The Designer will now validate the required fields in the Bulk Loader Options tab for a Teradata target in the target table editor.
27904 The bigint data type is now mapped to double instead of int in Data Integrator.
27883 For Teradata sources, when using a pushdown_sql() function in
the WHERE clause of a query, an access violation occurred. This
issue is fixed.
27843 Designer crashed with an unknown exception when an
unrecognized column data type was part of an index, primary, or
foreign key. This issue is fixed.
27841 Data Integrator does not support binary data types for a
Microsoft SQL Server database. If a table has a column with a
binary data type, Data Integrator omits the column during
import. In previous releases, if a table had a binary data type for
a column that was also in an index, the import failed with an
Unknown exception. In this release, the column will be omitted.
27830 In the monitor log editor, sorting was done by character rather
than numeric order. This issue is fixed.
27820 Data Integrator generated the wrong SQL for an outer join if its
WHERE clause contained Data Integrator functions (for example
LOOKUP). This issue is fixed.
27713 See the Documentation corrections section under “Supplement for J.D. Edwards” on page 74.
27678 If you created a central repository but entered connection
information for a current local repository, you would see a
warning message and a bit later the Designer would crash. This
problem has been fixed.
27674 When using Teradata bulk loading, if a Data Integrator date column was mapped to a Teradata table’s timestamp column, the output data file was different from that produced by a Data Integrator datetime data type column mapped to a Teradata timestamp column. This inconsistent behavior caused issues when creating load scripts. This problem has been fixed.
27646 An incorrect SQL statement was pushed down to the database when tables came from different datastores if a data flow included a join between tables and non-table sources (for example, a query or merge transform). In this scenario, Data Integrator should not have pushed down operations because more than one datastore was used. This issue is fixed.
27645 Not all rows were inserted into a Sybase database when some of
the batch loads had errors. This issue is fixed.
27590 See 27503.
27589 When there were joins using columns from MSSQL, DB2,
Informix, or ODBC source tables, access violations occurred if
the source tables resided in different datastores. This issue is
fixed.
27566 When a schedule was deleted from the Administrator, Data
Integrator failed to delete its job launcher .bat file as well. This
issue is fixed.
27562 When loading to a Teradata table, Data Integrator acted as if a
job completed successfully even if an error occurred. With this
fix, the job terminates with an error if an error occurs.
27543 When the Data Integrator engine processed a comma as a
decimal separator and it was binding the decimal columns using
an ODBC / MS-SQL connection, it failed with an "Invalid
character value for cast specification" error. This problem was
more prevalent when running the Table Comparison transform
and a LOOKUP function. This issue is fixed.
27509 Data Integrator raised an error when validating Teradata tables if one of the following fields was not specified for the Teradata bulk loader:
• For the Load Utility — Command Line and Data File Name
• For Warehouse builder — Log Directory
This issue is fixed.
27503 Previously, if a job contained an aggregate function without a
group by, the engine crashed with an access violation. This issue
is fixed.
27491 At startup, the AL_Jobservice core dumped when starting an
SNMP agent. This issue is fixed.
27487 When using the load utility for Teradata bulk loading, if you did
not specify directory information for the data file using the target
editor or the datastore editor, Data Integrator created the data
file in LINK_DIR\bin instead of the default
LINK_DIR\Log\BulkLoad directory. This issue is fixed.
27484 See 26221.
27475 Data Integrator did not generate an ABAP expression dynamically. If you switched from SAP version 4.7 to 4.6 using paths you specified in the Datastore editor, Data Integrator generated the ABAP for 4.7 rather than 4.6. This issue is fixed.
27436 A job execution under heavy stress-testing would randomly
generate an access violation error related to variable
initialization in internal Data Integrator processes. This is fixed.
27432 The Designer’s context-sensitive help now covers Data Integrator version 6.5.
27412 Multi-user labeling issues have been addressed in this release.
27407 When using the Check out objects without replacement option
of a central repository, objects became defunct if they did not
exist in the local repository. This issue is fixed.
27399 In the object library, an object’s where used count would not
display correctly after import or multi-user (central repository)
operations. This issue is fixed.
27394 When reading from an SAP R/3 file format in an SAP R/3 data
flow, an ABAP syntax error was raised. This issue is fixed.
27383 When a job failed, the error and trace logs could be accessed in
the Administrator when it was running on UNIX, but the
monitor log could not be opened. This issue is fixed.
27381 Previously the Administrator would allocate four persistent connections to a repository when each user logged in. This could consume significant resources when there were multiple users and potentially slow down the system.
Beginning with this release, the default number of Administrator repository connections per user (four) can be overridden at Administrator startup. To override the default value, before starting the Administrator:
• For Windows, modify the wrapper.cmd_line section of the configuration file in LINK_DIR/ext/WebServer/conf by adding -DCNX_POOL_LIMIT:
wrapper.cmd_line=$(wrapper.javabin)
-Dcatalina.home=$(wrapper.tomcat_home)
-DLINK_DIR=$(ACTAHOME) -DCNX_POOL_LIMIT=1
-classpath $(wrapper.class_path)
• For UNIX, modify the catalina.sh script found in LINK_DIR/ext/WebServer/bin by adding -DCNX_POOL_LIMIT=1 in the 'start' section (not the security section) as follows:
if [ "$1" = "start" ] ; then
echo "Using Security Manager"
else
"$_RUNJAVA" $JAVA_OPTS $CATALINA_OPTS \
-DCNX_POOL_LIMIT="1" \
-Djava.endorsed.dirs="$JAVA_ENDORSED_DIRS"
-classpath "$CLASSPATH" \
27358See 27110.
if [ "$1" = "-security" ] ; then
...
27345 You may now launch Metadata Reports from the Windows Start
menu.
27322 If in and out parameters were not used in the return values of a
custom function call, then they were not updated in a transform.
This problem is fixed.
27304 In previous releases, a job execution would continue after an
exception was caught within a complex job (a job with multiple
levels of parallel and serial work flows and data flows). With this
fix, the job execution will stop.
27301 An access violation occurred when loading more than 200 XML
records from a database using an adapter, which called a
real-time job. This issue is fixed.
27299 See 27272.
27290 When a Data Integrator job with a varchar column in its
Oracle target table was run by a Job Server on Solaris, the job
occasionally failed with one of the following errors:
• ORA-24307: invalid length for piece
• ORA-03127: no new operations allowed until the active operation ends.
This problem has been fixed.
27285 When using Sybase 12.5 on Solaris, Data Integrator was unable to connect to a repository due to a library load error. This issue is fixed.
27272 In previous releases, an access violation would occur if a script function call included uninitialized global variables. This issue is fixed.
27256 When PeopleSoft version 8.x ran on an Oracle 9.x installation,
there were issues with non-English locales. This problem has
been fixed.
27233 This fix provides a code-level workaround to a deadlock issue with Microsoft kernels spawning parallel flows.
27165 Jobs with complex data flows and multiple joins could cause an
access violation. This issue is fixed.
27162 In this release, a new option is available for all database targets in the Update control section of the Options tab of the target table editor. If you select the Update Key Columns option, Data Integrator updates key column values when it loads data to the target.
27124 When loading to Informix, jobs could terminate if the Number of loaders specified in the Options tab of the target table editor was more than 1. This issue is fixed.
27110 Global variables become parameters to embedded data flows when Validation > Display Language is executed on a data flow containing an embedded data flow. In previous releases, this generated incorrect language when you saved the data flow to the local repository. This issue is fixed.
27107 Informix is now supported on UNIX with DataDirect ODBC connectivity.
27084 Jobs would fail when run using the View Data feature, but would
succeed when run without it. This bug is related to an issue
where the same data flow was called twice in a job but ran in
single-process mode. This is fixed in 6.5 and cannot be back
ported.
27082 The LOOKUP function in a nested (NRDM) structure produced
unpredictable results. This issue is fixed.
27081 Executing jobs that included a large number of global variables
without default values caused the Job Server to fail after a few
executions. This issue is fixed.
27077 In version 6.1 of Data Integrator, there was a problem executing
a .bat file using the built-in EXEC() function when running on
Windows NT. On Windows 2000 and Windows XP a command
interpreter was automatically spawned to interpret the .bat
script, while on Windows NT this did not occur. With this fix,
Data Integrator explicitly spawns the command interpreter when
executing a .bat file on all Windows platforms.
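To illustrate, a hypothetical script call of the kind affected; the path, variable name, and flag value shown here are assumptions for illustration only (see EXEC in the Functions chapter of the Reference Guide for the actual argument and flag semantics):
# Hypothetical example: run a batch file from a Data Integrator script
$G_Result = exec('C:\scripts\nightly_extract.bat', '', 8);
print('EXEC returned: ' || $G_Result);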
27071 On Solaris, a simple data flow involving updating the rows in an
Oracle table was crashing due to a timing problem. This issue is
fixed.
27065 A job with a Hierarchy_Flattening transform failed with an
access violation. This issue is fixed.
27062 Entries in the AL_SETOPTIONS table are no longer truncated at
255 characters.
27056 In previous releases, if the Enable parallel execution option of an adapter function was selected, Data Integrator would not use parallel execution for queries that called this function. This issue is fixed. The parallel execution is controlled by the Degree of Parallelism value for the data flow.
27041 The Web Services Adapter could not load metadata from a UNIX
Job Server because the adapter could not find the Job Server.
This issue is fixed.
27032 An IDoc could not be imported from an SAP R/3 4.0B system. With this fix, Data Integrator will try to import an IDoc using the SAP IDOCTYPE_READ_COMPLETE function. If this function is not available, then Data Integrator will use the standard SAP R/3 3.x approach to import the IDoc.
27030 On a work flow’s Properties page, the Recovery as a unit option
randomly became deselected if you saved or closed the Designer.
This issue is fixed.
27016 If local variables were passed as arguments to input or output
parameters of a script function, the input and output parameters
would not retain their values during consecutive calls to the
function. If multiple global variables were passed as arguments,
only some of the input and output parameters would retain their
values during consecutive calls to the function. These issues are
fixed.
27013 The View Data feature did not work properly for SAP R/3 version
3.1H for hierarchies. This issue is fixed.
27004 View Data for SAP hierarchies gave an error message "SAP R/3
table reader error in srcDB_getnext." This issue is fixed.
27003 When the Hierarchy_Flattening transform used horizontal flattening, Data Integrator ignored standalone nodes; no output rows were produced for these nodes.
There are three kinds of standalone nodes:
• Parent without child
• Child without parent
• Parent and child are the same. In this case, the input row contains the same value in both the child and parent id columns.
With the fix, Data Integrator produces output rows for standalone nodes: both id columns of the output row contain the node id, the depth is 0, the attributes are obtained from the input rows, and the rest of the columns are NULL.
26926 Subsequent RFC exception fields retained the previously returned value when they should have been reset. This fix resets the exception field before each read.
26898 While running parallel bulk loaders, a core dump occurred. This
problem is fixed.
26877 The Export option did not change the owner of the translation
table of the LOOKUP function. This problem is fixed.
26833 Copying multiple lines from the Log window to the clipboard
would cause the Designer to crash with an unknown exception.
This problem has been fixed.
26830 In previous releases, when you put the same work flow in a job more than once, marked it to Execute only once, and it took a given variable value, the second time the work flow was processed (although it did not run) it passed the same variable value that it took when it ran the first time, regardless of whether other objects before the second work flow set a different variable value. In this release, after the first Execute only once work flow runs, all copies of it in the job do not run and do not affect variable values in the remainder of the job.
26828 When a global variable was passed to a function, the calculate
column mappings operation failed with the following error even
if the job did not have any validation errors: “Error calculating
column mappings for data flow <colmap1>. Please validate the
data flow, fix any errors, and retry the column mapping. If the
problem persists, please notify Customer Support.” This issue is
fixed.
26791 In this release the Designer and Job Server listen on all the available network addresses. In other words, if you install the Designer or Job Server on a computer with multiple network cards, they listen on all available network addresses. This helps the Designer connect to any Job Server over a VPN (with a firewall). The Designer can also listen for notification messages sent by a Job Server.
When the Designer was behind a firewall, a communication interruption occurred when communication with the Job Server was set for a range of ports. This issue is fixed.
26772 In a Query transform, depending on the optimized execution
plan, Data Integrator sometimes incorrectly handled a WHERE
clause involving more than two tables when a column from one
table was joined with multiple columns from the other tables.
An example of this type of query is:
SELECT ... FROM t1, t2, t3 WHERE t1.c1 = t2.c1
AND t1.c1 = t3.c1
Where t1.c1 was joined with t2.c1 and t3.c1. This issue is fixed.
26759 In a data flow, if you connected a template table directly to an
Oracle Applications table, metadata (in the repository) could be
different from the physical schema. As a result, if you attempted
to import a loaded table the Designer crashed. This issue is fixed.
26741 Table owner names were not always changed as specified when
exporting from one repository to another. This issue is fixed.
26719 When configuring a new Job Server to a DB2 repository using
svrcfg on AIX with Data Integrator versions 6.0.0.12 to 6.0.0.18,
svrcfg would return the error message, CON-120103: System
call <dlopen> to load and initialize functions
failed for <libdb2.a>. Please make sure the
shared library is installed and located
correctly. This issue is fixed.
26712 When performing Calculate Usage Dependencies for an MS SQL
Server repository, Data Integrator assumed the owner of the
internal AL_USAGE table was dbo. With this fix, new MS SQL
Server repositories will work regardless of the owner name used
by AL_USAGE. For this fix to work with existing repositories,
you must:
1. Open $LINK_DIR\Admin\Repo\temp_caldep.atl.
2. Replace 'dbo' with the correct user name.
26697 When you logged out of the Administrator, all Web services
would shut down. They did not automatically restart when you
logged back in. This fix prevents Web services from shutting
down when you log out of the Administrator.
26693 The multi-user operation Get latest version of a job and its dependents would result in all objects in the job being in a defunct state. This problem is fixed.
26691 After loading a template table in a data flow and importing it
into Oracle, the local object library did not reflect all tables in
the datastore without shutting down and restarting the Designer.
This issue is fixed.
26638 Data Integrator failed to handle a Case transform that contained
only one case. This issue is fixed.
26618 Data Integrator failed with an access violation when optimizing
a job with a Query transform joining sources from different
datastores if the join contained the SUBSTR() function as part of
the WHERE clause. This issue is fixed.
26596 Previously Data Integrator could not handle values larger than Decimal(28, 7). This issue is fixed for Oracle. Using the Designer, you can select Use overflow file from the source table editor and enter a name for the file. Errors that occur while reading data are logged into the overflow file and the job execution proceeds while ignoring the rows that cause the error. To set the location of the overflow file directory, use the table’s Datastore Editor.
26592 Previously the Job Server was recognized only by its system name rather than its fully qualified server name. This meant that the Job Server was inaccessible if it was located in a different domain from the Designer. This limitation is fixed.
To use the fully qualified name, follow these steps (a sketch of the first two appears after this list):
1. Add "UseDomainName=TRUE" under the [Repository] section in the configuration file (DSConfig.txt). See also “Changing Job Server options” on page 330 of the Data Integrator Designer Guide.
2. Truncate the al_machine_info table in the repository.
3. Reconfigure the Job Server and Access Server by using the
Server Manager utility after installing ActaWorks (Data
Integrator) version 5.2.1.23 or higher.
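A minimal sketch of steps 1 and 2, assuming DSConfig.txt already contains a [Repository] section and that you can run SQL against the repository database:
# In DSConfig.txt, under the existing [Repository] section:
[Repository]
UseDomainName=TRUE
# Then, against the repository database (a plain DELETE also empties the table if you lack truncate privileges):
TRUNCATE TABLE al_machine_info;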
26591 A variable value was not updated if the variable was passed to
either a script function or to a database’s stored procedure as an
input or output parameter of a new function call. This problem
has been fixed.
26570 On UNIX a job could not be scheduled for multiple days if the job produced a crontab entry with more than 1000 characters. In this release, the format for cron entries addresses this issue. During installation, the existing cron entries for the installation user ID will be automatically updated into the new format, and the Administrator will process the scheduled jobs correctly.
If the conversion fails due to a lack of system resources (disk space or a file permission issue), resolve the resource problem and re-format the cron entries using the upgradeCron utility, as shown below:
1. Source the al_env.sh script
2. Run $LINK_DIR/bin/upgradeCron
Any errors are displayed on the console.
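For example, in a Bourne-compatible shell with LINK_DIR already set in the environment:
# Load the Data Integrator environment, then re-format the cron entries
. $LINK_DIR/bin/al_env.sh
$LINK_DIR/bin/upgradeCron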
26542 The bulk load control file was overwritten when a LiveLoad operation switched databases, which caused the bulk load to fail. This fix prevents subsequent bulk load control files from being overwritten.
26514 Using the Unnest command in a Query transform for an output
schema column with the data type decimal could result in the
rounding off of the decimal value. This problem has been fixed.
26507 Previously, when you tried to label an object with dependents in
the central object library, you would sometimes get an error such
as "Cannot load file source...". This issue is fixed.
26504 The Merge transform did not create a new schema when one of
the inputs was an embedded data flow. This issue is fixed.
26495 The Designer can now connect across a VPN and get job
execution notifications. In addition, the Designer now receives
job execution notifications through all network cards on the
computer on which it is installed.
26488 Web services support has been enhanced. For migration issues
see “Web services support” on page 55. For documentation see
“Enhanced Web Services support” on page 5 of the Data
Integrator Release Summary.
26486 The Designer would occasionally crash while processing job
logs. This issue is fixed.
26482 AL_JobService on AIX 5.2 hung or crashed after starting a Job
Server. This problem is fixed.
26470 When more than one user was working with a central repository, if one user checked in an object that contained new objects (such as a data flow) while another user attempted to check out the same object, defunct objects could result in the local repository. This issue is fixed.
26467 Data Integrator would fail to load into a Sybase table when it
had a column name length of 29 or more characters. This issue is
fixed.
26462 An Oracle stored procedure (without a return value) could be
shown as a stored function (with a return value) resulting in a
run-time Oracle error. This problem has been fixed.
26459 Checking out an object and its dependents without replacement
from a central repository (when the object did not exist in a local
repository) would result in a defunct object. This issue is fixed.
26453 On the target editor for SQL Server the Use overflow files option
did not write bad rows to the file. This issue is fixed.
26449 The Overflow file name field was enabled on the Options tab of the Oracle target table editor when the Bulk load option was set to File or API. With this fix, the Overflow file name field is now disabled if File or API is selected for bulk loading on the Bulk Loading Options tab.
26420 Executions hung for real-time jobs if Parallel process thread in
the File Format editor was set to greater than zero for a file
source. This issue is fixed.
26416 If you chose to use the Delete data from table before loading option on the Oracle target table editor, the Data Integrator engine would first generate a truncate statement to clear the table. If the Oracle datastore did not have truncate privileges, Data Integrator would display an error message and then issue a DELETE statement to clear the table. With this fix, Data Integrator displays a warning message, rather than an error message, if a truncation fails due to insufficient privileges.
26395 In the Central Object Library window, if you attempted to switch
the repository using the drop-down list, Data Integrator returned
an error message indicating a logon exception. You were forced
to restart the Designer. This issue is fixed.
26388 With this fix, all Data Integrator server components will listen on
all the NIC cards attached to the computer they are running on.
With the previous default behavior, the server components were
listening only on one randomly chosen IP address.
26382 Web services with TargetNamespaces declared did not have
the namespace provided on the message passed to Data
Integrator. Data Integrator rejected the message because of this
namespace conflict. With this fix, the namespace is passed into
Data Integrator messages.
26381 See 26382.
26350 If you selected a Case transform with all its outputs and tried to make an embedded data flow, you would get an error. Additionally, if you selected an object that was directly connected to the output of a Case transform, opened and then canceled the Create Embedded Data Flow window, saved the data flow, shut down the Designer and restarted it, then the connection between the Case transform and its output would appear broken. Both of these issues have been fixed.
26349 If a data flow had more than one target with the same name (datastore.owner.table), and one target failed due to an error (for example, a primary key violation), and that target had the Use overflow file target table editor option selected, the whole job failed. This issue is fixed.
26329 The Export operation did not include datastore information if
the job only contained a SQL transform. This issue is fixed.
26321 See 25784.
26311 Prior to 6.1.0.4, if you exported to an .atl, all the objects would be exported from the central repository if both a local and a central object library were open. This issue is fixed. Now Data Integrator exports only from the local object library even if both a central and a local object library are open.
26283 An internal server error was displayed if you tried to log in to the
Administrator with a DB2 7.2 repository configured
immediately after starting the Access Server. This issue is fixed.
26264 When a varchar column was mapped to a varchar column of a smaller size in a fixed-width (positional) file target, the engine would throw a system exception. This issue is fixed.
26254 A job with several LOOKUP_EXT functions in column mappings
failed with an access violation. This issue is fixed.
26222 The Designer crashed when you exported a table that had only
one attribute in the repository. This issue is fixed.
26221 The Designer crashed if the repository contained inconsistent
column and primary key information about a table. This
problem is fixed.
26185 The Validation > Display Optimized SQL command failed for an
embedded data flow. This issue is fixed.
26148 Process dumping using the stack trace option did not work if the USERDUMP_UTILITY_PATH or USERDUMP_TARGET_PATH contained spaces. This issue is fixed.
26146 The Data Integrator engine did not recognize row delimiters
when columns were missing. This issue is fixed.
26145 Sybase money data was rounded to two decimal places instead
of four. This issue is fixed.
26144 On UNIX the Job Launcher did not successfully run the job.
Either the process would hang or the job would finish
immediately without doing anything. This issue is fixed.
26143 On the Datastore Profile Editor, if you clicked the Password field
then entered a password, the password was not saved correctly.
This issue is fixed.
26142 Previously HP-UX would issue an insufficient memory warning
and terminate a job even though sufficient memory was
available. This issue is fixed.
26123 Data Integrator failed to correctly import table metadata from an
Oracle 7.3 server. This has been fixed.
26122 Previously you could not modify the FROM clause for an NRDM
schema. With this fix you can populate the FROM clause and
then map the elements you wish.
26121 Previously you could not call Data Integrator as a Web service
from .NET. Now you can. The WSDL is enhanced in version
6.5.0.1.
26112 The IDoc loader raised an access violation at run time for an
incompatible schema. This issue is fixed.
26107 In this release, you can replicate a custom function.
26103 During the execution of a data flow, an access violation
randomly occurred when an error was encountered. This issue is
fixed.
26102 OR predicates involving more than one table were not included
in any SELECT statement where a FROM clause contained more
than one table. This caused less efficient ABAP programs to be
generated when Data Integrator was using the SAP R/3 interface.
This problem is fixed.
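To illustrate, a hypothetical example of the kind of cross-table OR predicate affected (the table and column names are invented):
SELECT ... FROM t1, t2
WHERE t1.c1 = t2.c1
AND (t1.c2 = 'A' OR t2.c3 = 'B')
Before the fix, the OR predicate spanning t1 and t2 was omitted from the generated SELECT, presumably leaving more rows to be transferred and filtered later.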
26098 Data Integrator did not extract varchar2 data type columns
correctly from an Oracle 7.3 server. This has been fixed.
26065 A job with a WHILE loop in a custom function would return
incorrect results when run in View Data mode. This has been
fixed.
26062 When the engine is running in multi-process mode, the parent
process passes global variables, along with other information to
the child processes. If the information size was greater than
15,000 bytes, the engine would throw an error. With this fix, the
engine will accommodate any message size.
26060 When using the Oracle Applications interface, after you selected the Open datastore command, the Designer did not display the external tables if the language setting of the datastore did not match the operating system language setting of the computer on which the Designer was installed. This problem is fixed.
26055 When using an adapter that requires more memory than is
available in the Java environment, the Data Integrator Adapter
SDK did not properly process the OutOfMemory exception.
This error is fixed. Now, when the OutOfMemory exception is
thrown, the Adapter SDK stops the erroneous operation and
sends an error reply to the engine.
NOTE: An OutOfMemory exception is handled differently if it occurs in the Java communication layer than when an adapter receives messages from the engine. In this situation, the adapter throws an error message and stops while the engine that sent the message times out. The timeout value is controlled by the AdapterDataExchangeTimeout property of the <int> section of the DSConfig.txt file. The default is 10800000 msec. If the OutOfMemory exception occurs, go to the adapter instance configuration screen in the Administrator and increase your Java launcher maximum memory parameter -Xmx. Recommended default parameters are: -Xmx64m -Xmx256m. The minimum memory size required is -Xmx4m.
26039 When there is an error in a mapping expression, clicking Go To on the error message will now take you to the right place.
26025 When loading a table using an RFC call, not all return codes
were passed on. This issue is fixed.
25991 See 25798.
25954 Code pages with user-defined characters were not previously
supported. In this release, code pages with user-defined
characters are supported by the Data Integrator engine.
25946 When executing a job using the execute_preloaded option for an
SAP R/3 datastore, the job class determination code was
incorrect. This issue is resolved.
25930 In this release, you can activate the stack trace option, which creates a dump file when an unknown exception occurs in the Designer. The dump file can be used for debugging. The resulting dump file is named BOBJ.DI.DMP followed by the date, time, and process ID; the file extension is DMP.
25903 The PRINT function in Chapter 6 of the Data Integrator Reference Guide has been updated as follows:
RETURN VALUE: int. The value is 0 when the string contains valid data; the value is NULL and no string prints when the string contains NULL data.
25902 When an integer overflow occurred, Data Integrator would fail
with an access violation. This has been fixed.
25881 When refreshing the object library with a file import operation,
garbage characters would sometimes be seen in the description
columns of the datastore view (or other tree views). This only
occurred when starting with a clean repository followed by
importing a large .atl file with the datastore (or other tree view
pane) selected. This issue is fixed.
25846 Web service calls to .NET services failed because the namespace
on messages received from the service did not match the
namespace defined during metadata import. This issue is fixed.
25813 In some cases hierarchical extraction from SAP R/3 returned
incorrect results. This has been fixed.
25799 When doing a table comparison, if the Oracle target table did
not exist the job would hang. This problem has been fixed.
25798 Jobs with a Case transform as part of a self-join failed with an
access violation during optimization. This problem has been
resolved.
25788 While running an SAP R/3 job in FTP transfer mode, there was no "FTP Completed ..." trace message displayed even if the FTP completed successfully. With this fix, the trace message appears.
25785 When running a job with Capture data for viewing selected in
the Execution Properties window, the job would randomly stop
with an error stating Cannot start thread. This problem is
fixed.
25784 In Chapter 20 of the Designer Guide, the section Undoing check-out states that when you undo the check-out of an object, the local copy of that object is changed to match the object in the central repository. This behavior has been changed in this release. When you undo the check-out of an object, the local version is kept. If you want the local object to be an exact copy of the central object, perform a Get latest operation on that object.
25741 When the Data Integrator engine was using a multi-byte code page setting for either a Job Server or datastore and processing a job with many local or global variables, an access violation would randomly occur. This issue is fixed.
25727 Oracle stored procedures were handled as functions within Data
Integrator. This caused jobs using Oracle stored procedures to
fail. This problem is fixed.
25703 On a Windows Job Server, when EXEC() returned more than 1020 characters (the maximum number of characters returned by this function), Data Integrator failed. This issue is fixed.
25688 Central repository check-in did not detect schema changes and
could corrupt the objects being checked in. This issue is fixed.
25660 See 25653.
25658 When you replicated a job in the Designer with the Perform complete validation before job execution option selected from Tools > Options > Designer > General, an identical set of parameters was created (such as a second set of defined global variables) in some data flows. This issue is fixed.
25656 An access violation was raised in a job execution when a data flow’s source or target tables referenced a non-UTF8 datastore while a script in the same job referenced a UTF8 datastore or a long varchar global variable (such as 255 characters). This issue is fixed in this release.
25655 The Designer and Administrator could not launch a job which
contained more than 32 global variables. This issue is fixed.
25653 IDoc processing failed with the following error on HP-UX because .ido files were being created on the database by default during a run:
ERROR: Failure reading metadata file MATMAS03__46B.ido: it was generated using incompatible SAP library product (fields) Please delete all *.ido files and try again (IDoc /)
This issue is fixed.
25598 The overflow file for an MS SQL Server target is now written with
proper data in the case of loader failure.
25574 Data Integrator processes IDocs faster than in previous releases.
25557 Loading a partitioned table on Microsoft SQL Server could fail
with a connection error on a multi-processor system. This issue
is fixed.
25556 The engine’s data flow clean-up operation would wait five
seconds prior to execution. This fix removes the five second
sleep.
25485 An issue in the multi-user feature might cause the GUIDs in the repository of Data Integrator versions 5.2, 6.0, and 6.1 to become corrupted. In version 6.1.0.1 we fixed this problem and also added a utility to correct existing errors. In this release, Data Integrator provides the fixupguid65.exe utility in its $LINK_DIR/bin directory.
Use this utility to correct a corrupted repository. This utility is available on Windows platforms only. The utility is also in the Utilities directory on the CD.
The command line arguments are:
fixupguidxx.exe -U<user_name> -S<server>
-P<password> -D<db_type> [-N<db_name>]
Where <db_type> could be one of the following choices:
• Microsoft_SQL_Server
• Oracle
• Sybase
• Informix
• DB2
NOTE: The last option (-N) is required for Microsoft SQL Server
and Sybase.
When the utility executes, it prints to standard out and also to a log file. The log file is called fixupguid.log and is located in the $LINK_DIR/log directory. The log contains information about objects with corrupted GUIDs and also lists the corrective action the utility is performing. All error messages are logged in the $LINK_DIR/log/errorlog.txt file.
The utility performs the repair as one transaction. This utility can
be run multiple times against the same repository.
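For example, a hypothetical invocation against an Oracle repository (the user name, server, and password are placeholders):
fixupguid65.exe -Udi_repo -Sorasrv1 -Psecret -DOracle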
25483 When using the central repository’s Check in with filtering option, the newly added table object was not shown in the version control confirm window. This issue is fixed.
25479 Using the Column Properties window in Designer, you can set a column's Column_Usage value to Detail and set the Associated_Dimension attribute to a table.column name to associate a column to a dimension. However, the Create and Update commands under Tools > Business Objects Universes did not pick up objects accessed as a value for the Column_Usage attribute when using the Universal Metadata Bridge version 1.1.1. In this release, which includes Universal Metadata Bridge version 6.1.0.1, Detail columns are displayed in the bridge and in the universe.
25433 A real-time job on a multi-processor system that processes an XML message and then routes it to several subsequent jobs could create a corrupted output XML message. Prior to this release, you had to recycle the real-time service for each request to avoid this message corruption. With this fix, the output XML message is no longer corrupted, and you no longer need to recycle the real-time service for each request.
25426 Data Integrator writes data in its default datetime format to a flat
file instead of using the file’s format when there is a mismatch in
the number of input and output columns. With this fix, if a file
format is defined, it will be used.
25369 A job containing a LOOKUP_EXT function would intermittently
fail during optimization. This issue is fixed.
25303 The Reference Guide did not explain how Data Integrator treats empty strings during assignment and how it treats empty strings and NULLs in expressions. A section called Empty strings and NULLs has been added to the Scripting chapter of the Reference Guide.
25275 In some situations, the Auto-correct load operation for an MS
SQL Server target dropped updated or inserted rows. This issue is
fixed.
25195 An XML message with a NULL element value caused an access
violation in the engine. With this fix, the NULL value is set with
the proper encoding so that the message is successfully
processed.
25112 Upgrading the repositories from ActaWorks 5.1 to Data
Integrator 6.x failed when executing:
ALTER TABLE AL_COLUMN ADD (PRIMARY KEY
(TABLEKEY, POSITION) )
This problem has been fixed.
25096 For a real-time job with an MS SQL Server 2000 target table, Data
Integrator would lock the SQL Server SYSINDEXES table in
TempDB. With this fix, the SYSINDEXES table is no longer
locked.
25063 In this release, the Oracle target table editor uses a defined commit size for cursor-based INSERT...SELECT statements. Define the commit size on the Options tab using the Rows per commit option. This option may be set to a positive number or zero. The default value is 0, which means that the INSERT...SELECT statement is committed at the end of the entire statement.
NOTE: The layout of the target table editor is rearranged in this release.
25047 The datastore Search command did not search the repository or
search for objects other than tables. This issue is fixed.
25004 The Reference Guide now has additional information about parameters and data flows.
• The Functions chapter on page 329 under Operation of a function now reads, “Parameters can be output by a work flow but not by a data flow.”
• On page 336 under To create a custom function, step 8 (define parameter properties by choosing a data type and a parameter type of Input, Output, or Input/Output) now reads, “Data Integrator data flows cannot pass variable parameters of type Output or Input/Output.”
• The Variables chapter on page 292 under Parameters now reads, “Parameters can be defined to:
- Pass their values into and out of work flows
- Pass their values into data flows”
24981 When using an MS SQL Server database, if a datetime column
was used in a WHERE clause, the SQL generated by the Data
Integrator engine failed to UPDATE, SELECT or DELETE all the
required rows in the database. This problem has been fixed.
24977 Variables can now be used as filenames in the LOOKUP_EXT
function. Now you can design jobs to use LOOKUP_EXT
without having to hard-code the name of a lookup file at design
time. For additional information, see LOOKUP_EXT in the
Functions chapter of the Reference Guide.
24907 When using WEBI version 5, documents produced using
BusinessObjects only appeared in Data Integrator’s Metadata
Reports utility if they had been refreshed using WEBI InfoView.
At times, WEBI stopped logging information and the reports did
not appear. This issue is fixed.
24876 After migrating from MS SQL Server 7.0 to SQL Server 2000, a
unique constraint violation could occur. This was due to case
sensitivity issues with table owner names. This issue is fixed.
24808 After executing a job in which you defined global variables and selected the Perform complete validation before job execution option (from the Tools > Options > Designer > General menu), if you performed an export or used the multi-user development interface, an unexpected set of work flow or data flow level parameters appeared. This issue is fixed.
24780 Impact Analysis did not report on memory tables as source data
to Business Objects Universes or Documents. This issue is fixed.
24774 Impact Analysis did not report on XML messages as source data
to Business Objects Universes or Documents. This issue is fixed.
24688 When the Delete data from table before loading option on the
target table editor was used for an MS SQL Server table, the SQL
operation was slow. This issue is fixed.
24645 If the Administrator was not on the local host, you could not open Tools > Metadata Reports from the Data Integrator Designer. This issue is fixed.
24565 Data Integrator will now validate schedule names prior to saving
them.
24512 The Refresh report content section on page 256 of the Designer Guide has been updated to reflect how the Calculate Column Mappings command interacts with the ALVW_Mapping view.
24303 Neither the API method of the Bulk load option on the Oracle target table editor nor the SQL Loader could load a table that was partitioned by range if it had a primary key.
• The Oracle bulk loader would fail with the error message: ORA-26002 Table FOO has index defined upon it.
• SQL Loader would fail with the error message: (sql*loader-951) ORA-26017
This issue is fixed.
24154 The Designer hung when mapping MS SQL tables with column
names ending with a period. With this fix the Designer will no
longer hang. Note that you are still required to follow standard
convention for putting quotes around a column name which
includes punctuation.
23917 Data Integrator's Auto correct load option (for target tables)
failed for NCHAR or NVARCHAR data types if the code page in
the Oracle database did not support these characters in the
pushdown SQL. For example, the following situations were
known to cause problems:
• The database code page was English, WE8ISO88591
• The code page of the NCHAR column in the database was UCS2
• The submitted SQL had Japanese characters.
This issue is fixed.
23823 If you logged into the Designer after stopping and restarting the Data Integrator Service, prior jobs were not visible in the Log tab. With this fix, previously run jobs are now visible.
23765 Using a very large script would cause an internal exception in the
Designer. This issue is fixed.
23689 The Designer will now show job logs when third-party
schedulers are used.
23525 When Data Integrator generates an SAP R/3 ABAP program, it adds the line:
write '* Program Complete * Copyright Business Objects Data Integration, Inc.'
This creates a problem when a client connects to the output printed by an ABAP program. A new configuration parameter has been added to suppress the printing of this line in ABAP programs.
To set the option, from the Designer select Tools > Options > Job Server > General and enter al_engine for the Section, PrintAbapProgramComplete for the Key, and FALSE for the Value. The default is TRUE.
23474 When an expression for a column mapping was larger than 2198
characters, Data Integrator failed with a parsing error. This
problem has been fixed.
21965 Text data converted to integer data that had more than 10 digits potentially produced incorrect output without an error message. This issue is fixed.
21884 Deleting logs before the current log had finished scrolling to the
bottom of the workspace would cause Data Integrator to freeze.
This issue is fixed.
21690 Refresh issues in the Designer have been addressed. Now when
you make updates to datastore login dialogs during a repository
export procedure, the changes are reflected in subsequent
dialogs. For example, if you are exporting datastores to one
repository and you notice that a file exists in the target
repository, you can press the Back button and change login
information to point to another datastore. The datastore page of
the export wizard then updates to tell you if the table exists in
the new datastore. The previous behavior did not refresh this
information when you changed a datastore login.
21416 The Data Integrator validation functions is_valid_date, is_valid_datetime, and is_valid_time were returning warning messages for every invalid date. This produced a very large log file and took excessive time to run. Beginning with this release, warning messages do not occur. You can determine whether a date is valid from the return value of these functions, as sketched below.
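A hypothetical Data Integrator script sketch of that pattern; the variable names and the 'YYYY.MM.DD' format string are assumptions for illustration:
# Branch on the return value instead of relying on per-row warnings
if (is_valid_date($G_InputDate, 'YYYY.MM.DD') = 1)
begin
$G_LoadDate = to_date($G_InputDate, 'YYYY.MM.DD');
end
else
begin
$G_LoadDate = NULL;
end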
21252 When using a vertical Hierarchy_Flattening transform, standalone elements would lose their attributes.
Two cases have been fixed:
• Parent column is NULL but child column is NOT NULL. This is a child node without a parent node. Previously Data Integrator would output a row with parent and child attributes set to NULL. With this fix the output row will contain the child attributes obtained from the input row. The parent attributes of the output row are still set to NULL.
• Parent column is NOT NULL but child column is NULL. This is a parent node without children. Previously Data Integrator would output a row with parent and child attributes set to NULL. With this fix the output row will contain the parent attributes obtained from the input row. The child attributes of the output row are still set to NULL.
20914 The data that the SYSDATE and SYSTIME functions actually read was not documented, nor were the methods that can be used to format or trim the date or time data that these functions read. The documentation for the SYSDATE and SYSTIME functions is now updated.
20451 When column data types did not match in the WHERE clause of
an outer join, Data Integrator performance was severely
degraded compared to performance when all column types
matched. This issue is fixed.
20379 The following validation error sometimes appeared when editing jobs that included variables that differed only in case: "Duplicate name detected: Global variable <$LOAD_TYPE> and parameter <$load_type>. Global variables MUST have unique names." This validation error no longer appears.
17935 If you checked out a template table from a central repository
then changed it into a database table (using the "Import Table"
command), Data Integrator did not preserve the checked-out
status of the table. This issue has been fixed.
Known issues
The following notes list known issues in descending order by
Business Objects tracking number.
27974 On Solaris, DataDirect Technologies’ Informix client does not support using the ONSOCTCP communication protocol. Until DataDirect provides a fix, they recommend that you use either the ONTLITCP or SETLITCP communication protocol.
By default, Data Integrator uses the ONSOCTCP communication
protocol. Therefore, on Solaris, you must change the
communication protocol configuration found in the Informix
SQLHOSTS and the ODBC.ini files to use either ONTLITCP or
SETLITCP communication protocols.
27973Because the file name of the Universal Metadata Bridge (UMB)
Guide (universal_metadata_bridge_guide.pdf) is longer than the
27.3 format limits for Data Integrator UNIX media, the file name
is truncated and becomes
"UNIVERSAL_METADATA_BRIDGE_G.PDF". If you install the
UMB from the UNIX package, this file will not be copied to the
UMB installation folder/doc directory and you will not be able to
open the UMB Guide from the UMB Designer Help menu.
If you install the UMB from the UNIX package, you must
manually copy the "UNIVERSAL_METADATA_BRIDGE_G.PDF"
file from CD directory\BusinessObjectsMetadataBridge\Guide\EN
to the UMB installation folder\doc\ folder and rename the PDF to
“universal_metadata_bridge_guide.pdf”.
27911In embedded data flows, column mapping is calculated for
parent data flows only. Child data flow information is not stored
in repository tables.
27903Refresh is not working properly in the Projects tab of a central
repository.
27419When using WEBI version 6, BusinessObjects documents do not
appear in Data Integrator Metadata Reports when the Auditor
database is DB2.
27397You cannot save a data flow’s description when changing it in
the workspace, although you can when changing it from the
object library.
Work Around: Update descriptions using the object library.
27390When using the Get by label option of the multi-user
development interface with a non-existent label, Data Integrator
returns the latest version of the objects.
27342Data Integrator takes too long to return data from an SAP R/3
table to the Profile tab of the View Data feature.
27336See 27335.
27335If you are using the PeopleSoft HR Rapid Mart version 2.0, make
the following two modifications to run it with Data Integrator
version 6.5:
•In the DF_DeltaCurrencyCodes_PSF data flow, set the query
transform to set the Order By option prior to the
Table_comparison transform (sorted input). This allows the
comparison to function properly.
•In the PSHR_Create_Ora_Indexes script, change Unique Index
to Index in the SQL script. In the job, some of the data flows
(for example, the DF_CurrencyCodes_PS data flow)
generate a Key field for the target table, which might cause
conflicts if you do not make this change.
27048Column Profiling is not supported in the View Data feature on
SAP R/3 cluster and pool tables.
26355When comparison tables are on SQL Server, the Table
Comparison transform’s Sorted input option generates incorrect
results. The Data Integrator engine runs in binary collation while
the SQL Server database runs in a Latin1 collation sequence. As a
result, SQL Server generates the data in a different sort order.
This issue will be addressed in a later release.
26015When loading to a DB2 version 8 database, the DB2 Load Utility
fails when the Data File On Client Machine option is left
unchecked on the target table editor’s Bulk Loader Options tab.
The job terminates after the load goes through.
Work Around: Check the Data File On Client Machine option
while loading to DB2 version 8.
26005You cannot add a comment about a column mapping clause in a
Query transform. For example, the following syntax is not
supported on the Mapping property sheet:
table.column ## comment
The job will not run and the job cannot be successfully exported.
Work Around: Use the object description or workspace
annotation feature instead.
25815In the Designer when inserting, updating, or deleting metadata
for a table column, the information value is not saved.
Work Around:
1. In the Class Attributes tab of a table’s Properties dialog, add
new class attributes.
2. Close all windows and restart the Designer.
3. In the Attributes tab of a table’s Properties dialog, specify the
values for the new class attributes.
4. Click OK.
25713You may get the following error when executing a Data
Integrator job that contains an Oracle stored procedure call
<Schema2>.<Func>, where the user name of the datastore that
contains the function is <Schema1>:
DBS-070301: Oracle <ora_connection> error message for
operation <OCIStmtExecute>: <ORA-06550:
PLS-00225: subprogram or cursor '<Schema2>' reference is out
of scope
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored>.
Work Around: Check the schema <Schema1> to which the
Oracle datastore belongs for any procedure with the name
<Schema2>. If the procedure exists, rename it.
25579The MQ queue length must be three times larger than the actual
XML message size.
25578When processing large MQ messages, you must use IBM's AMI
repository. If you do not use the AMI repository, the PUT
operation could fail.
25335An Oracle error, "Cannot bind a LONG value to
VARCHAR" may occur when executing a job containing a
varchar column greater than 444. This happens when the code
page setting in a datastore, file format, Job Server, database
client, and database server are not the same. Due to the different
code page settings, Data Integrator’s transcoding operation may
extend the internal size to Oracle’s limitation (4000 for varchar).
Work Around: Set all code pages to the same value.
24749For multi-byte code pages, the multi-byte engine holds a greater
number of bytes than you allocate. For example, varchar(3)
in ms1252 allocates up to three bytes, but varchar(3) in
utf8 allocates up to nine bytes. When printing data to a flat file,
the code page engine prints all nine bytes. Ideally the engine
would print only three characters, but doing so has an adverse
impact on performance, so all nine bytes are printed.
24746Data Integrator cannot import metadata through an adapter if it
includes a DTD or XML Schema that begins with the header
<?xml encoding="UTF-8"?>.
Work Around: Remove the header line.
24416The ODBC datastore does not support NCHAR or NVARCHAR in
the 6.5 release due to the following limitations:
•ODBC does not import NCHAR or NVARCHAR columns
•If the data type is NCHAR or NVARCHAR, the column size
might be imported incorrectly
24406View Data (formerly known as Data Scan) does not work for a
case transform if the transform name is a reserved keyword. By
default, Data Integrator generates “case” as the transform name
for the case transform. Since case is a reserved word, View Data
will not work if you leave the transform name unchanged.
Work Around: Change the transform name to a non-reserved
word, like case1, case2, etc.
24379Jobs running on AIX that call the system_user_name() function
from a transform consume too much memory. This is due to a
bug in AIX.
Work Around: Initialize a global variable with the value of
system_user_name() and use the global variable in the transform,
or install a version of libc containing APAR IY45175 from IBM.
24223When bulk loading to Oracle using the API method, Data
Integrator cannot load a case-sensitive partitioned target table
name.
The following error message is displayed:
ORA-02149: The specified partition does not
exist.
Work Around: Use the File method and select the Direct path
option to handle case-sensitive partition names.
24195For the DB2 database, when the Data Integrator engine pushes
down a SELECT statement and the SELECT statement has an
upper/lower function for a graphic/vargraphic data type,
then the describe column (ODBC library call) returns a char
data type instead of a graphic data type. This is an ODBC bug.
Work Around: Set the datastore’s code page to utf8 (or a
similar superset code page).
24013When you specify a length with lpad(' ', 41) for the varchar
data type, it returns different lengths when it is pushed down to
an Oracle database if the bind column is NCHAR or
NVARCHAR2. The issue is with the Oracle OCI client.
Work Around: Do not use Data Integrator’s Auto-correct load
option (on the target table editor) for an Oracle table when you
want to specify a length with lpad(' ', 41) for the varchar
data type.
23941For Oracle, the load trigger for an NCHAR or NVARCHAR data type
fails if the code page in the database does not support these
characters in the pushdown SQL. Consequently, Oracle fails to
execute the SQL properly. For example, the following situations
are known to cause problems:
•The database code page is english, we8ISO88591
•The code page of the NCHAR column in the database is ucs2
•The submitted SQL has Japanese characters
Work Around: Do not use Data Integrator’s Auto-correct load
option (on the target table editor) when processing NCHAR or
NVARCHAR data types.
23938If the code page used by the Oracle client is set to certain values
(it works for utf8 and fails for we8winms1252), the bind
column size is greater than 1312, and Data Integrator uses the
same Oracle connection more than twice during internal
processing, then Oracle throws the following error:
ORA-01460: unimplemented or unreasonable
conversion requested
23852For MS SQL Server, if a concatenation operator is used in a
stored procedure, and if the code page for the database is ucs2,
then the ODBC library fails to concatenate the data properly. The
issue is with the Microsoft ODBC driver.
Work Around: Do not allow Data Integrator to push down a
concatenation expression to the database. One way to do this is
to construct the job so that data is loaded into a flat file and the
concatenation operation occurs in another query. Thus, Data
Integrator cannot push down the SELECT statement because the
target does not have the operation it needs.
23816The validation for a real-time job might have an error after
upgrading the repository from a previous version of Data
Integrator to Data Integrator version 6.0 or higher. This could
occur because some attributes have not been populated properly.
Work Around: After upgrading your repository:
1. From the Designer’s object library, right-click a real-time job
and select Open.
2. Click a data flow’s name to open a data flow in your
real-time job.
3. Save the data flow by selecting Project > Save All.
4. Check to see that the new version of Data Integrator has the
correct information for your data flow by validating the job
using the Debug > Validate > All objects in view option.
5. Repeat these steps for each data flow in your upgraded
real-time jobs.
23811For Oracle, NCHAR or NVARCHAR data types do not work for
database procedures and database functions when one of the
parameters has an NCHAR or NVARCHAR column and the
datastore code page set in Data Integrator does not understand
NCHAR or NVARCHAR encoding. In these cases, the Data
Integrator engine handles NCHAR or NVARCHAR values as if
they were varchar data types.
23687After you restart an adapter instance, the service that uses it fails
to process the next message it receives.
Work Around: When you restart an adapter instance, also restart
its associated services.
23097Informix datastore tables, once imported to the Data Integrator
repository, cannot be deleted.
Work Around: Shut down the Designer and re-open it.
23005When reading XML schema data, you may encounter the
following error message:
Type of attribute 'lang' must be validly
derived from type of attribute in base...
Work Around: Change your schema to avoid
use="prohibited" for any attribute or element whose type
is derived from xsd:language.
22979If you are using the Administrator and you disconnect your
computer from the network and reconnect it, or change your
computer’s IP address, the Administrator will display a
database connection error.
Work Around: In the Administrator, change your repository
name. This change forces the Administrator to drop and recreate
the connection to the database.
22821In this release, you may see an error message when you try to
import an XML file through the Metadata Exchange CWM
interface when the XML file was generated by Data Integrator.
Work Around: Delete the line in the file that refers to
CWM_1.0.DTD before importing.
22690If you have a data flow that includes mapping a varchar
column to a datetime column in a query transform, the job
will give an access violation error if any of the date values are
invalid.
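One defensive pattern (not a documented workaround, just a
sketch using the validation functions described under issue
21416; the column name and format are illustrative) is to map
the column through a guard such as:
ifthenelse(is_valid_datetime(START_TS, 'YYYY.MM.DD HH24:MI:SS') = 1,
           to_date(START_TS, 'YYYY.MM.DD HH24:MI:SS'), NULL)
so that invalid values become NULL instead of triggering the
access violation.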
21659The Adapter SDK has some syntax limitations for DTDs that can
be imported for use as an adapter source or target:
•Conditional syntax cannot be included in the DTD file.
Sample syntax that fails:
<!-- Common ebXML header-->
<!ENTITY % INCLUDE_EBHEADER "INCLUDE">
<![ %INCLUDE_EBHEADER;
[<!ENTITY % eb:MessageHeader SYSTEM
"ebHeader.dtd">%eb:MessageHeader;]]>
Work Around: The following syntax is supported and
produces the same result:
<!ENTITY % eb:MessageHeader SYSTEM
"ebHeader.dtd"> %eb:MessageHeader;
•XML version information cannot be included in the DTD
file. Sample syntax that fails:
"<?xml version="1.0" encoding="UTF-8"?>"
Work Around: Eliminate the line from the file.
NOTE: These limitations only apply to DTD files read by an
adapter, not those read by the Designer. The only file the
adapter reads is the one configured in the adapter operation.
Adapters do not read embedded DTD files.
20995XML targets expect a perfect match between the target's schema
and its input schema. In this release, you do not have to provide
mapping expressions for columns that are optional in the DTD.
This change allows you to cut and paste the target's schema into
the preceding query transform and then map only the needed
columns and all nested tables.
19945If, just prior to importing a job, a change is made to the job (as
simple as moving an icon), upon completion of the import you
will be prompted to save the changes to the job. If you respond
Yes, the import will be overwritten.
Work Around: Either do not save the changes when prompted or
re-import the job.
19893The mail_to function does not work on Windows 2000 or
Windows NT, after downloading a security patch for Microsoft
Outlook.
Work Around: To send e-mail out from Data Integrator after
installing a security patch, follow the instructions from the
following web site:
The instructions on the web site show how to suppress
prompting when sending e-mail from a particular computer.
First a security policy file is created on the exchange server.
Second, the registry key is added to a client machine that tries to
send e-mail programmatically. This client machine is the
machine on which you are running the Data Integrator Job
Server. When a client machine tries to send an e-mail, the
Outlook client first checks the registry key. If this key is set,
Outlook looks at the security policy file on the exchange server.
In the security policy file, turn off prompting and the Data
Integrator mail_to function will work.
18907On UNIX systems running LiveLoad, you must set three
LiveLoad parameters in the UNIX Job Server configuration file:
• LLReconcilePendingTimeout
• LLLivePendingTimeoutExt
• LLLivePendingTimeout
These values must be the same in the configuration file on the
Designer’s machine. Contact Business Objects Technical Support
for information about how to set these parameters.
18519Using the Server Manager to configure new Job Server instances
causes the Data Integrator Service to shut down and restart. As a
result, running Job Servers and Access Servers are shut down and
restarted. In most cases, this process is normal and reaches
equilibrium quickly. However, when one of the Job Servers is
managing an adapter instance, the shutdown process takes
longer.
During the shutdown, the Data Integrator Service attempts to
start a new instance of the Job Server which in turn starts another
instance of the adapter. The result is that many instances of the
Job Server and adapter are trying to run at one time.
Work Around: Stop the Data Integrator Service before making
changes through the Server Manager when your system includes
adapters.
18230The key_generation function/transform can produce an
unexpected value when used against a table in the same data
flow that is truncated before loading. The key_generation
function determines the appropriate key value (the maximum of
the value from the table) before the table is truncated.
Work Around: Use another method to reset the key to 1 each
time the data flow is executed. For example, include the
command to truncate the table in a script that is called before
running the data flow.
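For example, a minimal sketch of such a script step, placed
before the data flow in the job (the datastore and table names
are hypothetical):
sql('Target_DS', 'truncate table owner.SALES_FACT');
Because the table is truncated before the data flow starts,
key_generation then computes its starting key from the empty
table.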
17538This release does not support bulk loading data to DB2
databases running on AS/400 or MVS systems. However, the
options to turn on bulk loading continue to be available in the
Designer. Data Integrator does not distinguish between DB2
operating systems for bulk loading.
Migration considerations
The following sections describe changes in the behavior of Data
Integrator from previous releases. In most cases, the new version
avoids changes that would cause existing applications to change
their results. However, under some circumstances a change has
been deemed worthwhile or unavoidable.
This section includes:
•Brief installation instructions
•Execution of to_date and to_char functions
•Changes to Designer licensing
•License files and remote access software
•Administrator Repository Login
•Administrator Users
•Web services support
•LiveLoad support
•Sybase bulk loader library on UNIX
Brief installation instructions
Complete installation instructions are in the Data Integrator
Getting Started Guide (for the stand-alone version, open the
DIGettingStartedGuide.pdf in the Document folder on the
installation CD).
" To install without a previous version of Data Integrator:
Follow the instructions in the Data Integrator Getting Started
Guide to create a database location for a Data Integrator
repository, run the installation program, and register the
repository in the Data Integrator Administrator.
" To install with an existing version of Data Integrator:
1. Using your existing Data Integrator or ActaWorks version,
export the existing repository to a file as a backup.
2. Create a new repository and import the ATL file created in
step 1.
3. To use configuration settings from the previous version of
Data Integrator or ActaWorks, create the new installation
directory structure and copy the following files from your
Data Integrator or ActaWorks installation into it. By default
in Windows, the installation structure is:
•Administrator settings: the \conf directory
•Repository, Job Server, and Designer settings:
\bin\dsconfig.txt
4. Uninstall your previous version of Data Integrator or
ActaWorks.
Business Objects recommends uninstalling, particularly
when you are upgrading from ActaWorks to Data Integrator.
5. Install Data Integrator in the default location given by the
install program.
If you set up an installation path in step 3, use that path.
6. During the installation, upgrade the existing repository to
6.5.0.1.
7. If you used the configuration settings from the previous
version, validate that the existing Job Server and Access
Server configuration appears when you reach the server
configuration section of the installer.
8. After the installation is complete, open the Administrator at
the browser location http://<hostname>:28080
9. Re-register (add) the repository.
Browsers must support applets and have Java enabled
The Administrator and the Metadata Reporting tool require
support for Java applets, which is provided by the Java Plug-in. If
your computer does not have the Java Plug-in, you must
download it and make sure that your security settings enable
Java to run in your browser. Business Objects recommends you
use Java Plug-in 1.3.1.
To find a plug-in go to:
http://java.sun.com/products/plugin/downloads/index.html.
Execution of to_date and to_char functions
Prior to release 6.0, Data Integrator did not always push the
functions to_date and to_char to the source database for
execution when it was possible based on the data flow logic. The
fix for this improved optimization requires that the data format
specified in the function match the format of the extracted data
exactly. The result can be significant performance improvement
in flows that use these functions.
It may be that your existing function calls do not require a
change because the format already matches the data being
extracted. If the formats do not match, this change is required. If
your function calls do not correspond to this new format
requirement, you will see an error at runtime.
Compare the examples below with your to_date or to_char
function calls to determine if a change is needed.
For example, the WHERE clause of the query transform in the
DF_DeltaSalesOrderStage of the Customer Order Rapid
Mart (was eCache) includes the following function calls:
WHERE
to_date(to_char(SALES_ORDER_XACT.LOAD_DATE,
'YYYY.MM.DD HH24:MI:SS'), 'YYYY.MM.DD')
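A hypothetical contrast, assuming the extracted data matches the
'YYYY.MM.DD HH24:MI:SS' pattern shown above:
to_char(SALES_ORDER_XACT.LOAD_DATE, 'YYYY.MM.DD HH24:MI:SS')
succeeds because the format matches the extracted data, while
to_char(SALES_ORDER_XACT.LOAD_DATE, 'MM/DD/YY')
raises an error at runtime because the format does not match.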
Similarly, the SELECT list of the ConvertTimes query transform
in the DF_Capacity_WorkCenter data flow of the
Profitability Rapid Mart contains function calls that are subject
to the same format-matching requirement.
Changes to Designer licensing
Data Integrator Designer version 6.5 does not support floating
licenses. If you are currently using floating Designer licenses,
obtain a new license key before upgrading. Send your request to
licensekey@businessobjects.com with Data Integration License
Keys as the subject line. The new key will be added to your
existing License Authorization Code.
For more information, see the Online Customer Support site at
http://www.techsupport.businessobjects.com. Search for the
Article titled “Business Objects Data Integrator 6.5: Changes to
Designer License Keys Q&A”.
License files and remote access software
In release 6.0 and higher, Data Integrator includes a specific
change to license support when activating Data Integrator
components across a terminal server, such as PC Anywhere or
Terminal Services Client. You must request new license files to
use release 6.x when using a terminal server. Specifically, you
require:
•New Designer licenses to initiate a Designer session across a
terminal server connection (when using evaluation,
emergency, or node-locked licenses)
•New Job Server and Access Server licenses to open the Service
Manager Utility across a terminal server connection (when
using any type of license)
If you do not make the license change, you may see the error
“Terminal Server remote client not allowed.” If you require
terminal services, contact licensekey@businessobjects.com.
Administrator Repository Login
In release 6.0 and higher, in the Data Integrator Administrator,
all repository connectivity occurs using JDBC; hence the
previous connectivity information is either invalid or
incomplete. When you use the Administrator, you must
re-register all of your repositories.
Use the following connection information guidelines:

Database Type  Connection Field  Description
All types      Repository name   Any logical name.
All types      Machine name      Machine name on which the database server is
                                 running.
All types      Port              Port on which the database server is listening.
                                 The Administrator provides default values; if
                                 you do not know the port number, use the
                                 default.
Oracle         Service name/SID  The database instance name to connect to.
                                 Depending on whether you are using an 8.0.2
                                 or 8i database, it is either the SID or the
                                 Service name.
DB2            Datasource        Datasource name as configured in the DB2
                                 client configuration utility. This is the same as
                                 in previous Administrator or Designer
                                 connections.
Sybase         Database name     Database instance name.
Microsoft      Database name     Database instance name.
SQL Server
Informix       Server name       The name of the machine on which the server
                                 is running.
Informix       Database name     The name of the database instance running on
                                 the server.
Administrator Users
Administrator user configuration information in ActaWorks
version 5.x was saved in a primary repository. When you upgrade
to Data Integrator version 6.x, this information is lost. You must
reconfigure your Administrator users manually.
Web services support
When using Data Integrator with Web services, the following
limitations exist:
•SAP R/3 IDoc message sources in real-time jobs are not
supported
•The Data Integrator Web services server uses the first Job
Server in the list of those available for a batch job. The first
Job Server might be a server group.
•If you are using Web services and upgrading to version
6.5.0.1, Web services server (call-in) functionality that you
created with previous versions of Data Integrator is not
supported. Perform the following changes to manually
upgrade the Web services server:
!Regenerate a WSDL file
Prior to release 6.5.0.1, every batch job and real-time
service configured in Data Integrator was published in
the WSDL file. In release 6.5.0.1, Data Integrator only
publishes batch jobs and real-time services that you
select. Clients that call Data Integrator using Web services
will fail until you authorize the jobs and services being
invoked. See “Building a WSDL file” on page 101 of the
Data Integrator Release Summary for instructions about
how to select jobs, enable security, and generate a WSDL
file for a Web services server.
!Change SOAP calls to match those in the 6.5.0.1 version.
A session ID is not included in SOAP messages unless
you enable security for the Web services server. Prior to
release 6.5.0.1, the
sessionID message part was published
in every SOAP message.
To fix this problem, change SOAP calls to match those in
the 6.5.0.1 version of the WSDL file.
•If security is off, remove the sessionID line from the
SOAP input message.
•If security is on, copy the sessionID from the SOAP
body to the SOAP header, as a session ID now appears
in the SOAP header instead of the body.
NOTE: When you install 6.5.0.1, the installer automatically
upgrades the Web Services Adapter and your existing Web
services clients will function properly.
LiveLoad support
The LiveLoad feature does not save data from a master database
to then be applied (reconciled) to a mirror database. Only the
switching capability from "live" to "load" is supported. To
obtain the same results achieved in previous releases, design
your flows so that the data loaded into the load database begins
from the last time it was loaded. For example, on LiveLoad
databases that are loaded and switched daily, load two days’
worth of data in each run.
Sybase bulk loader library on UNIX
The installed Sybase client on UNIX platforms does not include
the Sybase bulk loader dynamic shared library that Data
Integrator requires. Please refer to the README in either the
CDROM Location/unix/PLATFORM/AWSybase directory or the
$LINK_DIR/AWSybase directory for instructions about how to:
•Confirm that the library is available in the Sybase client
installation
•Build the library
Documentation corrections
Please note the following last-minute corrections to the Data
Integrator 6.5.0.1 manuals. These changes will be included in
the next release of the manual set:
•Administrator Guide
•Designer Guide
•Reference Guide
•Supplement for J.D. Edwards
Administrator Guide
Page 11Chapter 2, “Administrator User Interface” — Under the
“Navigation” section under “Navigation tree”, change the first
sentence to the following:
The navigation tree is divided into six nodes: Batch, Real-Time,
Web Services, Adapters, Server Groups, and Management.
Page 13Chapter 2, “Administrator User Interface” — Under the
“Navigation” section under “Navigation tree”, add the following
section after the “Real-time node” section:
Web Services node
Use this node to select real-time and batch jobs that you want to
publish as Web Service operations and to monitor the status of
those operations. You can also use the node to set security for
jobs published as Web Service operations and view the WSDL
file that Data Integrator uses as a Web services operations server.
For more information, see Chapter 5, “Support for Web
services,” in the Data Integrator Release Summary.
Page 27Chapter 3, “Administrator Management” — Delete the “Support
for Web Services” section.
The node for Web services in the navigation tree of the
Administrator has been moved from under the Management
node up one level. The Web Services node is now one of the first
level nodes in the navigation tree.
Page 115Chapter 9, “Support for Web services” — Has been rewritten and
added to the Release Summary for Data Integrator version
6.5.0.1. See Chapter 5, “Support for Web services,” in the Data
Integrator Release Summary.
Designer Guide
Page 58Chapter 3, “Designer User Interface” — Under the section
“Saving and deleting objects,” replace “To save all changed
objects in the repository” with the following:
1. Choose Project > Save All.
Data Integrator lists the reusable objects that have been
changed since the last save operation.
2. (optional) De-select any listed object to avoid saving it.
3. Click OK.
NOTE: Data Integrator also prompts you to save all objects
that have changes when you execute a job and when you exit
the Designer. Saving a reusable object saves any single-use
object included in it.
Page 72Chapter 4, “Projects and Jobs” — Under the section “Saving
Projects,” replace “To save all changes to a project” with the
following:
1. Choose Project > Save All.
Data Integrator lists the jobs, work flows, and data flows that
you edited since the last save.
2. (optional) De-select any listed object to avoid saving it.
3. Click OK.
NOTE: Data Integrator also prompts you to save all objects
that have changes when you execute a job and when you exit
the Designer. Saving a reusable object saves any single-use
object included in it.
Page 160Chapter 7, “Data flows” — Under the section “To specify that a
batch job executes a data flow one time,” add the following to
the end of the first paragraph:
For more information about how Data Integrator processes data
flows with multiple conditions like execute once, parallel flows,
and recovery, see “Data flow” on page 32 of the Data Integrator
Reference Guide.
Page 188Chapter 8, “Work flows” — Under the section “To specify that a
batch job executes the work flow one time,” add the following to
the end of the first paragraph:
For more information about how Data Integrator processes work
flows with multiple conditions like execute once, parallel flows,
and recovery, see “Work flow” on page 142 of the Data Integrator
Reference Guide.
Page 311Chapter 12, “Variables and Parameters” — Under the section
“Local and global variable rules,” add the following to the end of
the section:
For information about how Data Integrator processes variables
in work flows with multiple conditions like execute once,
parallel flows, and recovery, see “Work flow” on page 142 of the
Data Integrator Reference Guide.
Page 330Chapter 13, “Executing Jobs” — Under the section “Changing
Job Server options,” add the following to the Job Server options
table:
Options                   Option Description                                Default Value
Display DI Internal Jobs  Displays Data Integrator’s internal datastore     FALSE
                          CD_DS_d0cafae2 and its related jobs in the
                          object library. The CD_DS_d0cafae2 datastore
                          supports two internal jobs. The first calculates
                          usage dependencies on repository tables and
                          the second updates server group configurations.
                          If you change your repository’s password, user
                          name, or other connection information, change
                          this option’s default value to TRUE, close and
                          reopen the Designer, then update the
                          CD_DS_d0cafae2 datastore configuration to
                          match your new repository configuration. This
                          enables the calculate usage dependency job
                          (CD_JOBd0cafae2) and the server group job
                          (di_job_al_mach_info) to run without a
                          connection error.
UseDomainName             Adds a domain name to a Job Server name in        TRUE
                          the repository. This creates a fully qualified
                          server name and allows the Designer to locate
                          a Job Server on a different domain.

Also add these options to the table under step 3 in the “To change
option values for a given Job Server” section.
Page 396Chapter 15, “Design and Debug” — Under the section “View
Data pane”, add the following after the last bullet:
•Allows you to flag a row that you do not want the next
transform to process.
To discard a row from the next step in a data flow process,
select it and click Discard Row.
Discarded row data appears in the Strikethrough style in the
View Data pane.
If you discard a row accidentally, you can undo the discard
immediately afterwards. Select the discarded row and click
Undo Discard Row.
Alternatively, right-click a row and select either Discard Row
or Undo Discard Row from the shortcut menu.
Page 419Chapter 17, “Metadata Reporting” — Under “Viewing metadata
reports” add the following as the first bullet:
•From a Windows computer, select Start > Programs >
Business Objects Data Integrator version > Metadata Reports.
Page 431Chapter 17, “Metadata Reporting” — Under “Column Mapping”
in the “Metadata analysis categories” section add the following:
If the target column is mapped from a column in a nested
(NRDM) source schema, the target column is nested, or an
expression in the data flow references a nested column, the value
for the target column name is “__DI_NESTED_COLNAME_n”. In
this case, create a custom report to trace column lineage. For
information about how Data Integrator stores lineage
information after processing nested data, see “Storing nested
column-mapping data” in the Documentation corrections
section.
Page 456Chapter 17, “Metadata Reporting” — Under the section “Refresh
report content,” add the following after the third paragraph:
NOTE: If you change configuration settings for your repository,
you must also change the internal datastore configuration for the
calculate usage dependencies operation. For more information,
see “Display DI Internal Jobs” under “Changing Job Server
options” on page 331.
Page 456Chapter 17, “Metadata Reporting” — Under “Refresh report
content” in the “Metadata analysis categories” section replace the
fourth paragraph with the following:
The Calculate Column Mappings option populates the internal
ALVW_MAPPING view and the AL_COLMAP_NAMES table. The
ALVW_MAPPING view provides current data to Metadata
Reports. If you need to generate a report about a data flow that
processes nested (NRDM) data, query the AL_COLMAP_NAMES
table using a custom report. For more information, see “Storing
nested column-mapping data” in the Documentation
corrections section.
Page 465Chapter 18, “Recovery Mechanisms” — Under the section
“Marking recovery units,” add the following to the end of the
second paragraph:
For more information about how Data Integrator processes data
flows and work flows with multiple conditions like execute once,
parallel flows, and recovery, see “Data flow” on page 32 of the
Data Integrator Reference Guide and “Work flow” on page 142 of the Data Integrator Reference Guide.
Reference Guide
Page 33Chapter 2, “Objects” — In the “Data flow” section, add the
following bullets after the paragraph that introduces the Execute
only once option.
•Note that if you design a job to execute the same Execute
only once data flow in parallel flows, Data Integrator only
executes the first occurrence of that data flow and you cannot
control which one is executed first. Subsequent flows wait
until the first is processed. The engine provides a wait
message for each subsequent data flow. Since only one
Execute only once data flow can execute in a single job, the
engine skips subsequent data flows and generates a second
trace message for each, “Data flow n did not run more than
one time. It is an execute only once flow.”
•Note that the Execute only once data flow option overrides
a work flow’s Recover as a unit or job’s Enable recovery
option. For example, if you design a job to execute more
than one instance of the same Execute only once data flow
and execute the job in recovery mode, if the job fails Data
Integrator checks to see if the data flow successfully ran. If
any instance has run successfully, Data Integrator displays
the following trace message, “Data flow n recovered
successfully from previous run.” If no instance ran
successfully, Data Integrator executes the data flow in the
next run and skips subsequent instances of that data flow.
Page 33Chapter 2, “Objects” — In the “Targets” section, add the
following after the section for the
Use input keys option.
Update key columns
This option is deselected by default, which means that Data
Integrator does not update key columns. If you select this
option, Data Integrator updates key column values when it loads
data to the target.
Page 132Chapter 2, “Objects” — in the “Target” section, in the Teradata
target table options, the only Mode options are append and
replace (“none” is not an option and was removed). Also, the
Mode description should specify the following:
NOTE: Regardless of which Teradata bulk loading method you
choose:
•In append mode, Data Integrator will not truncate the target
table.
•In replace mode, Data Integrator will explicitly truncate the
target table prior to executing the Teradata load script. Data
Integrator truncates the table by issuing a "truncate table"
SQL statement.
•When you supply custom loading scripts (FastLoad, TPump,
or MultiLoad), Data Integrator will not parse or modify
those scripts.
Page 143Chapter 2, “Objects” — In the “Work flow” section, add the
following bullets after the paragraph that introduces the Execute
only once option.
•If you design a job to execute the same Execute only once
work flow in parallel flows, Data Integrator only executes the
first occurrence of the work flow in the job. You cannot
control which parallel work flow executes first. Subsequent
flows wait until the first is processed. The engine provides a
wait message for each subsequent work flow. Since only one
Execute only once work flow can execute in a single job, the
engine skips subsequent work flows and generates a second
trace message for each, “Work flow n did not run more than
one time. It is an execute only once flow.”
•If you design a job to execute more than one instance of the
same Execute only once work flow, it is your responsibility to
manage the values of output variables. Data Integrator only
processes one such work flow per job. Subsequent instances
of the work flow do not run and do not affect the values of
variables in the job.
•The Execute only once work flow option overrides a work
flow’s Recover as a unit or job’s Enable recovery option.
For example, if you design a job to execute more than one
instance of the same Execute only once work flow and
execute the job in recovery mode, if the job fails Data
Integrator checks the work flow to see if it successfully ran.
!If any instance of the work flow ran successfully, Data
Integrator displays the following trace message, “Work
flow n recovered successfully from previous run.” Also, if
the work flow outputs a variable, the variable value is
preserved and used as designed when the job completes.
!If no instance ran successfully, Data Integrator executes
the work flow in the next run and skips subsequent
instances of that work flow.
Page 533Chapter 8, “Metadata in Repository Tables and Views” — Replace
the existing “ALVW_MAPPING” section with the following:
ALVW_MAPPING
The ALVW_MAPPING view joins the AL_COLMAP and the
AL_COLMAP_TEXT tables. These tables contain information
about target tables and columns, the sources used to populate
target columns, and the transforms Data Integrator applies to
sources before applying them to targets. Data Integrator uses the
ALVW_MAPPING view for impact analysis in Metadata Reports.
To ensure mappings are current, generate them. See “Refresh
report content” on page 456 of the Data Integrator Designer Guide.
The column mapping calculation generates the following
information for target columns:
•The source column(s) from which the target column is
mapped.
•The expressions used to populate target columns.
Data Integrator stores column mappings of nested source and
target data in data flows using both the ALVW_MAPPING view
and the AL_COLMAP_NAMES table. For more information, see
“Storing nested column-mapping data” on page 70.
ALVW_MAPPING view

Column Name   Data type     Description
DF_NAME       varchar(64)   Data flow that populates the target table.
TRG_TAB_NAME  varchar(64)   Name of the target table.
TRG_TAB_ID    int           ID for this table within the repository.
TRG_TAB_DESC  varchar(100)  Description of the target table.
TRG_OWNER     varchar(64)   Owner of the target table.
TRG_DS        varchar(64)   Datastore of the target table.
TRG_TYPE      varchar(64)   Type of target. Examples: table, BW transfer
                            structure.
TRG_USAGE     varchar(65)   Usage of the target table. Examples: fact,
                            dimension, lookup. Currently set to NULL.
TRG_COL_NAME  varchar(65)   Column name in the target.
TRG_COL_ID    int           ID for this column in the repository.
TRG_COL_DESC  varchar(100)  Description of this column.
SRC_TAB_NAME  varchar(64)   Name of the source table used to populate the
                            target.
SRC_TAB_ID    int           ID for this table within the repository.
SRC_TAB_DESC  varchar(100)  Description of the source table.
SRC_OWNER     varchar(64)   Owner of the source table.
SRC_DS        varchar(64)   Datastore of the source table.
SRC_TYPE      varchar(64)   Type of source. Examples: table, file.
SRC_COL_NAME  varchar(65)   Name of the source column.
SRC_COL_ID    int           ID for this column in the repository.
SRC_COL_DESC  varchar(100)  Description of this column.
MAPPING_TYPE  varchar(65)   Type of source to target mapping. Examples:
                            direct, computed, lookup.
MAPPING_TEXT  varchar(255)  The expression used to map the source to the
                            target column.
Example use case
The following query returns target tables and columns populated
from the column EMPID in table EMP (in datastore HR):
SELECT TRG_TAB_NAME, TRG_COL_NAME
FROM ALVW_MAPPING
WHERE SRC_TAB_NAME = 'EMP'
AND SRC_COL_NAME = 'EMPID'
AND SRC_DS = 'HR'
Mapping types
The AL_COLMAP_TEXT table contains information qualifying
the mapping relationships. This information, stored in the
MAPPING_TYPE column, can have the following values:

Mapping Type  Description
Direct        The target column is mapped directly from a source
              column with no expression to transform it. For
              example, EMPID (employee ID) mapped directly from
              source to target.
Computed      There is an expression associated with the target
              column. For example, NAME is
              LAST_NAME||’,’||FIRST_NAME.
Generated     There is no source column associated with the target
              column. For example, the target column is mapped to
              a constant or a function, such as sysdate, or is
              obtained from a transform, such as Date_Generation.
LookedUp      A lookup function is used in the expression.
Merged        Two data streams are merged to populate the target
              table. The two expressions mapped to the target table
              are separated by AND.
Not mapped    The column in the target table is not being populated
              by the data flow.
Unknown       Data Integrator is unable to identify the expression
              used to map the target column. This happens only
              under unusual error conditions.
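Extending the example use case above, a similar query can run in
the reverse direction, listing every source column and expression
that feeds a given target column (the table name SALES_FACT
and column name TOTAL_VALUE are hypothetical):
SELECT SRC_TAB_NAME, SRC_COL_NAME, MAPPING_TYPE, MAPPING_TEXT
FROM ALVW_MAPPING
WHERE TRG_TAB_NAME = 'SALES_FACT'
AND TRG_COL_NAME = 'TOTAL_VALUE'
The MAPPING_TYPE and MAPPING_TEXT values then show
whether each contribution is direct, computed, or merged.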
How mappings are computed
When a data flow processes information, it performs potentially
complex transformations on data in preparation for loading it
into one or more target tables. Typical operations include:
•Reading data from the appropriate sources
•Processing data using query transforms or other transforms
•Splitting the data stream and then merging it again
Consider the following example, where two transformations
operate against a value from one column of a source table.
[Figure: a data flow in which the source column Price passes
through Transform 1 (Price x 1.17 maps to LocalPrice) and
Transform 2 (LocalPrice x 112 maps to TotalValue) before
loading the target.]
The information is captured in AL_COLMAP_TEXT as follows:

Target column  Source column  Mapping expression
TotalValue     Price          ((Price x 1.17) x 112)

This kind of information becomes more valuable as
transformation complexity increases. Consider the following
example:
•Data flow DF_1 reads three columns (a, b, c) from source
table S.
•Table S is connected to a query transform Q1.
•The query transform output has four columns (Qa, Qb, Qc,
and Qd) whose mapping expressions are S.a, S.b, S.c and S.a
– S.b.
•The output of Q1 is connected to query transform Q2, which
has two columns Q2y and Q2z whose expressions are Qa –
Qb and Qc – Qd.
•The output of Q2 is loaded into target table T, which has two
columns: T1 and T2.
The mapping expressions for target columns T1 and T2 are
computed by starting from the end point (the target object) and
walking back through the list of transforms, with columns of a
transform written in terms of expressions from the previous
transform.
When processing starts on data flow DF_1, it starts with
column T1 of target table T.
The expression for T1 is Q2y, which in turn is Qa – Qb, which
can be written as S.a – S.b. Therefore the mapping expression is
S.a – S.b and column T1 has two source columns: it is mapped
from S.a and S.b. The AL_COLMAP table contains two rows for
the target column to describe the two source columns.
In the case of T2, it is mapped from Qc – Qd, which can be
written as S.c – (S.a – S.b). In this case, there are three rows for
this target column in the AL_COLMAP table, one for each source
column.
Mapping complexities
If a data flow calls another data flow and then loads a target
table, the mappings are expressed in terms of the tables and
columns used within the other data flow. Information is
generated by “drilling down” into the other data flow to
continue the mapping process.
The situation in which the Merge transform is used within a data
flow is a bit more complex, because when two data streams are
merged, there are two ways to populate a target table. This
possibility is captured by separating the mapping expressions
with the keyword AND. For example, a target column could be
populated from S.a AND R.a.
Transforms like Hierarchy_Flattening and Pivot also introduce
complexities in the way columns are mapped.
It is also possible that some target columns are mapped by
constants or expressions that do not use source columns. In this
case there will be no rows in the AL_COLMAP table for the target
column. The mapping expression in the AL_COLMAP_TEXT
table will reflect this.
If a target column is populated with a call to the lookup
function, then its source columns are both the looked up
column and the key used to do the lookup.
Storing nested column-mapping data
Data Integrator calculates column mappings (identifies the
source column(s) and expressions in use to map the target
column) for all data flows including those that use nested data.
The following objects and conditions are supported:
•XML files or messages
•IDOC files or messages
•Custom and adapter functions
•SAP R/3 and PeopleSoft hierarchies
•Column mappings that perform nesting or un-nesting
(target columns mapped from a nested or un-nested data
set)
•Nested columns used as parameters in custom or adapter
functions (including SAP R/3 RFC output parameters, BAPI
function calls, and database stored procedures)
•Embedded data flows
•R/3 data flows
•Correlated columns
You can map a column in a nested schema from a column in
the input schema of the nested schema, or from a column in
the input schema of the parent (or any ancestor) of the
nested schema. If you map a column from an ancestor, the
column is correlated.
Transforms support nested column-mapping data as follows:
•Query transforms process nested data and mappings are
stored in Data Integrator repository tables and views
•Data Integrator allows nested column mappings to pass
through the Merge, Case, and Map_Operation transforms
•Other transforms do not process nested data.
Nested (NRDM) notations that represent column names are
longer than those used for a flat schema column.
•A column in a flat schema is represented by
Table.Column, for example, "mats_emp.empno". Note
that Table may represent a database table, a file, an XML
message or file, an IDOC message or file, and so on.
•A column in a nested schema is represented by
Table.Subschema1…SubschemaN.Column
For example, "personnel.name.given" represents a column of
a nested schema which has three components. The first
component is the Table. The last component is the
Column. The middle components identify the nested levels
in the Table.
Because the TRG_COL_NAME and SRC_COL_NAME columns in
the repository’s ALVW_MAPPING view are VARCHAR(65) and
not big enough to store long NRDM column names, Data
Integrator uses the AL_COLMAP_NAMES table to support
nested data.
AL_COLMAP_NAMES table

Column Name  Data type     Description
DF_NAME      varchar(64)   Data flow that populates a target table.
COL_ID       varchar(65)   Data Integrator generates this value using the
                           following format when a nested column is
                           encountered: “__DI_NESTED_COLNAME_n”,
                           where n is an integer that starts from 1.
COL_NAME     varchar(255)  If Data Integrator generates a COL_ID value,
                           this column stores the original nested column
                           name.
SEQNUM       int           Data Integrator generates this value if more
                           than one set of 255 characters is required to
                           store data in the COL_NAME column. For each
                           set of 255 characters, it generates a new row
                           and a sequence number.

The AL_COLMAP_NAMES table uses the DF_NAME, COL_ID, and
SEQNUM columns as primary keys. The DF_NAME and COL_ID
columns are keyed to the following columns in the
ALVW_MAPPING view:
•DF_NAME is keyed to DF_NAME.
•COL_ID is keyed to SRC_COL_NAME and TRG_COL_NAME.
The AL_COLMAP_NAMES table also provides an internal
mapping mechanism from the COL_ID column to COL_NAME.
For example, if a source column BOOKS.AUTHOR.FIRST_NAME
is mapped into a target column BOOK.AUTHOR_NAME (an
un-nesting is probably in place), you can create a report to query
the relevant column values in the repository.
The TRG_COL_NAME or SRC_COL_NAME columns in the
ALVW_MAPPING view store the COL_ID if the target or source
column is nested. To get the actual column name, look up the
AL_COLMAP_NAMES table using the DF_NAME, COL_ID, and
COL_NAME columns.
Flat or un-nested target or source column names are stored using
the format Column in TRG_COL_NAME and SRC_COL_NAME.
For example, of the three source columns shown below, only the
second one is nested:
SRC_COL_NAME
EMPNO
__DI_NESTED_COLNAME_1
ENAME
The second value is the only one for which Data Integrator
generates a column ID. To find this source column’s real name,
create a report that looks up its COL_NAME from the
AL_COLMAP_NAMES table.
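For example, a sketch of such a report (the data flow name DF_1
is hypothetical; the view and table columns are those
documented above):
SELECT M.TRG_TAB_NAME, M.TRG_COL_NAME, N.COL_NAME
FROM ALVW_MAPPING M, AL_COLMAP_NAMES N
WHERE M.DF_NAME = N.DF_NAME
AND M.SRC_COL_NAME = N.COL_ID
AND M.DF_NAME = 'DF_1'
The join on COL_ID returns the original nested source column
name for every mapped target column in the data flow.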
Supplement for J.D. Edwards
Page 4Chapter 1, under “System Requirements” — Update the JDE
OneWorld section with:
The J.D. Edwards interface supports OneWorld version B7.3 and
compatible versions, including Xe. Data Integrator supports this
application if OneWorld uses one of three underlying databases:
• DB2 for AS/400 (use DETAIL_DB2 or an ODBC datastore
connection using IBM’s iSeries Access)
• Microsoft SQL Server
• Oracle
Product end-of-life
Business Objects encourages customers to update their software
regularly to take advantage of exciting new features. As Business
Objects releases new versions of software, it becomes necessary
to end product support for older versions. Business Objects’
software version support for Data Integrator incorporates the
following end-of-life policy:
•Previous product versions 5.1 and earlier are no longer
maintained.
•Product version 5.2.0 will not be maintained beyond
February 28, 2004.
The most current product releases are versions 6.0, 6.1, and 6.5.
Customers who remain on unsupported versions of Data
Integrator may continue to purchase Support and Maintenance
by paying the appropriate support fees. However, if customers
request bug fixes for unsupported software versions, they will be
charged for the associated time and materials. Customers who
have paid support fees may request the software updates to
which they are entitled as a result of such payments.
Software updates are available from the Business Objects customer
support web site. Registered customer support contacts with
valid user names and passwords can access this service using the
following steps:
1. Go to www.techsupport.businessobjects.com.
2. Click the Enterprise 6 tab.
3. Click Downloads and select Data Integration.
Select the release that you would like to download or order and
follow the instructions.
Documentation bill of materials
The following two tables list Data Integrator books on the
product CD:
NOTE: You must use Adobe Acrobat 5.0 or higher to read
documents on this CD.
Install ButtonItemVersion
Technica l
Manuals
Data Integrator Technical Manuals (DITechnicalManuals.pdf)
Includes the following documents:
•Data Integrator Getting Started Guide (DIGettingStartedGuide.pdf)
Introduces Data Integrator and contains installation procedures.
•Data Integrator Designer Guide (DIDesignerGuide.pdf)
Provides usage information for Data Integrator Designer.
•Data Integrator Administrator Guide (DIAdministratorGuide.pdf)
Provides usage information for Data Integrator Administrator.
•Data Integrator Reference Guide (DIReferenceGuide.pdf)
Provides detailed reference material for Data Integrator Designer.
•Data Integrator Performance Optimization Guide (DIPerformanceGuide.pdf)
Provides advanced and performance tuning information for Data Integrator.
•Data Integrator Advanced Development and Migration Guide
(DIAdvDev_MigrationGuide.pdf) Provides guidelines and options for
migrating applications including information on multi-user functionality
and the use of the central repository for version control.
•Data Integrator Supplement for Oracle Applications
(DISupplement_OracleApps.pdf) Provides information about the license
controlled interface between Data Integrator and Oracle Applications.
•Data Integrator Supplement for J.D. Edwards (DISupplement_JDE.pdf)
Provides information about the license-controlled interfaces between Data
Integrator, J.D. Edwards World, and J.D. Edwards OneWorld.
•Data Integrator Supplement for PeopleSoft (DI
Provides information about the license-controlled interface between Data
Integrator and PeopleSoft.
•Data Integrator Supplement for SAP R/3 (DISupplement_SAP.pdf)
Provides information about license-controlled interfaces between Data
Integrator, SAP R/3, and SAP BW.
•Data Integrator Supplement for Siebel (DISupplement_Siebel.pdf)
Provides information about the license-controlled interface between Data
Integrator and Siebel.
•LiveLoad User’s Guide for Data Integrator (DILiveLoadGuide.pdf), version 6.1
Provides conceptual and implementation information for maintaining a
single data warehouse that is always accessible.
Other Data Integrator stand-alone documents include:
Install button: About This Release
•Data Integrator Release Notes (DIReleaseNotes.pdf)
Describes fixed and known problems in this release (this document).
•Data Integrator Release Summary (DIReleaseSummary.pdf)
Provides an overview of the new features in this release.
Install button: Customer Support Guide
•Global Customer Support Guide (DICustomerSupportGuide.pdf)
Provides the information you need to contact Business Objects Customer
Support and to interact effectively with them. To access the latest
Support Guide, go to:
•Data Integrator Adapter for HTTP User’s Guide (HTTP Adapter User Guide.doc)
Explains how to configure the HTTP adapter, which is installed with every Job
Server. The HTTP adapter allows you to transfer data between Data Integrator
and external applications using the HTTP or HTTPS protocols (a brief
illustrative sketch follows this list).
•Data Integrator Core Tutorial (DICoreTutorial.pdf)
A step-by-step introduction to using the Data Integrator product.
•Data Integrator Adapter Software Development Kit User’s Guide (AdapterSDK.pdf)
Describes guidelines for creating adapters, which integrate Data Integrator with
“information sources”. Includes installation instructions.
•BusinessObjects Universal Metadata Bridge Guide
(universal_metadata_bridge_guide.pdf)
Describes the Universal Metadata Bridge application and provides instructions
about how to create a universe using XML sources.
•RapidMart Development Guide (RapidMartSDK.pdf)
Describes how to develop and maintain a RapidMart.
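To give a concrete sense of the kind of exchange the HTTP adapter supports,
the following is a minimal sketch in Java. It is an illustration only, not
part of the product documentation: the host name, port, path, and XML payload
are hypothetical placeholders, and the actual endpoint configuration is
described in the Data Integrator Adapter for HTTP User’s Guide.

    // Minimal sketch: post an XML message over HTTPS to a Data Integrator
    // HTTP adapter endpoint. All names below (host, port, path, payload)
    // are hypothetical placeholders, not actual adapter defaults.
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HttpAdapterPostSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; take the real values from your
            // Job Server's HTTP adapter configuration.
            URL url = new URL("https://jobserver.example.com:8080/diAdapter/receive");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/xml");
            conn.setDoOutput(true);
            // Write the sample XML message as the request body.
            try (OutputStream out = conn.getOutputStream()) {
                out.write("<order><id>42</id></order>".getBytes("UTF-8"));
            }
            // A 2xx response code indicates the message was accepted.
            System.out.println("Response code: " + conn.getResponseCode());
        }
    }

The same exchange can be made over plain HTTP when the adapter is configured
without SSL.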
Copyright information
TRADEMARKS
The Business Objects logo, BusinessObjects and Rapid Marts are registered trademarks of Business Objects S.A. in
the United States and/or other countries.
Microsoft, Windows, Windows NT, Access, and other names of Microsoft products referenced herein are either
registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
Oracle is a registered trademark of Oracle Corporation. All other names of Oracle products referenced herein are
trademarks or registered trademarks of Oracle Corporation.
DETAIL is a trademark of Striva Technology Ltd.
SAP, R/3, BW, ALE/WEB and ABAP/4 are the registered or unregistered trademarks of SAP AG.
PeopleSoft is a registered trademark of PeopleSoft, Inc.
All other product, brand, and company names mentioned herein are the trademarks of their respective owners.
USE RESTRICTIONS
This software and documentation is commercial computer software under Federal Acquisition regulations, and is
provided only under the Restricted Rights of the Federal Acquisition Regulations applicable to commercial computer
software provided at private expense. The use, duplication, or disclosure by the U.S. Government is subject to
restrictions set forth in subdivision (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at
252.227-7013.
SNMP-specific copyright information
Portions Copyright 1989, 1991, 1992 by Carnegie Mellon University
Portions Derivative Work - 1996, 1998-2000
Portions Copyright 1996, 1998-2000 The Regents of the University of California
All Rights Reserved
Permission to use, copy, modify and distribute this software and its documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appears in all copies and that both that copyright notice and this permission notice appear in
supporting documentation, and that the name of CMU and The Regents of the University of California not be used in advertising or
publicity pertaining to distribution of the software without specific written permission.
CMU AND THE REGENTS OF THE UNIVERSITY OF CALIFORNIA DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL CMU OR THE REGENTS OF
THE UNIVERSITY OF CALIFORNIA BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM THE LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Portions Copyright (c) 2001, Networks Associates Technology, Inc
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions
are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
Neither the name of the NAI Labs nor the names of its contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Portions of this code are copyright (c) 2001, Cambridge Broadband Ltd.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions
are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
The name of Cambridge Broadband Ltd. may not be used to endorse or promote products derived from this software without specific
prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING,
BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
Specifications subject to change without notice. Not responsible for errors or omissions.