INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY
ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN
INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS
ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES
RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER
INTELLECTUAL PROPERTY RIGHT. Intel products are not intended for use in medical, life saving, or life sustaining applications.
Intel may make changes to specifications and product descriptions at any time, without notice.
This Voice API Programming Guide as well as the software described in it is furnished under license and may only be used or copied in accordance
with the terms of the license. The information in this manual is furnished for informational use only, is subject to change without notice, and should not
be construed as a commitment by Intel Corporation. Intel Corporation assumes no responsibility or liability for any errors or inaccuracies that may
appear in this document or any software that may be provided in association with this document.
Except as permitted by such license, no part of this document may be reproduced, stored in a retrieval system, or transmitted in any form or by any
means without express written consent of Intel Corporation.
This revision history summarizes the changes made in each published version of this document.
Document No.: 05-2377-002
Publication Date: June 2005
Description of Revisions:
Application Development Guidelines chapter: Added bullet about digits not always being cleared by dx_clrdigbuf( ) in Tone Detection Considerations section [PTR 33806].
Call Progress Analysis chapter: Added eight new SIT sequences that can be returned by ATDX_CRTNID( ) for DM3 boards in Types of Tones section. Revised values of TID_SIT_NC (freq of first segment changed from 950/1001 to 950/1020) and TID_SIT_VC (freq of first segment changed from 950/1001 to 950/1020) in table of Special Information Tone Sequences (DM3); also added four new SIT sequences to this table. Added note about SRL device mapper functions in Steps to Modify a Tone Definition on DM3 Boards section.
Recording and Playback chapter: Added Recording with the Voice Activity Detector section that describes new modes for dx_reciottdata( ).
Send and Receive FSK Data chapter: Updated Fixed-Line Short Message Service (SMS) section to indicate that fixed-line short message service (SMS) is supported on Springware boards. Updated Library Support on Springware Boards section to indicate that Springware boards in Linux support ADSI two-way FSK and SMS.
Cached Prompt Management chapter: Added sentence to second paragraph about flushing cached prompts in Overview of Cached Prompt Management section. Added second paragraph about flushing cached prompts in Cached Prompt Management Hints and Tips section.
Document No.: 05-2377-001
Publication Date: November 2004
Description of Revisions:
Initial version of document. Much of the information contained in this document was previously published in the Voice API for Linux Operating System Programming Guide (document number 05-1829-001) and the Voice API for Windows Operating Systems Programming Guide (document number 05-1831-002).
This document now supports both Linux and Windows operating systems. When information is specific to an operating system, it is noted.
About This Publication
The following topics provide information about this publication:
• Purpose
• Applicability
• Intended Audience
• How to Use This Publication
• Related Information
Purpose
This publication provides guidelines for building computer telephony applications on Windows*
and Linux* operating systems using the Intel® voice API. Such applications include, but are not
limited to, call routing, voice messaging, interactive voice response, and call center applications.
This publication is a companion guide to the Voice API Library Reference, which provides details
on the functions and parameters in the voice library.
Applicability
This document version (05-2377-002) is published for Intel® Dialogic® System Release 6.1 for
Linux operating system.
This document may also be applicable to later Intel Dialogic system releases, including service
updates, on Linux or Windows. Check the Release Guide for your software release to determine
whether this document is supported.
This document is applicable to Intel Dialogic system releases only. It is not applicable to Intel
NetStructure® Host Media Processing (HMP) software releases. A separate set of voice API
documentation specific to HMP is provided. Check the Release Guide for your software release to
determine what documents are provided with the release.
Intended Audience
This information is intended for:
• Distributors
• System Integrators
• Toolkit Developers
• Independent Software Vendors (ISVs)
• Value Added Resellers (VARs)
• Original Equipment Manufacturers (OEMs)
How to Use This Publication
This document assumes that you are familiar with and have prior experience with Windows or
Linux operating systems and the C programming language. Use this document together with the
following: the Voice API Library Reference, the Standard Runtime Library API Programming Guide, and the Standard Runtime Library API Library Reference.
The information in this guide is organized as follows:
• Chapter 1, “Product Description” introduces the key features of the voice library and provides
a brief description of each feature.
• Chapter 2, “Programming Models” provides a brief overview of supported programming
models.
• Chapter 3, “Device Handling” discusses topics related to devices such as device naming
concepts, how to open and close devices, and how to discover whether a device is Springware
or DM3.
• Chapter 4, “Event Handling” provides information on functions used to handle events.
• Chapter 5, “Error Handling” provides information on handling errors in your application.
• Chapter 6, “Application Development Guidelines” provides programming guidelines and
techniques for developing an application using the voice library. This chapter also discusses
fixed and flexible routing configurations.
• Chapter 7, “Call Progress Analysis” describes the components of call progress analysis in
detail. This chapter also covers differences between Basic Call Progress Analysis and
PerfectCall Call Progress Analysis.
• Chapter 8, “Recording and Playback” discusses playback and recording features, such as
encoding algorithms, play and record API functions, transaction record, and silence
compressed record.
• Chapter 9, “Speed and Volume Control” explains how to control speed and volume of
playback recordings through API functions and data structures.
• Chapter 10, “Send and Receive FSK Data” describes the two-way frequency shift keying
(FSK) feature, the Analog Display Services Interface (ADSI), and API functions for use with
this feature.
• Chapter 11, “Caller ID” describes the caller ID feature, supported formats, and how to enable
it.
• Chapter 12, “Cached Prompt Management” provides information on cached prompts and how
to use cached prompt management in your application.
• Chapter 13, “Global Tone Detection and Generation, and Cadenced Tone Generation”
describes these tone detection and generation features in detail.
• Chapter 14, “Global Dial Pulse Detection” discusses the Global DPD feature, the API
functions for use with this feature, programming guidelines, and example code.
• Chapter 15, “R2/MF Signaling” describes the R2/MF signaling protocol, the API functions for
use with this feature, and programming guidelines.
• Chapter 16, “Syntellect License Automated Attendant” describes the automated attendant capability
provided on Intel telecom products that include a license for the Syntellect Technology Corporation
(STC) patent portfolio.
• Chapter 17, “Building Applications” discusses compiling and linking requirements such as
include files and library files.
Related Information
See the following for more information:
• For details on all voice functions, parameters and data structures in the voice library, see the
Voice API Library Reference.
• For details on the Standard Runtime Library (SRL), supported programming models, and
programming guidelines for building all applications, see the Standard Runtime Library API
Programming Guide. The SRL is a device-independent library that consists of event
management functions and standard attribute functions.
• For details on all functions and data structures in the Standard Runtime Library (SRL) library,
see the Standard Runtime Library API Library Reference.
• For information on the system release, system requirements, software and hardware features,
supported hardware, and release documentation, see the Release Guide for the system release
you are using.
• For details on compatibility issues, restrictions and limitations, known problems, and late-
breaking updates or corrections to the release documentation, see the Release Update.
Be sure to check the Release Update for the system release you are using for any updates or
corrections to this publication. Release Updates are available on the Telecom Support
Resources website at http://resource.intel.com/telecom/support/releases/.
• For details on installing the system software, see the System Release Installation Guide.
• For guidelines on building applications using Global Call software (a common signaling
interface for network-enabled applications, regardless of the signaling protocol needed to
connect to the local telephone network), see the Global Call API Programming Guide.
• For details on all functions and data structures in the Global Call library, see the Global Call
API Library Reference.
• For details on configuration files (including FCD/PCD files) and instructions for configuring
products, see the Configuration Guide for your product or product family.
1. Product Description
This chapter provides information on key voice library features and capabilities. The following
topics are covered:
• Overview
• R4 API
• Call Progress Analysis
• Tone Generation and Detection Features
• Dial Pulse Detection
• Play and Record Features
• Send and Receive FSK Data
• Caller ID
• R2/MF Signaling
• TDM Bus Routing
1.1 Overview
The voice software provides a high-level interface to Intel telecom media processing boards and is
a building block for creating computer telephony applications. It offers a comprehensive set of
features such as dual-tone multifrequency (DTMF) detection, tone signaling, call progress analysis,
playing and recording that supports a number of encoding methods, and much more.
The voice software consists of a C language library of functions, device drivers, and firmware.
The voice library is well integrated with other technology libraries provided by Intel such as fax,
conferencing, and continuous speech processing. This architecture enables you to add new
capability to your voice application over time.
For a list of voice features by product, see the Release Guide for your system release.
1.2 R4 API
The term R4 API (“System Software Release 4 Application Programming Interface”) describes the
direct interface used for creating computer telephony application programs. The R4 API is a rich
set of proprietary APIs for building computer telephony applications on Intel telecom products.
These APIs encompass technologies that include voice, conferencing, fax, and speech. This
document describes the voice API.
In addition to original Springware products (also known as earlier-generation products), the R4
API supports a new generation of hardware products that are based on the DM3 mediastream
architecture. Feature differences between these two categories of products are noted.
DM3 boards is a collective name used in this document to refer to products that are based on the
DM3 mediastream architecture. DM3 board names typically are prefaced with “DM,” such as the
Intel NetStructure® DM/V2400A. Springware boards refer to boards based on earlier-generation
architecture. Springware boards typically are prefaced with “D,” such as the Intel® Dialogic®
D/240JCT-T1.
In this document, the term voice API is used to refer to the R4 voice API.
1.3 Call Progress Analysis
Call progress analysis monitors the progress of an outbound call after it is dialed into the Public
Switched Telephone Network (PSTN).
There are two forms of call progress analysis: basic and PerfectCall. PerfectCall call progress
analysis uses an improved method of signal identification and can detect fax machines and
answering machines. Basic call progress analysis provides backward compatibility for older
applications written before PerfectCall call progress analysis became available.
Note: PerfectCall call progress analysis was formerly called enhanced call analysis.
See Chapter 7, “Call Progress Analysis” for detailed information about this feature.
1.4 Tone Generation and Detection Features
In addition to DTMF and MF tone detection and generation, the following signaling features are
provided by the voice library:
• Global Tone Detection (GTD)
• Global Tone Generation (GTG)
• Cadenced Tone Generation
1.4.1 Global Tone Detection (GTD)
Global tone detection allows you to define single- or dual-frequency tones for detection on a
channel-by-channel basis. Global tone detection and GTD tones are also known as user-defined tone detection and user-defined tones.
Use global tone detection to detect single- or dual-frequency tones outside the standard DTMF
range of 0-9, a-d, *, and #. The characteristics of a tone can be defined and tone detection can be
enabled using GTD functions and data structures provided in the voice library.
See Chapter 13, “Global Tone Detection and Generation, and Cadenced Tone Generation” for
detailed information about global tone detection.
1.4.2 Global Tone Generation (GTG)
Global tone generation allows you to define a single- or dual-frequency tone in a tone generation
template and to play the tone on a specified channel.
See Chapter 13, “Global Tone Detection and Generation, and Cadenced Tone Generation” for
detailed information about global tone generation.
1.4.3 Cadenced Tone Generation
Cadenced tone generation is an enhancement to global tone generation. It allows you to generate a
tone with up to 4 single- or dual-tone elements, each with its own on/off duration, which creates the
signal pattern or cadence. You can define your own custom cadenced tone or take advantage of the
built-in set of standard PBX call progress signals, such as dial tone, ringback, and busy.
See Chapter 13, “Global Tone Detection and Generation, and Cadenced Tone Generation” for
detailed information about cadenced tone generation.
1.5 Dial Pulse Detection
Dial pulse detection (DPD) allows applications to detect dial pulses from rotary or pulse phones by
detecting the audible clicks produced when a number is dialed, and to use these clicks as if they
were DTMF digits. Global dial pulse detection, called global DPD, is a software-based dial pulse
detection method that can use country-customized parameters for extremely accurate performance.
See Chapter 14, “Global Dial Pulse Detection” for more information about this feature.
1.6 Play and Record Features
The following play and record features are provided by the voice library:
• Play and Record Functions
• Speed and Volume Control
• Transaction Record
• Silence Compressed Record
• Streaming to Board
• Echo Cancellation Resource
1.6.1 Play and Record Functions
The voice library includes several functions and data structures for recording and playing audio
data. These allow you to digitize and store human voice; then retrieve, convert, and play this digital
information. In addition, you can pause a play currently in progress and resume that same play.
For more information about play and record features, see Chapter 8, “Recording and Playback”.
This chapter also includes information about voice encoding methods supported; see Section 8.5,
“Voice Encoding Methods”, on page 89. For detailed information about play and record functions,
see the Voice API Library Reference.
1.6.2 Speed and Volume Control
The speed and volume control feature allows you to control the speed and volume of a message
being played on a channel, for example, by entering a DTMF tone.
See Chapter 9, “Speed and Volume Control” for more information about this feature.
1.6.3 Transaction Record
The transaction record feature allows voice activity on two channels to be summed and stored in a
single file, or in a combination of files, devices, and memory. This feature is useful in call center
applications where it is necessary to archive a verbal transaction or record a live conversation.
See Chapter 8, “Recording and Playback” for more information on the transaction record feature.
1.6.4 Silence Compressed Record
The silence compressed record (SCR) feature enables recording with silent pauses eliminated. This
results in smaller recorded files with no loss of intelligibility.
When the audio level is at or falls below the silence threshold for a minimum duration of time,
silence compressed record begins. If a short burst of noise (glitch) is detected, the compression
does not end unless the glitch is longer than a specified period of time.
See Chapter 8, “Recording and Playback” for more information.
1.6.5 Streaming to Board
The streaming to board feature allows you to stream data to a network interface in real time. Unlike
the standard voice play feature (store and forward), data can be streamed in real time with little
delay as the amount of initial data required to start the stream is configurable. The streaming to
board feature is essential for applications such as text-to-speech, distributed prompt servers, and IP
gateways.
For more information about this feature, see Chapter 8, “Recording and Playback”.
1.6.6 Echo Cancellation Resource
The echo cancellation resource (ECR) feature enables a voice channel to dynamically perform echo
cancellation on any external TDM bus time slot signal.
Note: The ECR feature has been replaced with continuous speech processing (CSP). Although the CSP
API is related to the voice API, it is provided as a separate product. The continuous speech
processing software is a significant enhancement to ECR. The continuous speech processing
library provides many features such as high-performance echo cancellation, voice energy detection,
barge-in, voice event signaling, pre-speech buffering, full-duplex operation and more. For more
information on this API, see the Continuous Speech Processing documentation.
See Chapter 8, “Recording and Playback” for more information about the ECR feature.
1.7 Send and Receive FSK Data
The send and receive frequency shift keying (FSK) data interface is used for Analog Display
Services Interface (ADSI) and fixed-line short message service, also called small message service,
or SMS. Frequency shift keying is a frequency modulation technique used to send digital data over
voice-band telephone lines. ADSI allows information to be transmitted for display on a display-based
telephone connected to an analog loop start line, and to store and forward SMS messages in
the Public Switched Telephone Network (PSTN). The telephone must be a true ADSI-compliant or
fixed-line SMS-compliant device.
See Chapter 10, “Send and Receive FSK Data” for more information on ADSI, FSK, and SMS.
1.8 Caller ID
An application can enable the caller ID feature on specific channels to process caller ID
information as it is received with an incoming call. Caller ID information can include the calling
party’s directory number (DN), the date and time of the call, and the calling party’s subscriber
name.
See Chapter 11, “Caller ID” for more information about this feature.
1.9 R2/MF Signaling
R2/MF signaling is an international signaling system that is used in Europe and Asia to permit the
transmission of numerical and other information relating to the called and calling subscribers’
lines.
R2/MF signaling is typically accomplished through the Global Call API. For more information, see
the Global Call documentation set. Chapter 15, “R2/MF Signaling” is provided for reference only.
1.10 TDM Bus Routing
A time division multiplexing (TDM) bus is a technique for transmitting a number of separate
digitized signals simultaneously over a communication medium. TDM bus includes the CT Bus
and SCbus.
The CT Bus is an implementation of the computer telephony bus standard developed by the
Enterprise Computer Telephony Forum (ECTF) and accepted industry-wide. The H.100 hardware
specification covers CT Bus implementation using the PCI form factor. The H.110 hardware
specification covers CT Bus implementation using the CompactPCI (cPCI) form factor. The CT
Bus has 4096 bi-directional time slots.
The SCbus or signal computing bus connects Signal Computing System Architecture (SCSA)
resources. The SCbus has 1024 bi-directional time slots.
A TDM bus connects voice, telephone network interface, fax, and other technology resource
boards together. TDM bus boards are treated as board devices with on-board voice and/or
telephone network interface devices that are identified by a board and channel (time slot for digital
network channels) designation, such as a voice channel, analog channel, or digital channel.
For information on TDM bus routing functions, see the Voice API Library Reference.
Note: When you see a reference to the SCbus or SCbus routing, the information also applies to the CT
Bus on DM3 products. That is, the physical interboard connection can be either SCbus or CT Bus.
The SCbus protocol is used and the TDM routing API (previously called the SCbus routing API)
applies to all the boards regardless of whether they use an SCbus or CT Bus physical interboard
connection.
2. Programming Models
This chapter briefly discusses the Standard Runtime Library and the supported programming models.
2.1 Standard Runtime Library
The Standard Runtime Library (SRL) provides a set of common system functions that are device
independent and are applicable to all Intel® telecom devices. The SRL consists of a data structure,
event management functions, device management functions (called standard attribute functions),
and device mapper functions. You can use the SRL to simplify application development, such as by
writing common event handlers to be used by all devices.
When developing voice processing applications, refer to the Standard Runtime Library
documentation in tandem with the voice library documentation. For more information on the
Standard Runtime Library, see the Standard Runtime Library API Library Reference and the
Standard Runtime Library API Programming Guide.
2.2 Asynchronous Programming Models
Asynchronous programming enables a single program to control multiple voice channels within a
single process. This allows the development of complex applications where multiple tasks must be
coordinated simultaneously.
The asynchronous programming model uses functions that do not block thread execution; that is,
the function continues processing under the hood. A Standard Runtime Library (SRL) event later
indicates function completion.
Generally, if you are building applications that use any significant density, you should use the
asynchronous programming model to develop field solutions.
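As an illustration of the asynchronous model, the following minimal sketch starts a play that returns immediately and then waits for the corresponding SRL completion event. It is a sketch only: the channel handle is assumed to come from an earlier dx_open( ), "prompt.vox" is a placeholder file name, and the convenience function dx_playf( ) is used to keep the example short.

#include <stdio.h>
#include <srllib.h>
#include <dxxxlib.h>

int play_prompt_async(int chdev)
{
    DV_TPT tpt;

    dx_clrtpt(&tpt, 1);
    tpt.tp_type   = IO_EOT;       /* single-entry termination list    */
    tpt.tp_termno = DX_MAXDTMF;   /* stop the play if a digit arrives */
    tpt.tp_length = 1;

    /* In asynchronous mode the call returns at once; completion is
     * reported later as a TDX_PLAY event on this device. */
    if (dx_playf(chdev, "prompt.vox", &tpt, EV_ASYNC) == -1) {
        printf("dx_playf error %ld: %s\n",
               ATDV_LASTERR(chdev), ATDV_ERRMSGP(chdev));
        return -1;
    }

    /* The thread is free to service other channels here. Eventually it
     * collects the completion event through the SRL. */
    for (;;) {
        if (sr_waitevt(-1) == -1)     /* wait indefinitely for an event */
            continue;
        if (sr_getevtdev() == chdev && sr_getevttype() == TDX_PLAY) {
            printf("play done, termination mask 0x%lx\n",
                   (unsigned long) ATDX_TERMMSK(chdev));
            return 0;
        }
    }
}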
For complete information on asynchronous programming models, see the Standard Runtime Library API Programming Guide.
2.3 Synchronous Programming Model
The synchronous programming model uses functions that block application execution until the
function completes. This model requires that each channel be controlled from a separate process.
This allows you to assign distinct applications to different channels dynamically in real time.
Synchronous programming models allow you to scale an application by simply instantiating more
threads or processes (one per channel). This programming model may be easy to code and
manage, but it relies on the system to manage scalability. Applying the synchronous programming
model can consume large amounts of system overhead, which reduces the achievable densities and
negatively impacts timely servicing of both hardware and software interrupts. Using this model, a
developer can only solve system performance issues by adding memory or increasing CPU speed
or both. The synchronous programming models may be useful for testing or very low-density
solutions.
For complete information on synchronous programming models, see the Standard Runtime Library API Programming Guide.
3. Device Handling
This chapter describes the concept of a voice device and how voice devices are named and used.
3.1 Device Concepts
The following concepts are key to understanding devices and device handling:
device
A device is a computer component controlled through a software device driver. A resource
board, such as a voice resource, fax resource, and conferencing resource, and network
interface board, contains one or more logical board devices. Each channel or time slot on the
board is also considered a device.
device channel
A device channel refers to a data path that processes one incoming or outgoing call at a time
(equivalent to the terminal equipment terminating a phone line). The first two numbers in the
product naming scheme identify the number of device channels for a given product. For
example, there are 24 voice device channels on a D/240JCT-T1 board, 30 on a D/300JCT-E1.
device name
A device name is a literal reference to a device, used to gain access to the device via an
xx_open( ) function, where “xx” is the prefix defining the device to be opened. For example,
“dx” is the prefix for voice device and “fx” for fax device.
device handle
A device handle is a numerical reference to a device, obtained when a device is opened using
xx_open( ), where “xx” is the prefix defining the device to be opened. The device handle is
used for all operations on that device.
physical and virtual boards
The API functions distinguish between physical boards and virtual boards. The device driver
views a single physical voice board with more than four channels as multiple emulated D/4x
boards. These emulated boards are called virtual boards. For example, a D/120JCT-LS with 12
channels of voice processing contains three virtual boards. A DM/V480A-2T1 board with 48
channels of voice processing and two T1 trunk lines contains 12 virtual voice boards and two
virtual network interface boards.
3.2 Voice Device Names
The software assigns a device name to each device or each component on a board. A voice device is
named dxxxBn, where n is the device number assigned in sequential order down the list of sorted
voice boards. A device corresponds to a grouping of two or four voice channels.
For example, a D/240JCT-T1 board employs 24 voice channels; the software therefore divides the
D/240JCT into six voice board devices, each device consisting of four channels. Examples of board
device names for voice boards are dxxxB1 and dxxxB2.
A device name can be appended with a channel or component identifier. A voice channel device is
named dxxxBnCy, where y corresponds to one of the voice channels. Examples of channel device
names for voice boards are dxxxB1C1 and dxxxB1C2.
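The following brief sketch, assuming the board and channel names shown above exist on the system, opens a virtual board device and one of its channel devices by name. ATDV_SUBDEVS( ) is a standard attribute function from the SRL; real applications typically obtain device names from configuration data or from the SRL device mapper functions.

#include <stdio.h>
#include <srllib.h>
#include <dxxxlib.h>

int open_first_channel(void)
{
    int brddev, chdev;

    /* Open the first virtual voice board device. */
    if ((brddev = dx_open("dxxxB1", 0)) == -1) {
        printf("cannot open board device dxxxB1\n");
        return -1;
    }
    printf("dxxxB1 has %ld channel subdevices\n", ATDV_SUBDEVS(brddev));

    /* Open the first channel device on that board; this handle is then
     * used for all play, record, and digit operations on the channel. */
    if ((chdev = dx_open("dxxxB1C1", 0)) == -1) {
        printf("cannot open channel device dxxxB1C1\n");
        dx_close(brddev);
        return -1;
    }
    return chdev;
}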
A physical board device handle is a numerical reference to a physical board. A physical board
device handle is a concept introduced in System Release 6.0. Previously there was no way to
identify a physical board but only the virtual boards that make up the physical board. Having a
physical board device handle enables API functions to act on all devices on the physical board. The
physical board device handle is named brdBn, where n is the device number. As an example, the
physical board device handle is used in cached prompt management.
Use the Standard Runtime Library device mapper functions to retrieve information on all devices in
a system, including a list of physical boards, virtual boards on a physical board, and subdevices on
a virtual board.
For complete information on device handling, see the Standard Runtime Library API Programming Guide.
4. Event Handling
This chapter provides information on functions used to retrieve and handle events.
4.1 Overview of Event Handling
An event indicates that a specific activity has occurred on a channel. The voice driver reports
channel activity to the application program in the form of events, which allows the program to
identify and respond to a specific occurrence on a channel. Events provide feedback on the
progress and completion of functions and indicate the occurrence of other channel activities. Voice
library events are defined in the dxxxlib.h header file.
For a list of events that may be returned by the voice software, see the Voice API Library Reference.
4.2 Event Management Functions
Event management functions are used to retrieve and handle events being sent to the application
from the firmware. These functions are contained in the Standard Runtime Library (SRL) and
defined in srllib.h. The SRL provides a set of common system functions that are device
independent and are applicable to all Intel® telecom devices. For more information on event
management and event handling, see the Standard Runtime Library API Programming Guide.
Event management functions include:
• sr_enbhdlr( )
• sr_dishdlr( )
• sr_getevtdev( )
• sr_getevttype( )
• sr_getevtlen( )
• sr_getevtdatap( )
For details on SRL functions, see the Standard Runtime Library API Library Reference.
The event management functions retrieve and handle voice device termination events for functions
that run in asynchronous mode, such as dx_dial( ) and dx_play( ). For complete function reference
information, see the Voice API Library Reference.
The event management functions applicable to voice devices are listed in the following tables.
Table 1 lists values that are required by event management functions. Table 2 lists values that are
returned by event management functions used with voice devices.
Table 1. Voice Device Inputs for Event Management Functions

sr_enbhdlr( ) (Enable event handler)
  Voice Device Input: evt_type
  Valid Values and Related Voice Functions:
    TDX_PLAY       dx_play( )
    TDX_PLAYTONE   dx_playtone( )
    TDX_RECORD     dx_rec( )
    TDX_GETDIG     dx_getdig( )
    TDX_DIAL       dx_dial( )
    TDX_CALLP      dx_dial( )
    TDX_SETHOOK    dx_sethook( )
    TDX_WINK       dx_wink( )
    TDX_ERROR      All asynchronous functions

sr_dishdlr( ) (Disable event handler)
  Voice Device Input: evt_type
  Valid Values and Related Voice Functions: As above

Table 2. Voice Device Returns from Event Management Functions

sr_getevtdev( ) (Get device handle)
  Return: device (voice device handle)

sr_getevttype( ) (Get event type)
  Return: event type
  Returned Values and Related Voice Functions:
    TDX_PLAY       dx_play( )
    TDX_PLAYTONE   dx_playtone( )
    TDX_RECORD     dx_rec( )
    TDX_GETDIG     dx_getdig( )
    TDX_DIAL       dx_dial( )
    TDX_CALLP      dx_dial( )
    TDX_CST        dx_setevtmsk( )
    TDX_SETHOOK    dx_sethook( )
    TDX_WINK       dx_wink( )
    TDX_ERROR      All asynchronous functions

sr_getevtlen( ) (Get event data length)
  Return: event length (sizeof(DX_CST))

sr_getevtdatap( ) (Get pointer to event data)
  Return: event data (pointer to DX_CST structure)
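A minimal sketch of an event dispatch loop built from the Table 2 retrieval functions follows. It assumes that asynchronous voice functions have already been started on one or more open channels, and it uses the classic no-argument form of the retrieval calls; see the Standard Runtime Library API Library Reference for the exact prototypes.

#include <stdio.h>
#include <srllib.h>
#include <dxxxlib.h>

void event_loop(void)
{
    int  dev;
    long type;

    for (;;) {
        if (sr_waitevt(5000) == -1)    /* wait up to 5 seconds           */
            continue;                  /* timeout: nothing to dispatch   */

        dev  = sr_getevtdev();         /* which device produced it?      */
        type = sr_getevttype();        /* which event is it?             */

        switch (type) {
        case TDX_PLAY:
            printf("play complete on device %d\n", dev);
            break;
        case TDX_GETDIG:
            printf("digit collection complete on device %d\n", dev);
            break;
        case TDX_CST: {
            /* Call status transition: event data points to a DX_CST
             * structure (fields described in the Voice API Library
             * Reference). */
            DX_CST *cstp = (DX_CST *) sr_getevtdatap();
            (void) cstp;
            break;
        }
        case TDX_ERROR:
            printf("error event on device %d: %s\n",
                   dev, ATDV_ERRMSGP(dev));
            break;
        default:
            break;
        }
    }
}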
5. Error Handling
This chapter discusses how to handle errors that can occur when running an application.
All voice library functions return a value to indicate success or failure of the function. A return
value of zero or a non-negative number indicates success. A return value of -1 indicates failure.
If a voice library function fails, call the standard attribute functions ATDV_LASTERR( ) and
ATDV_ERRMSGP( ) to determine the reason for failure. For more information on these
functions, see the Standard Runtime Library API Library Reference.
If an extended attribute function fails, two types of errors can be generated. An extended attribute
function that returns a pointer will produce a pointer to the ASCIIZ string “Unknown device” if it
fails. An extended attribute function that does not return a pointer will produce a value of
AT_FAILURE if it fails. Extended attribute functions for the voice library are prefaced with
“ATDX_”.
Notes: 1. The dx_open( ) and dx_close( ) functions are exceptions to the above error handling rules. On
Linux, if these functions fail, the return code is -1, and the specific error is found in the errno
variable contained in errno.h. On Windows, if these functions fail, the return code is -1. Use
dx_fileerrno( ) to obtain the system error value.
2. If ATDV_LASTERR( ) returns the EDX_SYSTEM error code, an operating system error has
occurred. On Linux, check the global variable errno contained in errno.h. On Windows, use
dx_fileerrno( ) to obtain the system error value.
For a list of errors that can be returned by a voice library function, see the Voice API Library
Reference. You can also look up the error codes in the dxxxlib.h file.
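A minimal sketch of this error-handling pattern, assuming a channel handle obtained from dx_open( ) and a placeholder dial string: the return value is checked, EDX_SYSTEM is treated as an operating system error per the notes above, and any other failure is reported through the standard attribute functions.

#include <stdio.h>
#include <srllib.h>
#include <dxxxlib.h>

void dial_with_error_check(int chdev)
{
    /* Dial without call progress analysis (NULL DX_CAP pointer). */
    if (dx_dial(chdev, "5551234", NULL, EV_SYNC) == -1) {
        long errcode = ATDV_LASTERR(chdev);

        if (errcode == EDX_SYSTEM) {
            /* Operating system error: check errno on Linux or call
             * dx_fileerrno( ) on Windows, as described in the notes above. */
            printf("dx_dial: operating system error on device %d\n", chdev);
        } else {
            printf("dx_dial failed on device %d, error %ld: %s\n",
                   chdev, errcode, ATDV_ERRMSGP(chdev));
        }
    }
}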
6. Application Development Guidelines
This chapter provides programming guidelines and techniques for developing an application using
the voice library.
6.1 General Programming Considerations
The following considerations apply to all applications written using the voice API:
• Busy and Idle States
• Setting Termination Conditions for I/O Functions
• Setting Termination Conditions for Digits
• Clearing Structures Before Use
• Working with User-Defined I/O Functions
See feature chapters for programming guidelines specific to a feature, such as call progress
analysis, recording and playback, and so on.
6.1.1 Busy and Idle States
The operation of some library functions is dependent on the state of the device when the function
call is made. A device is in an idle state when it is not being used, and in a busy state when it is
dialing, stopped, being configured, or being used for other I/O functions. Idle represents a single
state; busy represents the set of states that a device may be in when it is not idle. State-dependent
functions do not make a distinction between the individual states represented by the term busy.
They only distinguish between idle and busy states.
For more information on categories of functions and their description, see the Voice API Library Reference.
6.1.2 Setting Termination Conditions for I/O Functions
When an I/O function is issued, you must pass a set of termination conditions as one of the function
parameters. Termination conditions are events monitored during the I/O process that will cause an
I/O function to terminate. When the termination condition is met, a termination reason is returned
by ATDX_TERMMSK( ). If the I/O function is running in synchronous mode, the
ATDX_TERMMSK( ) function returns a termination reason after the I/O function has completed.
If the I/O function is running in asynchronous mode, the ATDX_TERMMSK( ) function returns a
termination reason after the function termination event has arrived. I/O functions can terminate
under several conditions as described later in this section.
You can predict events that will occur during I/O (such as a digit being received or the call being
disconnected) and set termination conditions accordingly. The flow of control in a voice
application is based on the termination condition. Setting these conditions properly allows you to
build voice applications that can anticipate a caller's actions.
To set the termination conditions, values are placed in fields of a DV_TPT structure. If you set
more than one termination condition, the first one that occurs will terminate the I/O function. The
DV_TPT structures can be configured as a linked list or array, with each DV_TPT specifying a
single terminating condition. For more information on the DV_TPT structure, which is defined in
srllib.h, see the Voice API Library Reference.
The termination conditions are described in the following paragraphs.
byte transfer count
This termination condition applies when playing or recording a file with dx_play( ) or
dx_rec( ). The maximum number of bytes is set in the DX_IOTT structure. This condition will
cause termination if the maximum number of bytes is used before one of the termination
conditions specified in the DV_TPT occurs. For information about setting the number of bytes
in the DX_IOTT, see the Voice API Library Reference.
dx_stopch( ) occurred
The dx_stopch( ) function will terminate any I/O function, except dx_dial( ) (with call
progress analysis disabled) or dx_wink( ), and stop the device. See the dx_stopch( ) function
description for more detailed information about this function.
end of file reached
This termination condition applies when playing a file. This condition will cause termination if
-1 has been specified in the io_length field of the DX_IOTT, and no other termination
condition has occurred before the end of the file is reached. For information about setting the
DX_IOTT, see the Voice API Library Reference. When this termination condition is met, a
TM_EOD termination reason is returned from ATDX_TERMMSK( ).
loop current drop (DX_LCOFF)
This termination condition is not supported on DM3 boards using the voice library; however
support is available via call control API. For more information, see the Global Call Analog Technology User’s Guide.
In some central offices, switches, and PBXs, a drop in loop current indicates disconnect
supervision. An I/O function can terminate if the loop current drops for a specified amount of
time. The amount of time is specified in the tp_length field of a DV_TPT structure. The
amount of time can be specified in 100 msec units (default) or 10 msec units. 10 msec can be
specified in the tp_flags field of the DV_TPT. When this termination condition is met, a
TM_LCOFF termination reason is returned from ATDX_TERMMSK( ).
maximum delay between digits (DX_IDDTIME)
This termination condition monitors the length of time between the digits being received. A
specific length of time can be placed in the tp_length field of a DV_TPT. If the time between
receiving digits is more than this period of time, the function terminates. The amount of time
can be specified in 100 msec units (default) or 10 msec units. 10 msec units can be specified in
the tp_flags field of the DV_TPT. When this termination condition is met, a TM_IDDTIME
termination reason is returned from ATDX_TERMMSK( ).
On DM3 boards, this termination condition is only supported by the dx_getdig( ) function.
maximum digits received (DX_MAXDTMF)
This termination condition counts the number of digits in the channel's digit buffer. If the
buffer is not empty before the I/O function is called, the digits that are present in the buffer
when the function is initiated are counted as well. The maximum number of digits to receive is
set by placing a number from 1 to 31 in the tp_length field of a DV_TPT. This value specifies
the number of digits allowed in the buffer before termination. When this termination condition
is met, a TM_MAXDTMF termination reason is returned from ATDX_TERMMSK( ).
maximum length of non-silence (DX_MAXNOSIL)
This termination condition is not supported on DM3 boards.
Non-silence is the absence of silence: noise or meaningful sound, such as a person speaking.
This condition is enabled by setting the tp_length field of a DV_TPT to a specific period of
time. When non-silence is detected for this length of time, the I/O function will terminate. This
termination condition is frequently used to detect dial tone, or the howler tone that is used by
central offices to indicate that a phone has been off-hook for an extended period of time. The
amount of time can be specified in 100 msec units (default) or 10 msec units. 10 msec units
can be specified in the tp_flags field of the DV_TPT. When this termination condition is met, a
TM_MAXNOSIL termination reason is returned from ATDX_TERMMSK( ).
maximum length of silence (DX_MAXSIL)
This termination condition is enabled by setting the tp_length field of a DV_TPT. The
specified value is the length of time that continuous silence will be detected before it
terminates the I/O function. The amount of time can be specified in 100 msec units (default) or
10 msec units. 10 msec units can be specified in the tp_flags field of the DV_TPT. When this
termination condition is met, a TM_MAXSIL termination reason is returned from
ATDX_TERMMSK( ).
pattern of silence and non-silence (DX_PMON and DX_PMOFF)
This termination condition is not supported on DM3 boards.
A known pattern of silence and non-silence can terminate a function. A pattern can be
specified by using DX_PMON and DX_PMOFF in the tp_termno field in two separate
DV_TPT structures, where one represents a period of silence and one represents a period of
non-silence. When this termination condition is met, a TM_PATTERN termination reason is
returned from ATDX_TERMMSK( ).
DX_PMOFF and DX_PMON termination conditions must be used together. The DX_PMON
terminating condition must directly follow the DX_PMOFF terminating condition. A
combination of both DV_TPT structures using these conditions is used to form a single
termination condition. For more information, see the DV_TPT structure in the Voice API Library Reference.
specific digit received (DX_DIGMASK)
Digits received during an I/O function are collected in a channel's digit buffer. If the buffer is
not empty before an I/O function executes, the digits in the buffer are treated as being received
during the I/O execution. This termination condition is enabled by specifying a digit bit mask
in the tp_length field of a DV_TPT structure. If any digit specified in the bit mask appears in
the digit buffer, the I/O function will terminate. When this termination condition is met, a
TM_DIGIT termination reason is returned from ATDX_TERMMSK( ).
On DM3 boards, using more than one DV_TPT structure for detecting different digits is not
supported. Instead, use one DV_TPT structure, set DX_DIGMASK in the tp_termno field, and
bitwise-OR "DM_1 | DM_2" in the tp_length field. For uniformity, it is also strongly
recommended to use the same method to detect different digits on Springware boards.
maximum function time (DX_MAXTIME)
A time limit may be placed on the execution of an I/O function. The tp_length field of a
DV_TPT can be set to a specific length of time in 100 msec units. The I/O function will
terminate when it executes longer than this period of time. The amount of time can be
specified in 100 msec units (default) or 10 msec units. 10 msec units can be specified in the
tp_flags field of the DV_TPT. When this termination condition is met, a TM_MAXTIME
termination reason is returned from ATDX_TERMMSK( ).
On DM3 boards, DX_MAXTIME is not supported by tone generation functions such as
dx_playtone( ) and dx_playtoneEx( ).
user-defined digit received (DX_DIGTYPE)
User-defined digits received during an I/O function are collected in a channel's digit buffer. If
the buffer is not empty before an I/O function executes, the digits in the buffer are treated as
being received during the I/O execution. This termination condition is enabled by specifying
the digit and digit type in the tp_length field of a DV_TPT structure. If any digit specified in
the bit mask appears in the digit buffer, the I/O function will terminate. When this termination
condition is met, a TM_DIGIT termination reason is returned from ATDX_TERMMSK( ).
user-defined tone on/off event detected (DX_TONE)
This termination condition is used with global tone detection. Before specifying a user-defined
tone as a termination condition, the tone must first be defined using the GTD dx_bld...( )
functions, and tone detection on the channel must be enabled using the dx_addtone( ) or
dx_enbtone( ) function. To set tone on/off to be a termination condition, specify DX_TONE in
the tp_termno field of the DV_TPT. You must also specify DX_TONEON or DX_TONEOFF
in the tp_data field. When this termination condition is met, a TM_TONE termination reason
is returned from ATDX_TERMMSK( ).
maximum FSK data received (DX_MAXDATA)
This termination condition is used with ADSI 2-way FSK functions only. It specifies the
maximum data for ADSI 2-way FSK. A transmit/receive FSK session is terminated when the
specified value of DX_MAXDATA (in bytes) is transmitted/received. When this termination
condition is met, a TM_MAXDATA termination reason is returned from
ATDX_TERMMSK( ).
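The sketch below combines two of the conditions just described: the single-structure DX_DIGMASK approach recommended above for DM3 boards, together with an overall DX_MAXTIME limit. The values shown are examples only.

#include <srllib.h>
#include <dxxxlib.h>

void build_digit_or_timeout_tpt(DV_TPT tpt[2])
{
    dx_clrtpt(tpt, 2);                /* clear both entries before use */

    tpt[0].tp_type   = IO_CONT;       /* another entry follows          */
    tpt[0].tp_termno = DX_DIGMASK;    /* terminate on a specific digit  */
    tpt[0].tp_length = DM_1 | DM_2;   /* digit 1 or digit 2             */

    tpt[1].tp_type   = IO_EOT;        /* last entry in the array        */
    tpt[1].tp_termno = DX_MAXTIME;    /* overall time limit             */
    tpt[1].tp_length = 300;           /* 300 x 100 ms = 30 seconds      */
}

/* The array can then be passed to an I/O function such as dx_getdig( )
 * or dx_play( ); afterwards ATDX_TERMMSK( ) reports whether TM_DIGIT or
 * TM_MAXTIME caused the termination. */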
6.1.3 Setting Termination Conditions for Digits
To specify a timeout for dx_getdig( ) if the first digit is not received within a specified time period,
use the DX_MAXTIME termination condition in the DV_TPT structure.
To specify an additional timeout if subsequent digits are not received, use the DX_IDDTIME
(interdigit delay) termination condition and the TF_FIRST flag in the DV_TPT structure. The
TF_FIRST flag specifies that the timer will start after the first digit is received; otherwise the timer
starts when the dx_getdig( ) function is called.
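A minimal sketch of these two timeouts, assuming a channel handle opened elsewhere: collect up to four digits, give the function an overall 10 second limit, and allow at most 5 seconds between digits, with TF_FIRST starting the interdigit timer only after the first digit arrives. Time values use the default 100 ms units.

#include <stdio.h>
#include <srllib.h>
#include <dxxxlib.h>

int collect_up_to_four_digits(int chdev)
{
    DV_TPT   tpt[3];
    DV_DIGIT dig;

    dx_clrtpt(tpt, 3);

    tpt[0].tp_type   = IO_CONT;
    tpt[0].tp_termno = DX_MAXDTMF;   /* stop after 4 digits              */
    tpt[0].tp_length = 4;

    tpt[1].tp_type   = IO_CONT;
    tpt[1].tp_termno = DX_MAXTIME;   /* overall limit: 100 x 100 ms = 10 s */
    tpt[1].tp_length = 100;

    tpt[2].tp_type   = IO_EOT;
    tpt[2].tp_termno = DX_IDDTIME;   /* interdigit limit: 50 x 100 ms = 5 s */
    tpt[2].tp_length = 50;
    tpt[2].tp_flags  = TF_FIRST;     /* start timer after first digit    */

    if (dx_getdig(chdev, tpt, &dig, EV_SYNC) == -1) {
        printf("dx_getdig error: %s\n", ATDV_ERRMSGP(chdev));
        return -1;
    }
    printf("digits received: %s (termination mask 0x%lx)\n",
           dig.dg_value, (unsigned long) ATDX_TERMMSK(chdev));
    return 0;
}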
6.1.4 Clearing Structures Before Use
Two library functions are provided to clear structures. dx_clrcap( ) clears DX_CAP structures and
dx_clrtpt( ) clears DV_TPT structures. See the Voice API Library Reference for details.
It is good practice to clear the field values of any structure before using the structure in a function
call. Doing so will help prevent unintentional settings or terminations.
6.1.5 Working with User-Defined I/O Functions
Two library functions are provided to enable you to install user-defined I/O functions (also called
user I/O functions or UIO): dx_setuio( ) and dx_setdevuio( ). For details on these functions, see
the Voice API Library Reference.
The following cautions apply when working with user I/O functions:
• Do not include sleeps, critical sections, or any other delays in the user I/O function.
• Do not call any other Intel Dialogic function inside the user I/O function. One exception is the
ec_getblkinfo( ) function which is called from within a user I/O function. For more
information on this function, see the Continuous Speech Processing API Library Reference.
The reason for these cautions is as follows. On Springware boards, while the user I/O function is
executing, the Standard Runtime Library (SRL) is blocked and cannot process further messages
from the driver. Data will be lost if the driver cannot hand off messages to the SRL. On DM3
boards, you may see chopped audio or underruns. On all boards, be aware that the risk of underruns
increases as density rises.
6.2 Fixed and Flexible Routing Configurations
On DM3 boards, the voice library supports two types of routing configuration as follows:
Note: The routing configuration supported for a board depends on the software release in which the board
is used. See the Release Guide for the software release you are using to determine the routing
configuration supported for your board. See also the Configuration Guide for your product family
for information about media load configuration file sets and routing configuration supported.
fixed routing configuration
This configuration is primarily for backward compatibility with System Release 5.0. The fixed
routing configuration applies only to DM3 boards. With fixed routing, the resource devices
(voice/fax) and network interface devices are permanently coupled together in a fixed
configuration. Only the network interface time slot device has access to the TDM bus. Each
voice resource channel device is permanently routed to a corresponding network interface time
slot device on the same physical board. The routing of these resource and network interface
devices is predefined and static. The resource device also does not have access to the TDM bus
and so cannot be routed independently on the TDM bus. No off-board sharing or exporting of
voice/fax resources is allowed.
flexible routing configuration
This configuration is compatible with R4 API routing on Springware boards; that is,
Springware boards use flexible routing. Flexible routing is available for DM3 boards starting
in System Release 5.01. With flexible routing, the resource devices (voice/fax) and network
interface devices are independent, which allows exporting and sharing of the resources. All
resources have access to the TDM bus. Each voice resource channel device and each network
interface time slot device can be independently routed on the TDM bus. Flexible routing is the
configuration of choice for application development.
These routing configurations are also referred to as cluster configurations, because the routing
capability is based upon the contents of the DM3 cluster.
The fixed routing configuration is one that uses permanently coupled resources, while the flexible
routing configuration uses independent resources. From a DM3 perspective, the fixed routing
cluster is restricted by its coupled resources and the flexible routing cluster allows more freedom by
nature of its independent resources, as shown in Figure 1.
The routing configuration (fixed or flexible) is determined by the firmware file that you assign to
each DM3 board. The routing configuration takes effect at board initialization.
Figure 1. Cluster Configurations for Fixed and Flexible Routing
[Figure not reproduced. It contrasts a Fixed Routing cluster (coupled Voice, Fax, and Network Interface resources, with only the Network Interface attached to the TDM bus) with a Flexible Routing cluster (independent Voice, Fax, and Network Interface resources, each attached to the TDM bus).]
Notes:
1. The R4 Voice Resource includes the DM3 Player, Recorder, Tone Generator, and Signal Detector resources.
2. The Fax Resource is an optional component.
3. The Network Interface is referred to in DM3 terms as the Telephony Service Channel (TSC).
6.3 Fixed Routing Configuration Restrictions
Flexible routing configuration is the configuration of choice for applications using R4 on DM3.
This documentation assumes that the flexible routing configuration is in use unless otherwise
stated. The following restrictions apply when using fixed routing configuration:
• TDM bus voice resource routing is not supported
• TDM bus fax resource routing restricted
• voice, fax, and Global Call resource/device management restricted
Table 3 shows the voice API function restrictions in a fixed routing configuration. For Fax API
restrictions, see the Fax Software Reference. For Global Call API restrictions, see the Global Call API Programming Guide.
Table 3. API Function Restrictions in a Fixed Routing Configuration

dx_close( ): Limitations: Although dx_open( ) and dx_close( ) are operational on DM3 voice devices in a fixed routing configuration, their purpose is extremely limited by nature of the voice resource membership in a DM3 cluster. Instead, you must use the gc_OpenEx( ), gc_GetResourceH( ), and gc_Close( ) functions. See the Global Call API Library Reference for information on these functions.

dx_getxmitslot( ): Not supported. The function fails with error code EDX_SH_MISSING, indicating "Switching Handler is not present".

dx_listen( ): Not supported. The function fails with error code EDX_SH_MISSING, indicating "Switching Handler is not present".

dx_open( ): Limitations: Although dx_open( ) and dx_close( ) are operational on DM3 voice devices in a fixed routing configuration, their purpose is extremely limited by nature of the voice resource membership in a DM3 cluster. Instead, you must use the gc_OpenEx( ), gc_GetResourceH( ), and gc_Close( ) functions. See the Global Call API Library Reference for information on these functions.

dx_unlisten( ): Not supported. The function fails with error code EDX_SH_MISSING, indicating "Switching Handler is not present".

nr_scroute( ): Limitations: Does not support voice, fax, analog network interface devices (LSI), or MSI devices. Supports DTI devices only.

nr_scunroute( ): Limitations: Does not support voice, fax, analog network interface devices (LSI), or MSI devices. Supports DTI devices only.
6.4 Additional DM3 Considerations
The following information provides programming guidelines and considerations for developing
voice applications on DM3 boards:
• Call Control Through Global Call API Library
• Multithreading and Multiprocessing
• DM3 Media Loads
• Device Discovery for DM3 and Springware
• Device Initialization Hint
• TDM Bus Time Slot Considerations
• Tone Detection Considerations
6.4.1 Call Control Through Global Call API Library
Call state functions such as dx_wink( ) and board-level parameters such as DXBD_R_ON and
DXBD_R_OFF which are used in digital connections do not apply in DM3 applications.
Similarly, hook state functions such as dx_sethook( ) and dx_wtring( ) and settings such as
DM_RINGS which are used in analog connections do not apply in DM3 applications.
Instead, these call control type functions are typically performed by the Global Call API Library.
For more information on setting up call control, see the Global Call API Programming Guide and
the Global Call API Library Reference.
As another example, the DX_LCOFF termination condition is not supported on DM3 boards using
the voice library; however support is available via call control API. For more information, see the
Global Call Analog Technology User’s Guide.
6.4.2 Multithreading and Multiprocessing
The voice API supports multithreading and multiprocessing on the board level but not on the
channel level on DM3 boards.
The following restrictions apply:
• A channel can only be opened in one process at a time; the same channel cannot be used by
more than one process concurrently. However, multiple processes can access different sets of
channels. Ensure that each process is provided with a unique set of devices to manipulate.
• If a channel is opened in process A and then closed, process B is allowed to open the same
channel. However, you should avoid this type of sequence. Since closing a channel is an
asynchronous operation on DM3 boards, there is a small gap between the time when the
xx_close( ) function returns in process A and the time when process B is allowed to open the
same channel. If process B opens the channel too early, unpredictable results may occur.
• Multiple processes that define tones (GTD or GTG) do not share tone definitions in the
firmware. For example, if you define tone A in process 1 for channel dxxxB1C1 on a DM3
board and the same tone A in process 2 for channel dxxxB1C1 on the same DM3 board, two
firmware tones are consumed on the board. In other words, the same tone defined from
different processes is not shared in the firmware; hence this limits the number of tones that can
be created overall. For more information, see Chapter 13, “Global Tone Detection and
Generation, and Cadenced Tone Generation”.
It is recommended that you develop your application using a single thread per span or a single
thread per board rather than a single thread per channel. For more information on programming
models and performance considerations, see the Standard Runtime Library API Programming Guide.
6.4.3 DM3 Media Loads
Different configurations for DM3 products are supported in the form of media loads. For instance,
a specific media load is available for users who need to implement continuous speech processing
(CSP) and conferencing in their applications. See the appropriate Configuration Guide for specific
media loads that are available.
6.4.4 Device Discovery for DM3 and Springware
Applications that use both Springware and DM3 devices must have a way of differentiating what
type of device is to be opened. The TDM bus routing functions such as dx_getctinfo( ) provide a
programming solution. DM3 hardware is identified by the CT_DFDM3 value in the ct_devfamily
field of the CT_DEVINFO structure. Only DM3 devices will have this field set to CT_DFDM3.
For more information on the dx_getctinfo( ) function and the CT_DEVINFO structure, see the
Voice API Library Reference.
Note: Use SRL device mapper functions to return information about the structure of the system. For
information on these functions, see the Standard Runtime Library API Library Reference.
The following procedure shows how to initialize an application and perform device discovery when
the application supports both DM3 and Springware boards.
1. Open the first voice channel device on the first voice board in the system with dx_open( ).
2. Call dx_getctinfo( ) and check the CT_DEVINFO.ct_devfamily value.
3. If ct_devfamily is CT_DFDM3, then flag all the voice channel devices associated with the
board as DM3 type.
4. Close the voice channel with dx_close( ).
5. Repeat steps 1 to 4 for each voice board.
For information on initializing the Global Call API on DM3 devices, see the Global Call API Programming Guide.
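As an illustration, the following C sketch walks through this procedure for a small system. The board count, the dxxxBnC1 naming loop, and the board_is_dm3 bookkeeping array are assumptions made for this example only; a production application would add full error handling.

#include <stdio.h>
#include <srllib.h>
#include <dxxxlib.h>

#define NUM_BOARDS  2                /* assumption: adjust for your system */

int board_is_dm3[NUM_BOARDS];        /* 1 = DM3 board, 0 = Springware board */

void discover_boards(void)
{
    CT_DEVINFO ctinfo;
    char name[32];
    int brd, chdev;

    for (brd = 0; brd < NUM_BOARDS; brd++) {
        /* Step 1: open the first voice channel device on this board. */
        sprintf(name, "dxxxB%dC1", brd + 1);
        if ((chdev = dx_open(name, 0)) == -1)
            continue;                           /* board not present */

        /* Steps 2 and 3: check ct_devfamily and flag the board. */
        if (dx_getctinfo(chdev, &ctinfo) != -1)
            board_is_dm3[brd] = (ctinfo.ct_devfamily == CT_DFDM3);

        /* Step 4: close the channel; step 5: repeat for the next board. */
        dx_close(chdev);
    }
}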
6.4.5Device Initialization Hint
The xx_open( ) functions for the voice (dx), Global Call (gc), network (dt), and fax (fx) APIs are
asynchronous on DM3 boards, unlike the standard Springware versions, which are synchronous.
This should usually have no impact on an application, except when a subsequent function is called
on a device that is still initializing, that is, still in the process of opening. In such cases, the
initialization must finish before the follow-up function can proceed. The function does not return
an error; it simply blocks until the device is initialized.
For instance, if your application calls dx_open( ) followed by dx_getfeaturelist( ), the
dx_getfeaturelist( ) function is blocked until the initialization of the device is completed
internally, even though dx_open( ) has already returned success. In other words, the initialization
(dx_open( )) may appear to be complete, but, in truth, it is still going on in parallel.
With some applications, this may cause slow device-initialization performance. You can avoid this
problem in one of several ways, depending on the type of application:
• In multithreaded applications, you can reorganize the way the application opens and then
configures devices. Issue as many xx_open( ) functions as possible (grouping the devices) in
one thread, arranged in a loop, before proceeding with the next function. That is, run one loop
through the group of devices that does all the xx_open( ) functions first, and then a second loop
through the devices to configure them, instead of a single loop in which each xx_open( ) is
immediately followed by other API functions on the same device. With this method, by the time
all xx_open( ) commands are completed, the first channel will already be initialized, so the
follow-up calls are not blocked (see the sketch after this list).
This change is not necessary for all applications, but if you experience poor initialization
performance, you can regain speed by using this hint.
• Develop your application using a single thread per span or a single thread per board. This way,
device initialization can still be done in a loop, and by the time the subsequent function is
called on the first device, initialization on that device has completed.
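The sketch below illustrates the two-loop arrangement described in the first bullet. The span size, the device names, and the dx_getfeaturelist( ) follow-up call are placeholders chosen for this example.

#include <stdio.h>
#include <srllib.h>
#include <dxxxlib.h>

#define CHANS_PER_SPAN  24                  /* assumption: one T1 span */

void open_then_configure(void)
{
    int chdev[CHANS_PER_SPAN];
    FEATURE_TABLE ft;
    char name[32];
    int i;

    /* Loop 1: issue every xx_open( ) before any other call on the devices. */
    for (i = 0; i < CHANS_PER_SPAN; i++) {
        sprintf(name, "dxxxB1C%d", i + 1);
        chdev[i] = dx_open(name, 0);
    }

    /* Loop 2: by now the first channels have finished their internal
       initialization, so follow-up calls are no longer blocked. */
    for (i = 0; i < CHANS_PER_SPAN; i++) {
        if (chdev[i] == -1)
            continue;
        dx_getfeaturelist(chdev[i], &ft);   /* example follow-up call */
    }
}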
6.4.6TDM Bus Time Slot Considerations
In a configuration where a network interface device listens to the same TDM bus time slot device
as a local, on-board voice device (or other media device such as fax, IP, conferencing, and
continuous speech processing), the sharing of time slot (SOT) algorithm applies. This algorithm
imposes limitations on the order and sequence of “listens” and “unlistens” between network and
media devices. This section gives general guidelines. For details on application development rules
and guidelines regarding SOT, see the technical note posted on the Intel telecom support web site.
Note: These considerations apply to DMV, DM/V-A, DM/IP, and DM/VF boards. They do not apply to
DM/V-B, DI series, and DMV160LP boards.
• If you call a listen function (dt_listen( ) or gc_Listen( )) on a network interface device to
listen to an external TDM bus time slot device, followed by one or more listen functions
(dx_listen( ), ec_listen( ), fx_listen( ), or other related functions), to a local, on-board voice
device in order to listen to the same external TDM bus time slot device, then you must break
(unlisten) the TDM bus voice connection(s) first, using an unlisten function (dx_unlisten( ), ec_unlisten( ), fx_unlisten( ), etc.), prior to breaking the local network interface connection
(dt_unlisten( ) or gc_UnListen( )). Failure to do so will cause the latter call or subsequent
voice calls to fail. This scenario can arise during recording (or transaction recording) of an
external source, for example during a two-party tromboning (call bridging) connection.
• If more than one local, on-board network interface device is listening to the same external
TDM bus time slot device, the network interface devices must undo the TDM bus connections
(unlisten) in such a way that the first network interface to listen to the TDM bus time slot
device is the last one to unlisten. This scenario can arise during broadcasting of an external
source to several local network interface channels.
These considerations can be avoided by routing media devices before network interface devices,
which forces all time slots to be routed externally; however, density limitations for transaction
record and CSP with external reference signals apply. For more information on how to program
using external reference signals, see the technical notes posted on the Intel telecom support web
site. For transaction record, see
http://resource.intel.com/telecom/support/tnotes/gentnote/dl_soft/tn253.htm. For CSP, see
http://resource.intel.com/telecom/support/tnotes/gentnote/dl_soft/tn254.htm.
6.4.7Tone Detection Considerations
The following consideration applies to tone detection on DM3 boards:
• Digits will not always be cleared by the time the dx_clrdigbuf( ) function returns, because
processing may continue on the board even after the function returns. For this reason, careful
consideration should be given when using this function before or during a section where digit
detection or digit termination is required; the digit may be cleared after the function has
returned or possibly during the next function call.
6.5Using Wink Signaling
The information in this section does not apply to DM3 boards.
The following topics provide information on wink signaling, which is available through the
dx_wink( ) function:
• Setting Delay Prior to Wink
• Setting Wink Duration
• Receiving an Inbound Wink
6.5.1Setting Delay Prior to Wink
The information in this section does not apply to DM3 boards.
The default delay prior to generating the outbound wink is 150 msec. To change the delay, use the
dx_setparm( ) function to enter a value for the DXCH_WINKDLY parameter where:
delay = the value entered x 10 msec
The syntax of the function is:
int delay;
delay = 15;
dx_setparm(dev, DXCH_WINKDLY, (void *)&delay);
If delay = 15, then DXCH_WINKDLY = 15 x 10 or 150 msec.
6.5.2Setting Wink Duration
The information in this section does not apply to DM3 boards.
The default outbound wink duration is 150 msec. To change the wink duration, use the
dx_setparm( ) function to enter a value for the DXCH_WINKLEN parameter where:
duration = the value entered x 10 msec
The syntax of the function is:
int duration;
duration = 15;
dx_setparm(dev, DXCH_WINKLEN, (void *)&duration);
If duration = 15, then DXCH_WINKLEN = 15 x 10 or 150 msec.
6.5.3Receiving an Inbound Wink
The information in this section does not apply to DM3 boards.
Note: The inbound wink duration must be between the values set for DXCH_MINRWINK and
DXCH_MAXRWINK. The default value for DXCH_MINRWINK is 100 msec, and the default
value for DXCH_MAXRWINK is 200 msec. Use the dx_setparm( ) function to change the
minimum and maximum allowable inbound wink duration.
To receive an inbound wink on a channel:
1. Using the dx_setparm( ) function, set the off-hook delay interval (DXBD_OFFHDLY)
parameter to 1 so that the channel is ready to detect an incoming wink immediately upon going
off hook.
2. Using the dx_setevtmsk( ) function, enable the DM_WINK event.
Note: If DM_WINK is not specified in the mask parameter of the dx_setevtmsk( )
function, and DM_RINGS is specified, a wink will be interpreted as an incoming call.
A typical sequence of events for an inbound wink is:
1. The application calls the dx_sethook( ) function to initiate a call by going off hook.
2. When the incoming call is detected by the Central Office, the CO responds by sending a wink
to the board.
3. When the wink is received successfully, a DE_WINK event is sent to the application.
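The fragment below sketches these steps in C, in the style of the examples above. It assumes that chdev is the channel device handle on which the parameters are set (returned by dx_open( )) and omits error checking and event handling for brevity.

int offhdly = 1;

/* Step 1: be ready to detect a wink immediately upon going off hook. */
dx_setparm(chdev, DXBD_OFFHDLY, (void *)&offhdly);

/* Step 2: enable the DM_WINK event; if only DM_RINGS were enabled,
   a wink would be interpreted as an incoming call. */
dx_setevtmsk(chdev, DM_WINK);

/* Initiate the call by going off hook; when the CO sends the wink,
   a DE_WINK event is reported on this channel. */
dx_sethook(chdev, DX_OFFHOOK, EV_SYNC);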
7.Call Progress Analysis
This chapter provides detailed information about the call progress analysis feature.
Call progress analysis monitors the progress of an outbound call after it is dialed into the Public
Switched Telephone Network (PSTN).
By using call progress analysis (CPA), you can determine, for example:
• whether the line is answered and, in many cases, how the line is answered
• whether the line rings but is not answered
• whether the line is busy
• whether there is a problem in completing the call
The outcome of the call is returned to the application when call progress analysis has completed.
There are two forms of call progress analysis:
PerfectCall call progress analysis
Also called enhanced call progress analysis. Uses an improved method of signal identification
and can detect fax machines and answering machines. You should design all new applications
using PerfectCall call progress analysis. DM3 boards support PerfectCall call progress
analysis only.
Note: In this document, the term call progress analysis refers to PerfectCall call progress
analysis unless stated otherwise.
Basic call progress analysis
Provides backward compatibility for older applications written before PerfectCall call progress
analysis became available. It is strongly recommended that you do not design new applications
using basic call progress analysis.
Caution: If your application also uses the Global Call API, see the Global Call documentation set for call
progress analysis considerations specific to Global Call. The Global Call API is a common
signaling interface for network-enabled applications, regardless of the signaling protocol needed to
connect to the local telephone network. Call progress analysis support varies with the protocol
used.
7.2Call Progress and Call Analysis Terminology
On DM3 boards, a distinction is made between activity that occurs before a call is connected and
after a call is connected. The following terms are used:
call progress (pre-connect)
This term refers to activity to determine the status of a call connection, such as busy, no
ringback, no dial tone, and can also include the frequency detection of Special Information
Tones (SIT), such as operator intercept. This activity occurs before a call is connected.
call analysis (post-connect)
This term refers to activity to determine the destination party’s media type, such as voice
detection, answering machine detection, fax tone detection, modem, and so on. This activity
occurs after a call is connected.
call progress analysis
This term refers to the feature set that encompasses both call progress and call analysis.
7.3Call Progress Analysis Components
Call progress analysis uses the following techniques or components to determine the progress of a
call as applicable:
• cadence detection (pre-connect part of call progress analysis)
• frequency detection (pre-connect part of call progress analysis)
• loop current detection (pre-connect part of call progress analysis)
• positive voice detection (post-connect part of call progress analysis)
• positive answering machine detection (post-connect part of call progress analysis)
• fax tone detection (post-connect part of call progress analysis)
Figure 2 illustrates the components of basic call progress analysis. Figure 3 illustrates the
components of PerfectCall call progress analysis. These components can all operate
simultaneously.
In basic call progress analysis, cadence detection is the sole means of detecting a no ringback, busy,
or no answer. PerfectCall call progress analysis uses cadence detection plus frequency detection to
identify all of these signals plus fax machine tones. A connect can be detected through the
complementary methods of cadence detection, frequency detection, loop current detection, positive
voice detection, and positive answering machine detection.
7.4Using Call Progress Analysis on DM3 Boards
The following topics provide information on how to use call progress analysis on DM3 boards:
• Call Progress Analysis Rules on DM3 Boards
• Overview of Steps to Initiate Call Progress Analysis
• Setting Up Call Progress Analysis Parameters in DX_CAP
• Executing a Dial Function
• Determining the Outcome of a Call
• Obtaining Additional Call Outcome Information
7.4.1Call Progress Analysis Rules on DM3 Boards
The following rules apply to the use of call progress analysis on DM3 boards:
• It is recommended that all applications use the Global Call API for call progress analysis on
DM3 boards. For more information, see the Global Call API Programming Guide. However,
for backward compatibility, applications that use ISDN protocols can still enable call progress
analysis using dx_dial( ).
• If you choose to use dx_dial( ) in ISDN applications, do not mix the use of the Global Call
API and the Voice API within a phase of call progress analysis (pre-connect or post-connect).
• If you use channel associated signaling (CAS) or analog protocols, the following rules apply:
– Pre-connect is typically provided by the protocol via the Global Call API.
– The dx_dial( ) function cannot be used for pre-connect.
– If post-connect is disabled in the protocol, then dx_dial( ) is available for post-connect.
Table 4 provides information on call progress analysis scenarios supported with the dx_dial( )
function. This method is available regardless of the protocol being used; however, some restrictions
apply when using DM3 CAS protocols. The restrictions are due to the fact that the voice capability
is shared between the network device and the voice channel during the call setup time. In particular,
to invoke dx_dial( ) under channel associated signaling (CAS), your application must wait for the
connected event.
Note: The information in this table also applies to DM3 analog products, which are considered to use
CAS protocols.
Table 4. Call Progress Analysis Support with dx_dial( )

CPA Feature                          dx_dial( ) support on DM3   Comments
Busy                                 Yes                         analog/CAS protocols: not supported
No ringback                          Yes                         analog/CAS protocols: not supported
SIT frequency detection              Yes                         analog/CAS protocols: not supported
No answer                            Yes                         analog/CAS protocols: not supported
Cadence break                        Yes                         analog/CAS protocols: not supported
Loop current detection               No
Dial tone detection                  No
Fax tone detection                   Yes                         analog/CAS protocols: wait for Global Call
                                                                 GCEV_CONNECTED event
Positive Voice Detection (PVD)       Yes                         analog/CAS protocols: wait for Global Call
                                                                 GCEV_CONNECTED event
Positive Answering Machine           Yes                         analog/CAS protocols: wait for Global Call
Detection (PAMD)                                                 GCEV_CONNECTED event
7.4.2Overview of Steps to Initiate Call Progress Analysis
Review the information in Section 7.4.1, “Call Progress Analysis Rules on DM3 Boards”, on
page 46. If you choose to use the voice API for call progress analysis on DM3 boards, perform the
following procedure to initiate an outbound call with call progress analysis:
1. Set up the call analysis parameter structure (DX_CAP), which contains parameters to control
the operation of call progress analysis, such as positive voice detection and positive answering
machine detection.
2. Call dx_dial( ) to start call progress analysis during the desired phase of the call.
3. Use the ATDX_CPTERM( ) extended attribute function to determine the outcome of the call.
4. Obtain additional termination information as desired using extended attribute functions.
Each of these steps is described in more detail next. For a full description of the functions and data
structures described in this chapter, see the Voice API Library Reference.
7.4.3Setting Up Call Progress Analysis Parameters in DX_CAP
The call progress analysis parameters structure, DX_CAP, is used by dx_dial( ). It contains
parameters to control the operation of call progress analysis features, such as positive voice
detection (PVD) and positive answering machine detection (PAMD). To customize the parameters
for your environment, you must set up the DX_CAP structure before calling a dial function.
To set up the DX_CAP structure for call progress analysis:
1. Execute the dx_clrcap( ) function to clear the DX_CAP and initialize the parameters to 0. The
value 0 indicates that the default value will be used for that particular parameter. dx_dial( ) can
also be set to run with default call progress analysis parameter values, by specifying a NULL
pointer to the DX_CAP structure.
2. Set a DX_CAP parameter to another value if you do not want to use the default value. The
ca_intflg field (intercept mode flag) of DX_CAP enables and disables the following call
progress analysis components: SIT frequency detection, positive voice detection (PVD), and
positive answering machine detection (PAMD). Use one of the following values for the
ca_intflg field:
• DX_OPTDIS. Disables Special Information Tone (SIT) frequency detection, PAMD, and
PVD. This setting provides call progress without SIT frequency detection.
• DX_OPTNOCON. Enables SIT frequency detection and returns an “intercept”
immediately after detecting a valid frequency. This setting provides call progress with SIT
frequency detection.
• DX_PVDENABLE. Enables PVD and fax tone detection. Provides PVD call analysis
only (no call progress).
• DX_PVDOPTNOCON. Enables PVD, DX_OPTNOCON, and fax tone detection. This
setting provides call progress with SIT frequency detection and PVD call analysis.
• DX_PAMDENABLE. Enables PAMD, PVD, and fax tone detection. This setting provides
PAMD and PVD call analysis only (no call progress).
• DX_PAMDOPTEN. Enables PAMD, PVD, DX_OPTNOCON, and fax tone detection.
This setting provides full call progress and call analysis.
Note: DX_OPTEN and DX_PVDOPTEN are obsolete. Use DX_OPTNOCON and
DX_PVDOPTNOCON instead.
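For example, the following fragment clears the structure and enables full call progress and call analysis. Only fields and values named in this chapter are used; the 20-second no-answer override is shown purely as an illustration.

DX_CAP cap;

dx_clrcap(&cap);                  /* zero all fields so defaults apply */
cap.ca_intflg = DX_PAMDOPTEN;     /* SIT, PVD, PAMD, and fax tone detection */

/* Override any other default by giving its field a nonzero value, e.g.: */
cap.ca_noanswer = 2000;           /* report no answer after 20 seconds */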
7.4.4Executing a Dial Function
To use call progress analysis, call dx_dial( ) with the mode function argument set to DX_CALLP.
Termination of dialing with call progress analysis is indicated differently depending on whether the
function is running asynchronously or synchronously.
If running asynchronously, use Standard Runtime Library (SRL) event management functions to
determine when dialing with call progress analysis is complete (TDX_CALLP termination event).
If running synchronously, wait for the function to return a value greater than 0 to indicate
successful completion.
Notes: 1. On DM3 boards, dx_dial( ) cannot be used to start an outbound call; instead use the Global Call
API.
2. To issue dx_dial( ) without dialing digits, specify “ ” in the dialstrp argument.
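The fragment below sketches the asynchronous case. It assumes the call was placed with the Global Call API, that cap is the DX_CAP set up as in Section 7.4.3, and that the application uses the classic single-threaded SRL event calls (sr_waitevt( ), sr_getevttype( )); check the Standard Runtime Library API Library Reference for the event model your application actually uses.

/* Start call progress analysis without dialing digits (dialstrp = " "). */
if (dx_dial(chdev, " ", &cap, DX_CALLP | EV_ASYNC) == -1) {
    /* handle error */
}

/* Wait for the TDX_CALLP termination event. */
if (sr_waitevt(-1) != -1 && sr_getevttype() == TDX_CALLP) {
    /* call progress analysis is complete; retrieve the outcome with
       ATDX_CPTERM( ) as described in the next section */
}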
7.4.5Determining the Outcome of a Call
In asynchronous mode, once dx_dial( ) with call progress analysis has terminated, use the
extended attribute function ATDX_CPTERM( ) to determine the outcome of the call. (In
synchronous mode, dx_dial( ) returns the outcome of the call.) ATDX_CPTERM( ) will return
one of the following call progress analysis termination results:
CR_BUSY
Called line was busy.
CR_CEPT
Called line received operator intercept (SIT).
CR_CNCT
Called line was connected. Use ATDX_CONNTYPE( ) to return the connection type for a
completed call.
CR_ERROR
Call progress analysis error occurred. Use ATDX_CPERROR( ) to return the type of error.
CR_FAXTONE
Called line was answered by fax machine or modem.
CR_NOANS
Called line did not answer.
CR_NORB
No ringback on called line.
CR_STOPD
Call progress analysis stopped due to dx_stopch( ).
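A minimal sketch of interpreting these results in C might look as follows; the cases shown are only a subset, and the handling is left as comments.

switch (ATDX_CPTERM(chdev)) {
case CR_CNCT:
    /* connected: find out how the connect was decided */
    switch (ATDX_CONNTYPE(chdev)) {
    case CON_CAD:   /* cadence break              */  break;
    case CON_PVD:   /* positive voice detection   */  break;
    case CON_PAMD:  /* answering machine detected */  break;
    }
    break;
case CR_BUSY:       /* called line busy            */  break;
case CR_NOANS:      /* no answer                   */  break;
case CR_NORB:       /* no ringback                 */  break;
case CR_ERROR:      /* inspect ATDX_CPERROR(chdev) */  break;
default:            /* CR_CEPT, CR_FAXTONE, CR_STOPD, ... */  break;
}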
Figure 4 illustrates the possible outcomes of call progress analysis.
Figure 4. Call Outcomes for Call Progress Analysis (DM3)

[Figure 4 shows the incoming signal being evaluated by frequency detection, cadence detection, and
positive voice or answering machine detection. The termination reason, from ATDX_CPTERM( ), can be
Intercept (SIT) CR_CEPT, Faxtone CR_FAXTONE, Busy CR_BUSY, No Answer CR_NOANS, No Ringback CR_NORB,
or Connect CR_CNCT. For a connect, the connect reason, from ATDX_CONNTYPE( ), can be CON_CAD,
CON_PVD, or CON_PAMD.]
7.4.6Obtaining Additional Call Outcome Information
To obtain additional call progress analysis information, use the following extended attribute
functions:
ATDX_CPERROR( )
Returns call analysis error.
ATDX_CPTERM( )
Returns last call analysis termination reason.
ATDX_CONNTYPE( )
Returns connection type.
See each function reference description in the Voice API Library Reference for more information.
Note: These extended attribute functions do not return information about functionality that is not
supported on DM3 boards; for example, connection type CON_LPC and termination reason
CR_NODIALTONE.
7.5Call Progress Analysis Tone Detection on DM3
Boards
The following topics discuss tone detection used in call progress analysis on DM3 boards:
• Tone Detection Overview
• Types of Tones
• Ringback Detection
• Busy Tone Detection
• Fax or Modem Tone Detection
• SIT Frequency Detection
7.5.1Tone Detection Overview
Call progress analysis uses a combination of cadence detection and frequency detection to identify
certain signals during the course of an outgoing call. Cadence detection identifies repeating
patterns of sound and silence, and frequency detection determines the pitch of the signal. Together,
the cadence and frequency of a signal make up its “tone definition”.
7.5.2Types of Tones
Tone definitions are used to identify several kinds of signals.
The following defined tones and tone identifiers are provided by the voice library for DM3 boards.
Tone identifiers are returned by the ATDX_CRTNID( ) function.
TID_BUSY1
First signal busy
TID_BUSY2
Second signal busy
TID_DIAL_INTL
International dial tone
TID_DIAL_LCL
Local dial tone
TID_DISCONNECT
Disconnect tone (post-connect)
TID_FAX1
First fax or modem tone
TID_FAX2
Second fax or modem tone
TID_RNGBK1
Ringback (detected as single tone)
TID_RNGBK2
Ringback (detected as dual tone)
TID_SIT_ANY
Catch all (returned for a Special Information Tone sequence or SIT sequence that falls outside
the range of known default SIT sequences)
TID_SIT_INEFFECTIVE_OTHER or
TID_SIT_IO
Ineffective other SIT sequence
TID_SIT_NO_CIRCUIT or
TID_SIT_NC
No circuit found SIT sequence
TID_SIT_NO_CIRCUIT_INTERLATA or
TID_SIT_NC_INTERLATA
InterLATA no circuit found SIT sequence
TID_SIT_OPERATOR_INTERCEPT or
TID_SIT_IC
Operator intercept SIT sequence
TID_SIT_REORDER_TONE or
TID_SIT_RO
Reorder (system busy) SIT sequence
TID_SIT_REORDER_TONE_INTERLATA or
TID_SIT_RO_INTERLATA
InterLATA reorder (system busy) SIT sequence
TID_SIT_VACANT_CIRCUIT or
TID_SIT_VC
Vacant circuit SIT sequence
Some of these tone identifiers may be used as input to function calls to change the tone definitions.
For more information, see Section 7.8, “Modifying Default Call Progress Analysis Tone
Definitions on DM3 Boards”, on page 57.
7.5.3Ringback Detection
Call progress analysis uses the tone definition for ringback to identify the first ringback signal of an
outgoing call. At the end of the first ringback (that is, normally, at the beginning of the second
ringback), a timer goes into effect. The system continues to identify ringback signals (but does not
count them). If a break occurs in the ringback cadence, the call is assumed to have been answered,
and call progress analysis terminates with the reason CR_CNCT (connect); the connection type
returned by the ATDX_CONNTYPE( ) function will be CON_CAD (cadence break).
However, if the timer expires before a connect is detected, then the call is deemed unanswered, and
call progress analysis terminates with the reason CR_NOANS.
To enable ringback detection, turn on SIT frequency detection in the DX_CAP ca_intflg field. For
details, see Section 7.4.3, “Setting Up Call Progress Analysis Parameters in DX_CAP”, on
page 48.
The following DX_CAP fields govern ringback behavior on DM3 boards:
ca_cnosig
Continuous No Signal: the maximum length of silence (no signal) allowed immediately after
the ca_stdely period (in 10 msec units). If this duration is exceeded, call progress analysis is
terminated with the reason CR_NORB (no ringback detected). Default value: 4000 (40
seconds).
ca_noanswer
No Answer: the length of time to wait after the first ringback before deciding that the call is
not answered (in 10 msec units). If this duration is exceeded, call progress analysis is
terminated with the reason CR_NOANS (no answer). Default value: 3000 (30 seconds).
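For example, the following fragment shortens both timers; cap is a DX_CAP that has already been cleared with dx_clrcap( ), and the values shown are arbitrary.

cap.ca_cnosig   = 2000;   /* report CR_NORB after 20 seconds with no signal */
cap.ca_noanswer = 2500;   /* report CR_NOANS after 25 seconds of ringback   */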
7.5.4Busy Tone Detection
Call progress analysis specifies two busy tones: TID_BUSY1 and TID_BUSY2. If either of them is
detected while frequency detection and cadence detection are active, then call progress is
terminated with the reason CR_BUSY. ATDX_CRTNID( ) identifies which busy tone was
detected.
To enable busy tone detection, turn on SIT frequency detection in the DX_CAP ca_intflg field. For
details, see Section 7.4.3, “Setting Up Call Progress Analysis Parameters in DX_CAP”, on
page 48.
7.5.5Fax or Modem Tone Detection
Call progress analysis specifies two tones: TID_FAX1 and TID_FAX2. If either of these tones is
detected while frequency detection and cadence detection are active, then call progress is
terminated with the reason CR_FAXTONE. ATDX_CRTNID( ) identifies which fax or modem
tone was detected.
To enable fax or modem tone detection, use the ca_intflg field of the DX_CAP structure. For
details, see Section 7.4.3, “Setting Up Call Progress Analysis Parameters in DX_CAP”, on
page 48.
7.5.6SIT Frequency Detection
Special Information Tone (SIT) frequency detection is a component of call progress analysis. On
DM3 boards, SIT sequences are defined as standard tone IDs.
To enable SIT frequency detection, use the ca_intflg field of the DX_CAP structure. For more
information, see Section 7.4.3, “Setting Up Call Progress Analysis Parameters in DX_CAP”, on
page 48.
Table 5 shows default tone definitions for SIT sequences used on DM3 boards. The values in the
“Freq.” column represent minimum and maximum values in Hz. “Time” refers to minimum and
maximum on time in 10 msec units; the maximum off time between each segment is 5 (or 50
msec). The repeat count is 1 for all SIT segments. N/A means “not applicable.”
The following considerations apply to SIT sequences on DM3 boards:
• A single tone proxy for the dual tone (also called twin tone) exists for each of the three
segments in a SIT sequence. The default definition for the minimum value and maximum
value (in Hz) is 0. For more information on this tone, see Section 7.8.4, “Rules for Using a Single
Tone Proxy for a Dual Tone”, on page 59.
• Default SIT definitions can be modified, except for the following SIT sequences: TID_SIT_
ANY, TID_SIT_IO, TID_SIT_NC_INTERLATA, and TID_SIT_RO_INTERLATA. For more
information, see Section 7.8, “Modifying Default Call Progress Analysis Tone Definitions on
DM3 Boards”, on page 57.
• For TID_SIT_ANY, the frequency and time of the first and second segments are open; that is,
they are ignored. Only the frequency of the third segment is relevant. This catch-all SIT
sequence definition is intended to cover SIT sequences that fall outside the range of the
defined SIT sequences.
7.6Media Tone Detection on DM3 Boards
Media tone detection in call progress analysis is discussed in the following topics:
• Positive Voice Detection (PVD)
• Positive Answering Machine Detection (PAMD)
7.6.1Positive Voice Detection (PVD)
Positive voice detection (PVD) can detect when a call has been answered by determining whether
an audio signal is present that has the characteristics of a live or recorded human voice. This
provides a very precise method for identifying when a connect occurs.
The ca_intflg field in DX_CAP enables/disables PVD. For information on enabling PVD, see
Section 7.4.3, “Setting Up Call Progress Analysis Parameters in DX_CAP”, on page 48.
PVD is especially useful in those situations where no other method of answer supervision is
available, and where the cadence is not clearly broken for cadence detection to identify a connect
(for example, when the nonsilence of the cadence is immediately followed by the nonsilence of
speech).
If the ATDX_CONNTYPE( ) function returns CON_PVD, the connect was due to positive voice
detection.
7.6.2Positive Answering Machine Detection (PAMD)
Whenever PAMD is enabled, positive voice detection (PVD) is also enabled.
The ca_intflg field in DX_CAP enables/disables PAMD and PVD. For information on enabling
PAMD, see Section 7.4.3, “Setting Up Call Progress Analysis Parameters in DX_CAP”, on
page 48.
When enabled, detection of an answering machine will result in the termination of call analysis
with the reason CR_CNCT (connected); the connection type returned by the
ATDX_CONNTYPE( ) function will be CON_PAMD.
The following DX_CAP fields govern positive answering machine detection:
ca_pamd_spdval
PAMD Speed Value: To distinguish between a greeting by a live human and one by an
answering machine, use one of the following settings:
• PAMD_FULL – look at the greeting (long method). The long method looks at the full
greeting to determine whether it came from a human or a machine. Using PAMD_FULL
gives a very accurate determination; however, in situations where a fast decision is more
important than accuracy, PAMD_QUICK might be preferred.
• PAMD_QUICK – look at connect only (quick method). The quick method examines only
the events surrounding the connect time and makes a rapid judgment as to whether or not
an answering machine is involved.
• PAMD_ACCU – look at the greeting (long method) and use the most accuracy for
detecting an answering machine. This setting provides the most accurate evaluation. It
detects live voice as accurately as PAMD_FULL but is more accurate than PAMD_FULL
(although slightly slower) in detecting an answering machine. Use the setting
PAMD_ACCU when accuracy is more important than speed.
Default value (DM3 boards): PAMD_ACCU
The recommended setting for the call analysis parameter structure (DX_CAP)
ca_pamd_spdval field is PAMD_ACCU.
ca_pamd_failtime
Maximum time to wait for positive answering machine detection or positive voice detection
after a cadence break. Default value: 400 (in 10 msec units).
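For example, the recommended configuration could be expressed as follows; cap is a DX_CAP that has already been cleared with dx_clrcap( ).

cap.ca_intflg        = DX_PAMDOPTEN;  /* enable PAMD, PVD, SIT, and fax tones  */
cap.ca_pamd_spdval   = PAMD_ACCU;     /* most accurate machine detection       */
cap.ca_pamd_failtime = 400;           /* allow 4 seconds after a cadence break */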
7.7Default Call Progress Analysis Tone Definitions on
DM3 Boards
Table 6 provides the range of values for default tone definitions for DM3 boards. These default
tone definitions are used in call progress analysis. Amplitudes are given in dBm, frequencies in Hz,
and duration in 10 msec units. A dash in a table cell means not applicable.
Notes: 1. On DM3 boards, voice API functions are provided to manipulate the tone definitions in this table
(see Section 7.8, “Modifying Default Call Progress Analysis Tone Definitions on DM3 Boards”,
on page 57). However, not all the functionality provided by these tones is available through the
voice API. You may need to use the Global Call API to access the functionality, for example, in
the case of dial tone detection and disconnect tone detection.
2. An On Time maximum value of 0 indicates a continuous tone. For example, TID_DIAL_LCL
has an On Time range of 10 to 0: the tone must be continuous for at least 100 msec (10 in
10 msec units) to be detected, and there is no maximum on time.
3. A single tone proxy for a dual tone (twin tone) can help improve the accuracy of dual tone
detection in some cases. For more information, see Section 7.8.4, “Rules for Using a Single Tone
Proxy for a Dual Tone”, on page 59.
Table 6. Default Call Progress Analysis Tone Definitions (DM3)
7.8Modifying Default Call Progress Analysis Tone
Definitions on DM3 Boards
On DM3 boards, call progress analysis tones are maintained in the firmware on a physical board
level and are board-specific. More information on tone definitions is provided in the following
topics:
• API Functions for Manipulating Tone Definitions
• TONE_DATA Data Structure
• Rules for Modifying a Tone Definition on DM3 Boards
• Rules for Using a Single Tone Proxy for a Dual Tone
• Steps to Modify a Tone Definition on DM3 Boards
7.8.1API Functions for Manipulating Tone Definitions
The following voice API functions are used to manipulate the default tone definitions shown in
Table 6, “Default Call Progress Analysis Tone Definitions (DM3)”, on page 57 and some, but not
all, of the default tone definitions shown in Table 5, “Special Information Tone Sequences (DM3)”,
on page 54.
Note: Default SIT definitions can be modified, except for the following SIT sequences: TID_SIT_ ANY,
TID_SIT_IO, TID_SIT_NC_INTERLATA, and TID_SIT_RO_INTERLATA.
dx_querytone( )
gets tone information for a specific call progress tone
dx_deletetone( )
deletes a specific call progress tone
dx_createtone( )
creates a new tone definition for a specific call progress tone
7.8.2TONE_DATA Data Structure
The TONE_DATA structure contains tone information for a specific call progress tone. This
structure contains a nested array of TONE_SEG substructures. A maximum of six TONE_SEG
substructures can be specified. The TONE_DATA structure specifies the following key
information:
TONE_SEG.structver
Specifies the version of the TONE_SEG structure. Used to ensure that an application is binary
compatible with future changes to this data structure.
TONE_SEG.tn_dflag
Specifies whether the tone is dual tone or single tone. Values are 1 for dual tone and 0 for
single tone.
TONE_SEG.tn1_min
Specifies the minimum frequency in Hz for tone 1.
TONE_SEG.tn1_max
Specifies the maximum frequency in Hz for tone 1.
TONE_SEG.tn2_min
Specifies the minimum frequency in Hz for tone 2.
TONE_SEG.tn2_max
Specifies the maximum frequency in Hz for tone 2.
TONE_SEG.tn_twinmin
Specifies the minimum frequency in Hz of the single tone proxy for the dual tone.
TONE_SEG.tn_twinmax
Specifies the maximum frequency in Hz of the single tone proxy for the dual tone.
TONE_SEG.tnon_min
Specifies the debounce minimum ON time in 10 msec units.
TONE_SEG.tnon_max
Specifies the debounce maximum ON time in 10 msec units.
TONE_SEG.tnoff_min
Specifies the debounce minimum OFF time in 10 msec units.
TONE_SEG.tnoff_max
Specifies the debounce maximum OFF time in 10 msec units.
TONE_DATA.structver
Specifies the version of the TONE_DATA structure. Used to ensure that an application is
binary compatible with future changes to this data structure.
TONE_DATA.tn_rep_cnt
Specifies the debounce rep count.
TONE_DATA.numofseg
Specifies the number of segments for a multi-segment tone.
7.8.3Rules for Modifying a Tone Definition on DM3 Boards
Consider the following rules and guidelines for modifying default tone definitions on DM3 boards
using the voice API library:
• You must issue dx_querytone( ), dx_deletetone( ), and dx_createtone( ) in this order, one
tone at a time, for each tone definition to be modified.
• Attempting to create a new tone definition before deleting the current call progress tone will
result in an EDX_TNQUERYDELETE error.
• When dx_querytone( ), dx_deletetone( ), or dx_createtone( ) is issued in asynchronous
mode and is immediately followed by another similar call prior to completion of the previous
call on the same device, the subsequent call will fail with device busy.
• Only default call progress analysis tones and SIT sequences are supported for these three
functions. For a list of these tones, see Table 5, “Special Information Tone Sequences (DM3)”,
on page 54 and Table 6, “Default Call Progress Analysis Tone Definitions (DM3)”, on
page 57.
• These three voice API functions are provided to manipulate the call progress analysis tone
definitions. However, not all the functionality provided by these tones is available through the
voice API. You may need to use the Global Call API to access the functionality, for example,
in the case of dial tone detection and disconnect tone detection.
• If the application deletes all the default call progress analysis tones in a particular set (where a
set is defined as busy tones, dial tones, ringback tones, fax tones, disconnect tone, and special
information tones), the set itself is deleted from the board and call progress analysis cannot be
performed successfully. Therefore, you must have at least one tone defined in each tone set in
order for call progress analysis to perform successfully.
Note: The Learn Mode API and Tone Set File (TSF) API provide a more comprehensive way to manage
call progress tones, in particular the unique call progress tones produced by PBXs, key systems,
and PSTNs. Applications can learn tone characteristics using the Learn Mode API. Information on
several different tones forms one tone set. Tone sets can be written to a tone set file using the Tone
Set File API. For more information, see the Learn Mode and Tone Set File API Software Reference.
7.8.4Rules for Using a Single Tone Proxy for a Dual Tone
A single tone proxy (also called a twin tone) acts as a proxy for a dual tone. A single tone proxy
can be defined when you run into difficulty detecting a dual tone. This situation can arise when the
two frequencies of the dual tone are close together, are very short tones, or are even multiples of
each other. In these cases, the dual tone might be detected as a single tone. A single tone proxy can
help improve the detection of the dual tone by providing an additional tone definition.
The TONE_SEG.tn_twinmin field defines the minimum frequency of the tone and
TONE_SEG.tn_twinmax field defines the maximum frequency of the tone.
Consider the following guidelines when creating a single tone proxy:
• It is recommended that you add at least 60 Hz to the top of the dual tone range and subtract at
least 60 Hz from the bottom of the dual tone range. For example:
Freq1 (Hz): 400 - 500
Freq 2 (Hz): 600 - 700
Twin tone freq (Hz): 340 - 760
• Before using the TONE_DATA structure in a function call, set any unused fields in the
structure to zero to prevent possible corruption of data in the allocated memory space. This
guideline is applicable to unused fields in any data structure.
7.8.5Steps to Modify a Tone Definition on DM3 Boards
To modify a default tone definition on DM3 boards using the voice API library, follow these steps:
Note: This procedure assumes that you have already opened the physical board device handle in your
application. To get the physical board name in the form brdBn, use the
SRLGetPhysicalBoardName( ) function. This function and other device mapper functions return
information about the structure of the system. For more information, see the Standard Runtime Library API Library Reference.
1. Get the tone information for the call progress tone to be modified using dx_querytone( ).
After the function completes successfully, the relevant tone information is contained in the
TONE_DATA structure.
2. Delete the current call progress tone using dx_deletetone( ) before creating a new tone
definition.
3. Create a new tone definition for the call progress tone using dx_createtone( ). Specify the new
tone information in the TONE_DATA structure.
4. Repeat steps 1-3 in this order for each tone to be modified.
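The sketch below applies these steps to TID_BUSY1 on one board. It is illustrative only: the argument order shown for dx_querytone( ), dx_deletetone( ), and dx_createtone( ), the toneseg[ ] member name, and the 20 Hz widening are assumptions to be verified against the Voice API Library Reference; brdhdl is an already opened physical board (brdBn) device handle.

int widen_busy1(int brdhdl)
{
    TONE_DATA tonedata;

    /* Step 1: query the current definition of TID_BUSY1. */
    if (dx_querytone(brdhdl, TID_BUSY1, &tonedata, EV_SYNC) == -1)
        return -1;

    /* Step 2: delete the existing tone first; creating a tone without
       deleting it fails with EDX_TNQUERYDELETE. */
    if (dx_deletetone(brdhdl, TID_BUSY1, EV_SYNC) == -1)
        return -1;

    /* Step 3: adjust the first segment and create the new definition.
       Leave unused TONE_DATA fields zeroed. */
    tonedata.toneseg[0].tn1_min -= 20;    /* hypothetical 20 Hz widening */
    tonedata.toneseg[0].tn1_max += 20;
    if (dx_createtone(brdhdl, TID_BUSY1, &tonedata, EV_SYNC) == -1)
        return -1;

    return 0;
}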
7.9Call Progress Analysis Errors
If ATDX_CPTERM( ) returns CR_ERROR, you can use ATDX_CPERROR( ) to determine the
call progress analysis error that occurred. For details on these functions, see the Voice API Library Reference.
7.10Using Call Progress Analysis on Springware Boards
The following topics provide information on how to use call progress analysis when making an
outbound call:
• Overview of Steps to Initiate Call Progress Analysis
• Setting Up Call Progress Analysis Features in DX_CAP
• Enabling Call Progress Analysis
• Executing a Dial Function
• Determining the Outcome of a Call
• Obtaining Additional Call Outcome Information
7.10.1Overview of Steps to Initiate Call Progress Analysis
Perform the following procedure to initiate an outbound call with call progress analysis:
1. Set up the call analysis parameter structure (DX_CAP), which contains parameters to control
the operation of call progress analysis, such as frequency detection, cadence detection, loop
current, positive voice detection, and positive answering machine detection.
2. On Springware boards, enable call progress analysis on a specified channel using
dx_initcallp( ). Modify tone definitions as appropriate.
3. Call dx_dial( ) to start an outbound call.
4. Use the ATDX_CPTERM( ) extended attribute function to determine the outcome of the call.
5. Obtain additional termination, frequency, or cadence information (such as the length of the
salutation) as desired using extended attribute functions.
Each of these steps is described in more detail next. For a full description of the functions and data
structures described in this chapter, see the Voice API Library Reference.
7.10.2Setting Up Call Progress Analysis Features in DX_CAP
The call progress analysis parameters structure, DX_CAP, is used by dx_dial( ). It contains
parameters to control the operation of call progress analysis features, such as frequency detection,
positive voice detection (PVD), and positive answering machine detection (PAMD).
To customize the parameters for your environment, you must set up the call progress analysis
parameter structure before calling a dial function.
To set up the DX_CAP structure for call progress analysis:
1. Execute the dx_clrcap( ) function to clear the DX_CAP and initialize the parameters to 0. The
value 0 indicates that the default value will be used for that particular parameter. dx_dial( ) can
also be set to run with default call progress analysis parameter values, by specifying a NULL
pointer to the DX_CAP structure.
2. Set a DX_CAP parameter to another value if you do not want to use the default value. The
ca_intflg field (intercept mode flag) of DX_CAP enables and disables the following call
progress analysis components: SIT frequency detection, positive voice detection (PVD), and
positive answering machine detection (PAMD). Use one of the following values for the
ca_intflg field:
• DX_OPTDIS. Disables Special Information Tone (SIT) frequency detection, PAMD, and
PVD.
• DX_OPTNOCON. Enables SIT frequency detection and returns an “intercept”
immediately after detecting a valid frequency.
• DX_PVDENABLE. Enables PVD and fax tone detection.
• DX_PVDOPTNOCON. Enables PVD, DX_OPTNOCON, and fax tone detection.
• DX_PAMDENABLE. Enables PAMD, PVD, and fax tone detection.
• DX_PAMDOPTEN. Enables PAMD, PVD, DX_OPTNOCON, and fax tone detection.
Note: DX_OPTEN and DX_PVDOPTEN are obsolete. Use DX_OPTNOCON and
DX_PVDOPTNOCON instead.
For more information on adjusting DX_CAP parameters, see Section 7.11, “Call Progress Analysis
Tone Detection on Springware Boards”, on page 65, Section 7.12, “Media Tone Detection on
Springware Boards”, on page 69, and Section 7.15, “SIT Frequency Detection (Springware Only)”,
on page 72.
7.10.3Enabling Call Progress Analysis
Call progress analysis is activated on a per-channel basis. On Springware boards, initiate call
progress analysis using the dx_initcallp( ) function.
On Springware boards, to enable call progress analysis on a specified channel, perform the
following steps. This procedure needs to be followed only once per channel; thereafter, any
outgoing calls made using a dial function will benefit from call progress analysis.
1. Make any desired modifications to the default dial tone, busy tone, fax tone, and ringback
signal definitions using the dx_chgfreq( ), dx_chgdur( ), and dx_chgrepcnt( ) functions. For
more information, see Section 7.14, “Modifying Default Call Progress Analysis Tone
Definitions on Springware Boards”, on page 71.
2. Call dx_deltones( ) to clear all tone templates remaining on the channel. Note that this
function deletes all global tone definition (GTD) tones for the given channel, and not just those
involved with call progress analysis.
3. Execute the dx_initcallp( ) function to activate call progress analysis. Call progress analysis
stays active until dx_deltones( ) is called.
The dx_initcallp( ) function initializes call progress analysis on the specified channel using the
current tone definitions. Once the channel is initialized with these tone definitions, this
initialization cannot be altered. The only way to change the tone definitions in effect for a given
channel is to issue a dx_deltones( ) call for that channel, then invoke another dx_initcallp( ) with
different tone definitions.
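A brief sketch of this sequence for one channel, using the default tone definitions, is shown below; chdev is assumed to be an open Springware voice channel handle.

/* Optionally adjust tone definitions with dx_chgfreq( ), dx_chgdur( ),
   and dx_chgrepcnt( ) before this point. */

dx_deltones(chdev);                /* clear all GTD tone templates on the channel */
if (dx_initcallp(chdev) == -1) {   /* activate call progress analysis */
    /* handle error */
}
/* The channel keeps these tone definitions until dx_deltones( ) is
   called again, after which dx_initcallp( ) can be reissued. */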
7.10.4Executing a Dial Function
To use call progress analysis, call dx_dial( ) with the mode function argument set to DX_CALLP.
Termination of dialing with call progress analysis is indicated differently depending on whether the
function is running asynchronously or synchronously.
If running asynchronously, use Standard Runtime Library (SRL) Event Management functions to
determine when dialing with call progress analysis is complete (TDX_CALLP termination event).
If running synchronously, wait for the function to return a value greater than 0 to indicate
successful completion.
7.10.5Determining the Outcome of a Call
In asynchronous mode, once dx_dial( ) with call progress analysis has terminated, use the
extended attribute function ATDX_CPTERM( ) to determine the outcome of the call. (In
synchronous mode, dx_dial( ) returns the outcome of the call.) ATDX_CPTERM( ) will return
one of the following call progress analysis termination results:
CR_BUSY
Called line was busy.
CR_CEPT
Called line received operator intercept (SIT). Extended attribute functions provide information
on detected frequencies and duration.
CR_CNCT
Called line was connected. Use ATDX_CONNTYPE( ) to return the connection type for a
completed call.
CR_ERROR
Call progress analysis error occurred. Use ATDX_CPERROR( ) to return the type of error.
CR_FAXTONE
Called line was answered by fax machine or modem.
CR_NOANS
Called line did not answer.
CR_NODIALTONE
Timeout occurred while waiting for dial tone.
CR_NORB
No ringback on called line.
CR_STOPD
Call progress analysis stopped due to dx_stopch( ).
Figure 5 illustrates the possible outcomes of call progress analysis on Springware boards.
Figure 5. Call Outcomes for Call Progress Analysis (Springware)

[Figure 5 shows the incoming signal being evaluated by frequency detection, cadence detection, loop
current detection, and positive voice or answering machine detection. The termination reason, from
ATDX_CPTERM( ), can be Intercept (SIT) CR_CEPT, No Dialtone CR_NODIALTONE, Faxtone CR_FAXTONE,
Busy CR_BUSY, No Answer CR_NOANS, No Ringback CR_NORB, or Connect CR_CNCT. For a connect, the
connect reason, from ATDX_CONNTYPE( ), can be CON_CAD, CON_LPC, CON_PVD, or CON_PAMD.]
7.10.6Obtaining Additional Call Outcome Information
To obtain additional call progress analysis information, use the following extended attribute
functions:
ATDX_ANSRSIZ( )
Returns duration of answer.
ATDX_CPERROR( )
Returns call analysis error.
ATDX_CPTERM( )
Returns last call analysis termination.
ATDX_CONNTYPE( )
Returns connection type
ATDX_CRTNID( )
Returns the identifier of the tone that caused the most recent call progress analysis termination.
ATDX_DTNFAIL( )
Returns the dial tone character that indicates which dial tone call progress analysis failed to
detect.
ATDX_FRQDUR( )
Returns duration of first frequency detected.
ATDX_FRQDUR2( )
Returns duration of second frequency detected.
ATDX_FRQDUR3( )
Returns duration of third frequency detected.
ATDX_FRQHZ( )
Returns frequency detected in Hz of first detected tone.
ATDX_FRQHZ2( )
Returns frequency of second detected tone.
ATDX_FRQHZ3( )
Returns frequency of third detected tone.
ATDX_LONGLOW( )
Returns duration of longer silence.
ATDX_FRQOUT( )
Returns percent of frequency out of bounds.
ATDX_SHORTLO( )
Returns duration of shorter silence.
ATDX_SIZEHI( )
Returns duration of non-silence.
See each function reference description in the Voice API Library Reference for more information.
For a discussion of how frequency and cadence information returned by these extended attribute
functions relate to the DX_CAP parameters, see Section 7.12, “Media Tone Detection on
Springware Boards”, on page 69 and Section 7.15, “SIT Frequency Detection (Springware Only)”,
on page 72.
7.11Call Progress Analysis Tone Detection on
Springware Boards
Tone detection in PerfectCall call progress analysis differs from that in basic call progress
analysis. The following topics discuss tone detection used in PerfectCall call progress analysis on
Springware boards:
• Tone Detection Overview
• Types of Tones
• Dial Tone Detection
• Ringback Detection
• Busy Tone Detection
• Fax or Modem Tone Detection
• Loop Current Detection
7.11.1Tone Detection Overview
PerfectCall call progress analysis uses a combination of cadence detection and frequency detection
to identify certain signals during the course of an outgoing call. Cadence detection identifies
repeating patterns of sound and silence, and frequency detection determines the pitch of the signal.
Together, the cadence and frequency of a signal make up its “tone definition”.
Unlike basic call progress analysis, which uses fields in the DX_CAP structure to store signal
cadence information, PerfectCall call progress analysis uses tone definitions which are contained in
the voice driver itself. Functions are available to modify these default tone definitions.
7.11.2Types of Tones
Tone definitions are used to identify several kinds of signals.
The following defined tones and tone identifiers are provided by the voice library on Springware
boards. Tone identifiers are returned by the ATDX_CRTNID( ) function.
TID_BUSY1
Busy signal
TID_BUSY2
Alternate busy signal
TID_DIAL_INTL
International dial tone
TID_DIAL_LCL
Local dial tone
TID_DIAL_XTRA
Special (extra) dial tone
TID_FAX1
CNG (calling) fax tone or modem tone
TID_FAX2
CED (called station) fax tone or modem tone
TID_RNGBK1
Ringback
TID_RNGBK2
Ringback
The tone identifiers are used as input to function calls to change the tone definitions. For more
information, see Section 7.14, “Modifying Default Call Progress Analysis Tone Definitions on
Springware Boards”, on page 71.
7.11.3Dial Tone Detection
Wherever call progress analysis is in effect, a dial string for an outgoing call may specify special
ASCII characters that instruct the system to wait for a certain kind of dial tone. The following
additional special characters may appear in a dial string:
L
wait for a local dial tone
I
wait for an international dial tone
X
wait for a special (“extra”) dial tone
The tone definitions for each of these dial tones are set for each channel at the time of the
dx_initcallp( ) call. In addition, the following DX_CAP fields specify how long to wait for a
dial tone and how long the dial tone must remain stable.
ca_dtn_pres
Dial Tone Present: the length of time that the dial tone must be continuously present (in 10
msec units). If a dial tone is present for this amount of time, dialing of the dial string proceeds.
Default value: 100 (one second).
ca_dtn_npres
Dial Tone Not Present: the length of time to wait before declaring the dial tone not present (in
10 msec units). If a dial tone of sufficient length (ca_dtn_pres) is not found within this period
of time, call progress analysis terminates with the reason CR_NODIALTONE. The dial tone
character (L, I, or X) for the missing dial tone can be obtained using ATDX_DTNFAIL( ).
Default value: 300 (three seconds).
ca_dtn_deboff
Dial Tone Debounce: the maximum duration of a break in an otherwise continuous dial tone
before it is considered invalid (in 10 msec units). This parameter is used for ignoring short
drops in dial tone. If a drop longer than ca_dtn_deboff occurs, then dial tone is no longer
considered present, and another dial tone must begin and be continuous for ca_dtn_pres.
Default value: 10 (100 msec).
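For example, the fragment below waits for a local dial tone before dialing a placeholder number; cap is a cleared DX_CAP, the channel has already been initialized with dx_initcallp( ), and the timing values are arbitrary.

cap.ca_dtn_pres   = 100;   /* dial tone must be stable for 1 second         */
cap.ca_dtn_npres  = 500;   /* give up after 5 seconds: CR_NODIALTONE        */
cap.ca_dtn_deboff = 10;    /* ignore dial tone drops shorter than 100 msec  */

/* The leading 'L' makes dialing wait for a local dial tone. */
if (dx_dial(chdev, "L5551234", &cap, DX_CALLP | EV_SYNC) == CR_NODIALTONE) {
    /* ATDX_DTNFAIL(chdev) reports which dial tone ('L', 'I', or 'X')
       was not detected */
}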
7.11.4Ringback Detection
Call progress analysis uses the tone definition for ringback to identify the first ringback signal of an
outgoing call. At the end of the first ringback (that is, normally, at the beginning of the second
ringback), a timer goes into effect. The system continues to identify ringback signals (but does not
count them). If a break occurs in the ringback cadence, the call is assumed to have been answered,
and call progress analysis terminates with the reason CR_CNCT (connect); the connection type
returned by the ATDX_CONNTYPE( ) function will be CON_CAD (cadence break).
However, if the timer expires before a connect is detected, then the call is deemed unanswered, and
call progress analysis terminates with the reason CR_NOANS.
To enable ringback detection, turn on SIT frequency detection in the DX_CAP ca_intflg field. For
details, see Section 7.10.2, “Setting Up Call Progress Analysis Features in DX_CAP”, on page 61.
The following DX_CAP fields govern ringback behavior:
ca_stdely
Start Delay: the delay after dialing has been completed before starting cadence detection,
frequency detection, and positive voice detection (in 10 msec units). Default: 25 (0.25
seconds).
ca_cnosig
Continuous No Signal: the maximum length of silence (no signal) allowed immediately after
the ca_stdely period (in 10 msec units). If this duration is exceeded, call progress analysis is
terminated with the reason CR_NORB (no ringback detected). Default value: 4000 (40
seconds).
ca_noanswer
No Answer: the length of time to wait after the first ringback before deciding that the call is
not answered (in 10 msec units). If this duration is exceeded, call progress analysis is
terminated with the reason CR_NOANS (no answer). Default value: 3000 (30 seconds).
ca_maxintering
Maximum Inter-ring: the maximum length of time to wait between consecutive ringback
signals (in 10 msec units). If this duration is exceeded, call progress analysis is terminated with
the reason CR_CNCT (connected). Default value: 800 (8 seconds).
7.11.5Busy Tone Detection
Call progress analysis specifies two busy tones: TID_BUSY1 and TID_BUSY2. If either of them is
detected while frequency detection and cadence detection are active, then call progress is
terminated with the reason CR_BUSY. ATDX_CRTNID( ) identifies which busy tone was
detected.
To enable busy tone detection, turn on SIT frequency detection in the DX_CAP ca_intflg field. For
details, see Section 7.10.2, “Setting Up Call Progress Analysis Features in DX_CAP”, on page 61.
7.11.6Fax or Modem Tone Detection
Two tones are defined: TID_FAX1 and TID_FAX2. If either of these tones is detected while
frequency detection and cadence detection are active, then call progress is terminated with the
reason CR_FAXTONE. ATDX_CRTNID( ) identifies which fax or modem tone was detected.
To enable fax or modem tone detection, turn on SIT frequency detection in the DX_CAP ca_intflg
field. For details, see Section 7.10.2, “Setting Up Call Progress Analysis Features in DX_CAP”, on
page 61.
7.11.7Loop Current Detection
The dx_dial( ) function does not support loop current detection on DM3 boards.
Some telephone systems return a momentary drop in loop current when a connection has been
established (answer supervision). Loop current detection returns a connect when a transient loop
current drop is detected.
In some environments, including most PBXs, answer supervision is not provided. In these
environments, Loop current detection will not function. Check with your Central Office or PBX
supplier to see if answer supervision based on loop current changes is available.
In some cases, the application may receive one or more transient loop current drops before an
actual connection occurs. This is particularly true when dialing long-distance numbers, when the
call may be routed through several different switches. Any one of these switches may be capable of
generating a momentary drop in loop current.
To disable loop current detection, set DX_CAP ca_lcdly to -1.
Note: For applications that use loop current reversal to signal a disconnect, it is recommended that
DXBD_MINLCOFF be set to 2 to prevent Loop Current On and Loop Current Off from being
reported instead of Loop Current Reversal.
7.11.7.1 Loop Current Detection Parameters Affecting a Connect
To prevent detecting a connect prematurely or falsely due to a spurious loop current drop, you can
delay the start of loop current detection by using the parameter ca_lcdly.
Loop current detection returns a connect after detecting a loop current drop. To allow the person
who answered the phone to say “hello” before the application proceeds, you can delay the return of
the connect by using the parameter ca_lcdly1.
ca_lcdly
Loop Current Delay: the delay after dialing has been completed and before beginning Loop
Current Detection. To disable loop current detection, set to -1. Default: 400 (10 msec units).
ca_lcdly1
Loop Current Delay 1: the delay after loop current detection detects a transient drop in loop
current and before call progress analysis returns a connect to the application. Default: 10 (10
msec units).
If the ATDX_CONNTYPE( ) function returns CON_LPC, the connect was due to loop current
detection.
Note: When a connect is detected through positive voice detection or loop current detection, the
DX_CAP parameters ca_hedge, ca_ansrdgl, and ca_maxansr are ignored.
7.12 Media Tone Detection on Springware Boards
Media tone detection in call progress analysis is discussed in the following topics:
• Positive Voice Detection (PVD)
• Positive Answering Machine Detection (PAMD)
7.12.1 Positive Voice Detection (PVD)
Positive voice detection (PVD) can detect when a call has been answered by determining whether
an audio signal is present that has the characteristics of a live or recorded human voice. This
provides a very precise method for identifying when a connect occurs.
The ca_intflg field in DX_CAP enables/disables PVD. For information on enabling PVD, see
Section 7.10.2, “Setting Up Call Progress Analysis Features in DX_CAP”, on page 61.
PVD is especially useful in those situations where answer supervision is not available for loop
current detection to identify a connect, and where the cadence is not clearly broken for cadence
detection to identify a connect (for example, when the nonsilence of the cadence is immediately
followed by the nonsilence of speech).
If the ATDX_CONNTYPE( ) function returns CON_PVD, the connect was due to positive voice
detection.
7.12.2 Positive Answering Machine Detection (PAMD)
Whenever PAMD is enabled, positive voice detection (PVD) is also enabled.
The ca_intflg field in DX_CAP enables/disables PAMD and PVD. For information on enabling
PAMD, see Section 7.10.2, “Setting Up Call Progress Analysis Features in DX_CAP”, on page 61.
When enabled, detection of an answering machine will result in the termination of call analysis
with the reason CR_CNCT (connected); the connection type returned by the
ATDX_CONNTYPE( ) function will be CON_PAMD.
The following DX_CAP fields govern positive answering machine detection:
ca_pamd_spdval
PAMD Speed Value: To distinguish between a greeting by a live human and one by an
answering machine, use one of the following settings:
• PAMD_FULL – look at the greeting (long method). The long method looks at the full
greeting to determine whether it came from a human or a machine. Using PAMD_FULL
gives a very accurate determination; however, in situations where a fast decision is more
important than accuracy, PAMD_QUICK might be preferred.
• PAMD_QUICK – look at connect only (quick method). The quick method examines only
the events surrounding the connect time and makes a rapid judgment as to whether or not
an answering machine is involved.
• PAMD_ACCU – look at the greeting (long method) and use the most accuracy for
detecting an answering machine. This setting provides the most accurate evaluation. It
detects live voice as accurately as PAMD_FULL but is more accurate than PAMD_FULL
(although slightly slower) in detecting an answering machine. Use the setting
PAMD_ACCU when accuracy is more important than speed.
Default value (Springware boards): PAMD_FULL
The recommended setting for the call analysis parameter structure (DX_CAP)
ca_pamd_spdval field is PAMD_ACCU.
ca_pamd_qtemp
PAMD Qualification Template: the algorithm to use in PAMD. At present there is only one
template: PAMD_QUAL1TMP. This parameter must be set to this value.
ca_pamd_failtime
maximum time to wait for positive answering machine detection or positive voice detection
after a cadence break. Default Value: 400 (in 10 msec units).
ca_pamd_minring
minimum allowable ring duration for positive answering machine detection. Default Value:
190 (in 10 msec units).
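As an illustration only, the following sketch sets the PAMD-related fields described above before dialing. The channel handle and dial string are assumptions; enabling PAMD itself requires the ca_intflg settings covered in Section 7.10.2.

/* Sketch only: request the most accurate answering machine detection.
 * Assumes chdev was opened with dx_open( ); the dial string is illustrative. */
#include <srllib.h>
#include <dxxxlib.h>

void dial_with_pamd(int chdev)
{
    DX_CAP cap;

    dx_clrcap(&cap);
    cap.ca_pamd_spdval   = PAMD_ACCU;      /* most accurate, slightly slower        */
    cap.ca_pamd_qtemp    = PAMD_QUAL1TMP;  /* the only qualification template       */
    cap.ca_pamd_failtime = 400;            /* wait up to 4 sec after a cadence break */
    /* ca_intflg must also enable PAMD; see Section 7.10.2. */

    if (dx_dial(chdev, "5551234", &cap, DX_CALLP | EV_SYNC) != -1 &&
        ATDX_CPTERM(chdev) == CR_CNCT &&
        ATDX_CONNTYPE(chdev) == CON_PAMD) {
        /* an answering machine answered the call */
    }
}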
7.13 Default Call Progress Analysis Tone Definitions on Springware Boards
Table 7 provides call progress analysis default tone definitions for Springware boards. Frequencies
are specified in Hz, durations in 10 msec units, and repetitions in integers. For information on
manipulating these tone definitions, see Section 7.14, “Modifying Default Call Progress Analysis
Tone Definitions on Springware Boards”, on page 71.
Table 7. Default Call Progress Analysis Tone Definitions (Springware)

Tone ID          Freq1 (Hz)   Freq2 (Hz)   On Time (10 msec)   Off Time (10 msec)   Reps
TID_BUSY1        500 ± 200    -            55 ± 40             55 ± 40              4
TID_BUSY2        500 ± 200    500 ± 200    55 ± 40             55 ± 40              4
TID_DIAL_LCL     400 ± 125    -            -                   -                    -
TID_DIAL_INTL    402 ± 125    -            -                   -                    -
TID_DIAL_XTRA    401 ± 125    -            -                   -                    -
TID_DISCONNECT   500 ± 200    500 ± 200    55 ± 40             55 ± 40              4
TID_FAX1         1650 ± 100   -            20 ± 20             -                    -
TID_FAX2         1100 ± 50    -            25 ± 25             -                    -
TID_RNGBK1       450 ± 150    -            130 ± 105           580 ± 415            -
TID_RNGBK2       450 ± 150    450 ± 150    130 ± 105           580 ± 415            -
7.14 Modifying Default Call Progress Analysis Tone Definitions on Springware Boards
On Springware boards, call progress analysis makes use of global tone detection (GTD) tone
definitions for three different types of dial tones, two busy tones, one ringback tone, and two fax
tones. The tone definitions specify the frequencies, durations, and repetition counts necessary to
identify each of these signals. Each signal may consist of a single tone or a dual tone.
The voice driver contains default definitions for each of these tones. The default definitions will
allow applications to identify the tones correctly in most countries and for most switching
equipment. However, if a situation arises in which the default tone definitions are not adequate,
three functions are provided to modify the standard tone definitions:
dx_chgfreq( )
specifies frequencies and tolerances for one or both frequencies of a single- or dual-frequency
tone
dx_chgdur( )
specifies the cadence (on time, off time, and acceptable deviations) for a tone
dx_chgrepcnt( )
specifies the repetition count required to identify a tone
These functions can be used to modify the tone definitions shown in Table 7, “Default Call
Progress Analysis Tone Definitions (Springware)”, on page 71. These functions only change the
tone definitions; they do not alter the behavior of call progress analysis itself. When the
dx_initcallp( ) function is invoked to activate call progress analysis on a particular channel, it uses
the current tone definitions to initialize that channel. Multiple calls to dx_initcallp( ) may therefore
use varying tone definitions, and several channels can operate simultaneously with different tone
definitions.
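For illustration, a minimal sketch follows that redefines the busy tone and then re-initializes call progress analysis on a channel. The 620 Hz frequency, tolerances, and repetition count are hypothetical values, and the argument ordering is assumed to follow the descriptions of these functions in the Voice API Library Reference.

/* Sketch only: redefine TID_BUSY1 for a switch that returns a 620 Hz busy tone,
 * then re-initialize call progress analysis so the new definition is used. */
#include <srllib.h>
#include <dxxxlib.h>

int use_custom_busy_tone(int chdev)
{
    /* single-frequency busy: 620 Hz +/- 40 Hz (second frequency unused) */
    dx_chgfreq(TID_BUSY1, 620, 40, 0, 0);

    /* cadence: 500 msec on +/- 400 msec, 500 msec off +/- 400 msec (10 msec units) */
    dx_chgdur(TID_BUSY1, 50, 40, 50, 40);

    /* require two repetitions before reporting busy */
    dx_chgrepcnt(TID_BUSY1, 2);

    /* the changed definition takes effect when the channel is initialized */
    return dx_initcallp(chdev);
}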
For more information on tones and tone detection, see Section 7.11, “Call Progress Analysis Tone
Detection on Springware Boards”, on page 65.
Note: The Learn Mode API and Tone Set File (TSF) API provide a more comprehensive way to manage
call progress tones, in particular the unique call progress tones produced by PBXs, key systems,
and PSTNs. Applications can learn tone characteristics using the Learn Mode API. Information on
several different tones forms one tone set. Tone sets can be written to a tone set file using the Tone
Set File API. For more information, see the Learn Mode and Tone Set File API Software Reference for Linux and Windows Operating Systems.
7.15 SIT Frequency Detection (Springware Only)
Special Information Tone (SIT) frequency detection is a component of call progress analysis. The
following topics provide more information on this component:
• Tri-Tone SIT Sequences
• Setting Tri-Tone SIT Frequency Detection Parameters
• Obtaining Tri-Tone SIT Frequency Information
• Global Tone Detection Tone Memory Usage
• Frequency Detection Errors
• Setting Single Tone Frequency Detection Parameters
• Obtaining Single Tone Frequency Information
7.15.1 Tri-Tone SIT Sequences
SIT frequency detection operates simultaneously with all other call progress analysis detection
methods. The purpose of frequency detection is to detect the tri-tone special information tone (SIT)
sequences and other single-frequency tones. Detection of a SIT sequence indicates an operator
intercept or other problem in completing the call.
SIT frequency detection can detect virtually any single-frequency tone between 300 Hz and 2100 Hz.
Table 8 provides tone information for the four SIT sequences on Springware boards. The
frequencies are represented in Hz and the length of the signal is in 10 msec units. The length of the
first segment is not dependable; often it is shortened or cut.
On DM3 boards, SIT sequences are defined as tone IDs. For a definition of SIT sequences on DM3
boards, see Table 5, “Special Information Tone Sequences (DM3)”, on page 54.
Table 8. Special Information Tone Sequences (Springware)

                                   1st Segment      2nd Segment      3rd Segment
Name  Description                  Freq.   Len.     Freq.   Len.     Freq.   Len.
NC    No Circuit Found             985     38       1429    38       1777    38
IC    Operator Intercept           914     27       1371    27       1777    38
VC    Vacant Circuit               985     38       1370    27       1777    38
RO    Reorder (system busy)        914     27       1429    38       1777    38
7.15.2 Setting Tri-Tone SIT Frequency Detection Parameters
On Springware boards, frequency detection on voice boards is designed to detect all three tones in
a tri-tone SIT sequence. To detect all three tones in a SIT sequence, you must specify the frequency
detection parameters in the DX_CAP for all three tones in the sequence.
To detect all four tri-tone SIT sequences:
1. Set an appropriate frequency detection range in the DX_CAP to detect each tone across all
four SIT sequences. Set the first frequency detection range to detect the first tone for all four
SIT sequences (approximately 900 to 1000 Hz). Set the second frequency detection range to
detect the second tone for all four SIT sequences (approximately 1350 to 1450 Hz). Set the
third frequency detection range to detect the third tone for all four SIT sequences
(approximately 1725 to 1825 Hz).
2. Set an appropriate detection time using the ca_timefrq and ca_mxtimefrq parameters to detect
each tone across all four SIT sequences. For each tone, set ca_timefrq to 5 and ca_mxtimefrq
to 50 to detect all SIT tones. The tones range in length from 27 to 38 (in 10 msec units), with
some tones occasionally cut short by the Central Office.
Note: Occasionally, the first tone can also be truncated by a delay in the onset of call
progress analysis due to the setting of ca_stdely.
3. After a SIT sequence is detected, ATDX_CPTERM( ) will return CR_CEPT to indicate an
operator intercept, and you can determine which SIT sequence was detected by obtaining the
actual detected frequency and duration for the tri-tone sequence using extended attribute
functions. These functions are described in detail in the Voice API Library Reference.
The following fields in the DX_CAP are used for frequency detection on voice boards. Frequencies
are specified in Hertz, and time is specified in 10 msec units. To enable detection of the second and
third tones, you must set the frequency detection range and time for each tone.
General
The following field in the DX_CAP is used for frequency detection on voice boards.
ca_stdely
Start Delay. The delay after dialing has been completed and before starting frequency
detection. This parameter also determines the start of cadence detection and positive voice
detection. Note that this can affect detection of the first element of an operator intercept tone.
Default: 25 (10 msec units).
First Tone
The following fields in the DX_CAP are used for frequency detection for the first tone.
Frequencies are specified in Hertz, and time is specified in 10 msec units.
ca_lowerfrq
Lower bound for first tone in Hz.
Default: 900.
ca_upperfrq
Upper bound for first tone in Hz. Adjust higher for additional operator intercept tones.
Default: 1000.
ca_timefrq
Minimum time for first tone to remain in bounds. The minimum amount of time required for
the audio signal to remain within the frequency detection range for it to be detected. The audio
signal must not be greater than ca_upperfrq or lower than ca_lowerfrq for at least the time
interval specified in ca_timefrq.
Default: 5 (10 msec units).
ca_mxtimefrq
Maximum allowable time for first tone to be present.
Default: 0 (10 msec units).
Second Tone
The following fields in the DX_CAP are used for frequency detection for the second tone.
Frequencies are specified in Hertz, and time is specified in 10 msec units. To enable detection of
the second and third tones, you must set the frequency detection range and time for each tone.
Note: This tone is disabled initially and must be activated by the application using these variables.
ca_lower2frq
Lower bound for second tone in Hz. Default: 0.
ca_upper2frq
Upper bound for second tone in Hz. Default: 0.
ca_time2frq
Minimum time for second tone to remain in bounds. Default: 0 (10 msec units).
ca_mxtime2frq
Maximum allowable time for second tone to be present. Default: 0 (10 msec units).
Third Tone
The following fields in the DX_CAP are used for frequency detection for the third tone.
Frequencies are specified in Hertz, and time is specified in 10 msec units. To enable detection of
the second and third tones, you must set the frequency detection range and time for each tone.
Note: This tone is disabled initially and must be activated by the application using these variables.
ca_lower3frq
Lower bound for third tone in Hz. Default: 0.
ca_upper3frq
Upper bound for third tone in Hz. Default: 0.
ca_time3frq
Minimum time for third tone to remain in bounds. Default: 0 (10 msec units).
ca_mxtime3frq
Maximum allowable time for third tone to be present. Default: 0 (10 msec units).
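A minimal sketch follows, using the ranges and times suggested in the steps above, to populate all three frequency detection ranges so that any of the four tri-tone SIT sequences can be distinguished. The helper function name is hypothetical; the DX_CAP is later passed to dx_dial( ) with DX_CALLP.

/* Sketch only: configure the three frequency detection ranges for tri-tone
 * SIT detection (values from the steps above). */
#include <srllib.h>
#include <dxxxlib.h>

void setup_sit_detection(DX_CAP *cap)
{
    dx_clrcap(cap);

    /* first tone: roughly 900-1000 Hz */
    cap->ca_lowerfrq  = 900;
    cap->ca_upperfrq  = 1000;
    cap->ca_timefrq   = 5;
    cap->ca_mxtimefrq = 50;

    /* second tone: roughly 1350-1450 Hz (disabled until these are set) */
    cap->ca_lower2frq  = 1350;
    cap->ca_upper2frq  = 1450;
    cap->ca_time2frq   = 5;
    cap->ca_mxtime2frq = 50;

    /* third tone: roughly 1725-1825 Hz (disabled until these are set) */
    cap->ca_lower3frq  = 1725;
    cap->ca_upper3frq  = 1825;
    cap->ca_time3frq   = 5;
    cap->ca_mxtime3frq = 50;
}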
7.15.3 Obtaining Tri-Tone SIT Frequency Information
Upon detection of the specified sequence of frequencies, you can use extended attribute functions
to provide the exact frequency and duration of each tone in the sequence. The frequency and
duration information will allow exact determination of all four SIT sequences.
The following extended attribute functions are used to provide information on the frequencies
detected by call progress analysis.
ATDX_FRQHZ( )
Frequency in Hz of the tone detected in the tone detection range specified by the DX_CAP
ca_lowerfrq and ca_upperfrq parameters; usually the first tone of an SIT sequence. This
function can be called on non-DSP boards.
ATDX_FRQDUR( )
Duration of the tone detected in the tone detection range specified by the DX_CAP
ca_lowerfrq and ca_upperfrq parameters; usually the first tone of an SIT sequence (10 msec
units).
ATDX_FRQHZ2( )
Frequency in Hz of the tone detected in the tone detection range specified by the DX_CAP
ca_lower2frq and ca_upper2frq parameters; usually the second tone of an SIT sequence.
ATDX_FRQDUR2( )
Duration of the tone detected in the tone detection range specified by the DX_CAP
ca_lower2frq and ca_upper2frq parameters; usually the second tone of an SIT sequence (10
msec units).
ATDX_FRQHZ3( )
Frequency in Hz of the tone detected in the tone detection range specified by the DX_CAP
ca_lower3frq and ca_upper3frq parameters; usually the third tone of an SIT sequence.
ATDX_FRQDUR3( )
Duration of the tone detected in the tone detection range specified by the DX_CAP
ca_lower3frq and ca_upper3frq parameters; usually the third tone of an SIT sequence (10
msec units).
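A brief sketch, added here for illustration, reads back the detected frequencies and durations after call progress analysis terminates with an operator intercept. Deciding which of the four SIT sequences they represent is left to the application.

/* Sketch only: report the three detected SIT segments after a CR_CEPT result. */
#include <stdio.h>
#include <srllib.h>
#include <dxxxlib.h>

void report_sit(int chdev)
{
    if (ATDX_CPTERM(chdev) == CR_CEPT) {
        printf("segment 1: %ld Hz, %ld (10 msec units)\n",
               ATDX_FRQHZ(chdev),  ATDX_FRQDUR(chdev));
        printf("segment 2: %ld Hz, %ld (10 msec units)\n",
               ATDX_FRQHZ2(chdev), ATDX_FRQDUR2(chdev));
        printf("segment 3: %ld Hz, %ld (10 msec units)\n",
               ATDX_FRQHZ3(chdev), ATDX_FRQDUR3(chdev));
    }
}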
7.15.4 Global Tone Detection Tone Memory Usage
The information in this section does not apply to DM3 boards.
If you use call progress analysis to identify the tri-tone SIT sequences, call progress analysis will
create tone detection templates internally, and this will reduce the number of tone templates that
can be created using Global Tone Detection functions. See Chapter 13, “Global Tone Detection and
Generation, and Cadenced Tone Generation” for information relating to memory usage for Global
Tone Detection.
Call progress analysis will create one tone detection template for each single-frequency tone with a
100 Hz detection range. For example, if detecting the set of tri-tone SIT sequences (three
frequencies) on each of four channels, the number of allowable user-defined tones will be reduced
by three per channel.
If you initiate call progress analysis and there is not enough memory to create the SIT tone
detection templates internally, you will get a CR_MEMERR error. This indicates that you are
trying to exceed the maximum number of tone detection templates. The tone detection range
should be limited to a maximum of 100 Hz per tone to reduce the chance of exceeding the available
memory.
7.15.5 Frequency Detection Errors
The information in this section does not apply to DM3 boards, as the DX_CAP fields mentioned in
this section are not supported on DM3 boards.
The frequency detection range specified by the lower and upper bounds for each tone cannot
overlap; otherwise, an error will be produced when the driver attempts to create the internal tone
detection templates. For example, if ca_upperfrq is 1000 and ca_lower2frq is also 1000, an overlap
occurs and will result in an error. Also, the lower bound of each frequency detection range must be
less than the upper bound (for example, ca_lower2frq must be less than ca_upper2frq).
7.15.6 Setting Single Tone Frequency Detection Parameters
The information in this section does not apply to DM3 boards, as the DX_CAP fields mentioned in
this section are not supported on DM3 boards.
The following paragraphs describe how to set single tone frequency detection on Springware
boards.
Setting single tone frequency detection parameters allows you to identify that a SIT sequence was
encountered because one of the tri-tones in the SIT sequence was detected. But frequency detection
cannot determine exactly which SIT sequence was encountered, because it is necessary to identify
two tones in the SIT sequence to distinguish among the four possible SIT sequences.
The default frequency detection range is 900-1000 Hz, which is set to detect the first tone in any
SIT sequence. Because the first tone is often truncated, you may want to increase ca_upperfrq to
1800 Hz so that it includes the third tone. If this results in too many false detections, you can set
frequency detection to detect only the third tone by setting ca_lowerfrq to 1750 and ca_upperfrq to
1800.
The following fields in the DX_CAP are used for frequency detection. Frequencies are specified in
Hertz, and time is specified in 10 msec units.
ca_stdely
Start Delay: the delay after dialing has been completed and before starting frequency
detection. This parameter also determines the start of cadence detection. Default: 25 (10 msec
units).
ca_lowerfrq
lower bound for tone in Hz. Default: 900.
ca_upperfrq
upper bound for tone in Hz. Default: 1000.
ca_timefrq
time frequency. Minimum time for 1st tone in an SIT to remain in bounds. The minimum
amount of time required for the audio signal to remain within the frequency detection range
specified by ca_upperfrq and ca_lowerfrq for it to be considered valid. Default: 5 (10 msec
units)
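As a small illustrative sketch (the helper name is hypothetical), the third-tone-only configuration described above can be set as follows:

/* Sketch only: detect just the third SIT tone (about 1750-1800 Hz), which is
 * less likely to be truncated than the first tone. */
#include <srllib.h>
#include <dxxxlib.h>

void setup_third_tone_only(DX_CAP *cap)
{
    dx_clrcap(cap);
    cap->ca_lowerfrq = 1750;
    cap->ca_upperfrq = 1800;
    cap->ca_timefrq  = 5;   /* must stay in range for at least 50 msec */
}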
7.15.7 Obtaining Single Tone Frequency Information
The information in this section does not apply to DM3 boards, as the DX_CAP fields mentioned in
this section are not supported on DM3 boards.
Upon detection of a frequency in the specified range, you can use the ATDX_FRQHZ( ) extended
attribute function to return the frequency in Hz of the tone detected in the range specified by the
DX_CAP ca_lowerfrq and ca_upperfrq parameters. The frequency returned is usually the first tone
of an SIT sequence.
7.16 Cadence Detection in Basic Call Progress Analysis (Springware Only)
Cadence detection is a component of basic call progress analysis. The following topics discuss
cadence detection and some of the most commonly adjusted cadence detection parameters in basic
call progress analysis:
• Overview
• Typical Cadence Patterns
• Elements of a Cadence
• Outcomes of Cadence Detection
• Setting Selected Cadence Detection Parameters
• Obtaining Cadence Information
7.16.1 Overview
The cadence detection algorithm has been optimized for use in the United States standard network
environment.
Caution: This discussion of cadence detection in basic call progress analysis is provided for backward
compatibility purposes only. You should not develop new applications based on basic call progress
analysis. Instead you should use PerfectCall call progress analysis. For information on cadence
detection in PerfectCall call progress analysis, see Section 7.11, “Call Progress Analysis Tone
Detection on Springware Boards”, on page 65.
If your system is operating in another type of environment (such as behind a PBX), you can
customize the cadence detection algorithm to suit your system through the adjustment of the
cadence detection parameters.
Cadence detection analyzes the audio signal on the line to detect a repeating pattern of sound and
silence, such as the pattern produced by a ringback or a busy signal. These patterns are called audio cadences. Once a cadence has been established, it can be classified as a single ring, a double ring,
or a busy signal by comparing the periods of sound and silence to established parameters.
Notes: 1. Sound is referred to as nonsilence.
2. The algorithm used for cadence detection is disclosed and protected under U.S. patent 4,477,698
of Melissa Electronic Labs, and other patents pending.
7.16.2 Typical Cadence Patterns
Figure 6, Figure 7, and Figure 8 show some typical cadence patterns for a standard busy signal, a
standard single ring, and a double ring.
Figure 6. A Standard Busy Signal (timing diagram; timings in 10 msec units)
Figure 7. A Standard Single Ring (timing diagram; timings in 10 msec units)
Figure 8. A Type of Double Ring (timing diagram; timings in 10 msec units)
7.16.3 Elements of a Cadence
From the preceding cadence examples, you can see that a given cadence may contain two silence
periods with different durations, such as for a double ring; but in general, the nonsilence periods
have the same duration. To identify and distinguish between the different types of cadences, the
voice driver must detect two silence and two nonsilence periods in the audio signal. Figure 9
illustrates cadence detection.
Figure 9. Cadence Detection (timing diagram: after dialing is complete, a period of nonsilence and silence is used to establish the cadence; timings in 10 msec units)
Once the cadence is established, the cadence values can be retrieved using the following extended
attribute functions:
ATDX_SIZEHI( )
length of the nonsilence period (in 10 msec units) for the detected cadence
ATDX_SHORTLOW( )
length of the shortest silence period for the detected cadence (in 10 msec units)
ATDX_LONGLOW( )
length of the longest silence period for the detected cadence (in 10 msec units).
Only one nonsilence period is used to define the cadence because the nonsilence periods have the
same duration.
Figure 10 shows the elements of an established cadence.
Figure 10. Elements of Established Cadence (timing diagram relating ATDX_SIZEHI, ATDX_SHORTLOW, and ATDX_LONGLOW to the nonsilence and silence periods of the established cadence; timings in 10 msec units)
The durations of subsequent states are compared with these fields to see if the cadence has been
broken.
7.16.4 Outcomes of Cadence Detection
Cadence detection can identify the following conditions during the period used to establish the
cadence or after the cadence has been established:
• No Ringback
• Connect
• Busy
• No Answer
Although loop current detection and positive voice detection provide complementary means of
detecting a connect, cadence detection provides the only way in basic call progress analysis to
detect a no ringback, busy, or no answer.
Cadence detection can identify the following conditions during the period used to establish the
cadence:
No Ringback
While the cadence is being established, cadence detection determines whether the signal is
continuous silence or nonsilence. In this case, cadence detection returns a no ringback,
indicating there is a problem in completing the call.
Connect
While the cadence is being established, cadence detection determines whether the audio signal
departs from acceptable network standards for busy or ring signals. In this case, cadence
detection returns a connect, indicating that there was a “break” from general cadence
standards.
Cadence detection can identify the following conditions after the cadence has been established:
Connect
After the cadence has been established, cadence detection determines whether the audio signal
departs from the established cadence. In this case, cadence detection returns a connect,
indicating that there was a break in the established cadence.
No Answer
After the cadence has been established, cadence detection determines whether the cadence
belongs to a single or double ring. In this case, cadence detection can return a no answer,
indicating there was no break in the ring cadence for a specified number of times.
Busy
After the cadence has been established, cadence detection determines whether the cadence
belongs to a slow busy signal. In this case, cadence detection can return a busy, indicating that
the busy cadence was repeated for a specified number of times.
To determine whether the ring cadence is a double or single ring, compare the value returned by the
ATDX_SHORTLOW( ) function to the DX_CAP field ca_lo2rmin. If the
ATDX_SHORTLOW( ) value is less than ca_lo2rmin, the cadence is a double ring; otherwise, it is a single ring.
7.16.5 Setting Selected Cadence Detection Parameters
Only the most commonly adjusted cadence detection parameters are discussed here. For a complete
listing and description of the DX_CAP data structure, see the Voice API Library Reference.
You should only need to adjust cadence detection parameters for network environments that do not
conform to the U.S. standard network environment (such as behind a PBX).
7.16.5.1 General Cadence Detection Parameters
The following are general cadence detection parameters in DX_CAP:
ca_stdely
Start Delay: the delay after dialing has been completed and before starting cadence detection.
This parameter also determines the start of frequency detection and positive voice detection.
Default: 25 (10 msec units) = 0.25 seconds.
Be careful with this parameter. Setting it too small may allow switching transients to affect call progress analysis; setting it too long may cause critical signaling to be missed.
ca_higltch
High Glitch: the maximum nonsilence period to ignore. Used to help eliminate spurious
nonsilence intervals. Default: 19 (in 10 msec units).
To eliminate audio signal glitches over the telephone line, the parameters ca_logltch and
ca_higltch are used to determine the minimum acceptable length of a valid silence or
nonsilence duration. Any silence interval shorter than ca_logltch is ignored, and any
nonsilence interval shorter than ca_higltch is ignored.
ca_logltch
Low Glitch: the maximum silence period to ignore. Used to help eliminate spurious silence
intervals. Default: 15 (in 10 msec units).
7.16.5.2 Cadence Detection Parameters Affecting a No Ringback
After cadence detection begins, it waits for an audio signal of nonsilence. The maximum waiting
time is determined by the parameter ca_cnosig (continuous no signal). If the length of this period of
silence exceeds the value of ca_cnosig, a no ringback is returned. Figure 11 illustrates this. This
usually indicates a dead or disconnected telephone line or some other system malfunction.
ca_cnosig
Continuous No Signal: the maximum time of silence (no signal) allowed immediately after
cadence detection begins. If exceeded, a no ringback is returned. Default: 4000 (in 10 msec
units), or 40 seconds.
Figure 11. No Ringback Due to Continuous No Signal (timing diagram: silence lasting longer than CA_CNOSIG immediately after the CA_STDELY period causes a no ringback to be returned; timings in 10 msec units)
If the length of any period of nonsilence exceeds the value of ca_cnosil (continuous nonsilence), a
no ringback is returned, shown in Figure 12.
ca_cnosil
Continuous Nonsilence: the maximum length of nonsilence allowed. If exceeded, a no
ringback is returned. Default: 650 (in 10 msec units), or 6.5 seconds.
Figure 12. No Ringback Due to Continuous Nonsilence (timing diagram: nonsilence lasting longer than CA_CNOSIL (650) causes a no ringback to be returned; timings in 10 msec units)
7.16.5.3 Cadence Detection Parameters Affecting a No Answer or Busy
By using the ca_nbrdna parameter, you can set the maximum number of ring cadence repetitions
that will be detected before returning a no answer.
By using the ca_nbrdna and ca_nsbusy parameters, you can set the maximum number of busy
cadence repetitions.
ca_nbrdna
Number of Rings Before Detecting No Answer: the number of single or double rings to wait
before returning a no answer. Default: 4.
ca_nsbusy
Nonsilence Busy: the number of nonsilence periods in addition to ca_nbrdna to wait before
returning a busy. Default: 0. ca_nsbusy is added to ca_nbrdna to give the actual number of
busy cadences at which to return busy. Note that even though ca_nsbusy is declared as an
unsigned variable, it can be a small negative number.
Do not allow ca_nbrdna + ca_nsbusy to equal 2. This is a foible of the 2’s complement bit
mapping of a small negative number to an unsigned variable.
7.16.5.4 Cadence Detection Parameters Affecting a Connect
You can cause cadence detection to measure the length of the salutation when the phone is
answered. The salutation is the greeting when a person answers the phone, or an announcement
when an answering machine or computer answers the phone.
By examining the length of the greeting or salutation you receive when the phone is answered, you
may be able to distinguish between an answer at home, at a business, or by an answering machine.
The length of the salutation is returned by the ATDX_ANSRSIZ( ) function. By examining the
value returned, you can estimate the kind of answer that was received.
Normally, a person at home will answer the phone with a brief salutation that lasts about 1 second,
such as “Hello” or “Smith Residence.” A business will usually answer the phone with a longer
greeting that lasts from 1.5 to 3 seconds, such as “Good afternoon, Intel Corporation.” An
answering machine or computer will usually play an extended message that lasts more than 3 or 4
seconds.
This method is not 100% accurate, for the following reasons:
• The length of the salutation can vary greatly.
• A pause in the middle of the salutation can cause a premature connect event.
• If the phone is picked up in the middle of a ringback, the ringback tone may be considered part
of the salutation, making the ATDX_ANSRSIZ( ) return value inaccurate.
In the last case, if someone answers the phone in the middle of a ring and quickly says “Hello”, the
nonsilence of the ring will be indistinguishable from the nonsilence of voice that immediately
follows, and the resulting ATDX_ANSRSIZ( ) return value may include both the partial ring and
the voice. In this case, the return value may deviate from the actual salutation by 0 to +1.8 seconds.
The salutation would appear to be the same as when someone answers the phone after a full ring
and says two words.
Note: A return value of 180 to 480 may deviate from the actual length of the salutation by 0 to +1.8
seconds.
Cadence detection will measure the length of the salutation when the ca_hedge (hello edge)
parameter is set to 2 (the default).
ca_hedge
Hello Edge: the point at which a connect will be returned to the application, either the rising
edge (immediately when a connect is detected) or the falling edge (after the end of the
salutation).
1 = rising edge. 2 = falling edge. Default: 2 (connect returned on falling edge of salutation).
Try changing this if the called party has to say “Hello” twice to trigger the answer event.
Because a greeting might consist of several words, call progress analysis waits for a specified
period of silence before assuming the salutation is finished. The ca_ansrdgl (answer deglitcher)
parameter determines when the end of the salutation occurs. This parameter specifies the maximum
amount of silence allowed in a salutation before it is determined to be the end of the salutation. To
use ca_ansrdgl, set it to approximately 50 (in 10 msec units).
ca_ansrdgl
Answer Deglitcher: the maximum silence period (in 10 msec units) allowed between words in
a salutation. This parameter should be enabled only when you are interested in measuring the
length of the salutation. Default: -1 (disabled).
The ca_maxansr (maximum answer) parameter determines the maximum allowable answer size
before returning a connect.
ca_maxansr
Maximum Answer: the maximum allowable length of ansrsize. When ansrsize exceeds
ca_maxansr, a connect is returned to the application. Default: 1000 (in 10 msec units), or 10
seconds.
Figure 13 shows how the ca_ansrdgl parameter works.
When ca_hedge = 2, cadence detection waits for the end of the salutation before returning a
connect. The end of the salutation occurs when the salutation contains a period of silence that
exceeds ca_ansrdgl or the total length of the salutation exceeds ca_maxansr. When the connect
event is returned, the length of the salutation can be retrieved using the ATDX_ANSRSIZ( )
function.
Figure 13 (not reproduced): timing diagram showing the connect event returned after a silence period in the salutation exceeds CA_ANSRDGL.
After call progress analysis is complete, call ATDX_ANSRSIZ( ). If the return value is less than
180 (1.8 seconds), you have probably contacted a residence. A return value of 180 to 300 is
probably a business. If the return value is larger than 480, you have probably contacted an
answering machine. A return value of 0 means that a connect was returned because excessive
silence was detected. This can vary greatly in practice.
Note: When a connect is detected through positive voice detection or loop current detection, the
DX_CAP parameters ca_hedge, ca_ansrdgl, and ca_maxansr are ignored.
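For illustration only, the following sketch applies the guideline thresholds given above to the salutation length. The function name is hypothetical, and the thresholds can vary greatly in practice.

/* Sketch only: classify the answer by salutation length after a connect. */
#include <srllib.h>
#include <dxxxlib.h>

const char *classify_answer(int chdev)
{
    long ansr = ATDX_ANSRSIZ(chdev);   /* salutation length, 10 msec units */

    if (ansr == 0)
        return "connect returned because excessive silence was detected";
    if (ansr < 180)
        return "probably a residence";
    if (ansr <= 300)
        return "probably a business";
    if (ansr > 480)
        return "probably an answering machine";
    return "indeterminate";
}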
7.16.6 Obtaining Cadence Information
The functions described in this section are not supported on DM3 boards.
To return cadence information, you can use the following extended attribute functions:
ATDX_SIZEHI( )
duration of the cadence non-silence period (in 10 msec units)
ATDX_SHORTLOW( )
duration of the cadence shorter silence period (in 10 msec units)
ATDX_LONGLOW( )
duration of the cadence longer silence period (in 10 msec units)
ATDX_ANSRSIZ( )
duration of answer if a connect occurred (in 10 msec units)
ATDX_CONNTYPE( )
connection type. If ATDX_CONNTYPE( ) returns CON_CAD, the connect was due to
cadence detection.
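A brief sketch, added for illustration, retrieves the established cadence values and applies the double-ring test described earlier in this section. The ca_lo2rmin argument is assumed to be the same value that was placed in the DX_CAP used for the call.

/* Sketch only: read back cadence information after call progress analysis. */
#include <stdio.h>
#include <srllib.h>
#include <dxxxlib.h>

void show_cadence(int chdev, long ca_lo2rmin)
{
    long hi    = ATDX_SIZEHI(chdev);     /* nonsilence period, 10 msec units */
    long shlow = ATDX_SHORTLOW(chdev);   /* shortest silence period          */
    long lnlow = ATDX_LONGLOW(chdev);    /* longest silence period           */

    printf("cadence: hi=%ld shortlow=%ld longlow=%ld (10 msec units)\n",
           hi, shlow, lnlow);

    if (shlow < ca_lo2rmin)
        printf("ring cadence looks like a double ring\n");

    if (ATDX_CONNTYPE(chdev) == CON_CAD)
        printf("connect was due to a break in the established cadence\n");
}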
8. Recording and Playback
This chapter discusses the playback and recording features supported by the voice library, including digital recording and playback, play and record functions, voice encoding methods, transaction record, silence compressed record, recording with the voice activity detector, and streaming to board.
The primary voice processing operations provided by a voice board include:
• recording: digitizing and storing human voice
• playback: retrieving, converting, and playing the stored, digital information to reconstruct the
human voice.
The following features related to voice recording and playback operation are documented in other
chapters in this document:
• Controlling when a playback or recording terminates using I/O termination conditions is
documented in Section 6.1.2, “Setting Termination Conditions for I/O Functions”, on page 32.
• Controlling the speed and volume when messages are played back is documented in Chapter 9,
“Speed and Volume Control”.
• A method for increasing access speed for retrieving and storing voice prompts is documented
in Chapter 12, “Cached Prompt Management”.
8.2 Digital Recording and Playback
In digital speech recording, the voice board converts the human voice from a continuous sound
wave, or analog signal, into a digital representation. The voice board does this by frequently
sampling the amplitude of the sound wave at individual points in the speech signal.
The accuracy, and thus the quality, of the digital recording is affected by:
• the sampling rate (number of samples per second), also called digitization rate
• the precision, or resolution, of each sample (the amount of data that is used to represent 1
sample).
If the samples are taken at a greater frequency, the digital representation will be more accurate and
the voice quality will be greater. Likewise, if more bits are used to represent the sample (higher
resolution), the sample will be more accurate and the voice quality will be greater.
In digital speech playback, the voice board reconstructs the speech signal by converting the
digitized voice back into analog voltages. If the voice data is played back at the same rate at which
it was recorded, an approximation of the original speech will result.
8.3 Play and Record Functions
The C language function library includes several functions for recording and playing audio data,
such as dx_rec( ), dx_reciottdata( ), dx_play( ), and dx_playiottdata( ). Recording takes audio
data from a specified channel and encodes it for storage in memory, in a file on disk, or on a custom
device. Playing decodes the stored audio data and plays it on the specified channel. The storage
location is one factor in determining which record and play functions should be used. The storage
location affects the access speed for retrieving and storing audio data.
One or more of the following data structures are used in conjunction with certain play and record
functions: DV_TPT to specify a termination condition for the function, DX_IOTT to identify a
source or destination for the data, and DX_XPB to specify the file format, data format, sampling
rate, and resolution.
For detailed information about play and record functions, which are also known as I/O functions,
see the Voice API Library Reference.
8.4 Play and Record Convenience Functions
Several convenience functions are provided to make it easier to implement play and record
functionality in an application. Some examples are: dx_playf( ), dx_playvox( ), dx_playwav( ), dx_recf( ), and dx_recvox( ). These functions are specific cases of the dx_play( ) and dx_rec( )
functions and run in synchronous mode.
For example, dx_playf( ) performs a playback from a single file by specifying the filename. The
same operation can be done using dx_play( ) and specifying a DX_IOTT structure with only one
entry for that file. Using dx_playf( ) is more convenient for a single file playback because you do
not have to set up a DX_IOTT structure for the one file and the application does not need to open
the file. dx_recf( ) provides the same single file convenience for the dx_rec( ) function.
For a complete list of I/O convenience functions and function reference information, see the Voice API Library Reference.
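The following is a minimal sketch of a single-file synchronous playback using the convenience function. The file name, function name, and the terminating DV_TPT setup are illustrative assumptions.

/* Sketch only: play one VOX file synchronously with dx_playf( ) instead of
 * building a DX_IOTT chain. */
#include <srllib.h>
#include <dxxxlib.h>

int play_greeting(int chdev)
{
    DV_TPT tpt;

    dx_clrtpt(&tpt, 1);          /* clear one DV_TPT entry                    */
    tpt.tp_type   = IO_EOT;      /* last (only) entry in the termination list */
    tpt.tp_termno = DX_MAXDTMF;  /* terminate when a digit is received        */
    tpt.tp_length = 1;
    tpt.tp_flags  = TF_MAXDTMF;

    return dx_playf(chdev, "greeting.vox", &tpt, EV_SYNC);
}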
8.5 Voice Encoding Methods
A digitized audio recording is characterized by several parameters as follows:
• the number of samples per second, or sampling rate
• the number of bits used to store a sample, or resolution
• the rate at which data is recorded or played
There are many encoding and storage schemes available for digitized voice. The voice encoding
methods or data formats supported on DM3 boards are listed in Table 9.
Table 9. Voice Encoding Methods (DM3)

Digitizing Method                        Sampling Rate (kHz)   Resolution (Bits)   Bit Rate (Kbps)   File Format
OKI ADPCM                                6                     4                   24                VOX, WAVE
OKI ADPCM                                8                     4                   32                VOX, WAVE
IMA ADPCM                                8                     4                   32                VOX, WAVE
G.711 PCM, A-law and mu-law              6                     8                   48                VOX, WAVE
G.711 PCM, A-law and mu-law              8                     8                   64                VOX, WAVE
G.721                                    8                     4                   32                VOX, WAVE
Linear PCM                               8                     8                   64                VOX, WAVE
Linear PCM                               8                     16                  128               VOX, WAVE
Linear PCM                               11                    8                   88                VOX, WAVE
Linear PCM                               11                    16                  176               VOX, WAVE
TrueSpeech                               8                     16                  8.5               VOX, WAVE
GSM 6.10 full rate (Microsoft format)    8                     (value ignored)     13                VOX, WAVE
GSM 6.10 full rate (TIPHON format)       8                     (value ignored)     13                VOX
G.726 bit exact                          8                     2                   16                VOX, WAVE
G.726 bit exact                          8                     3                   24                VOX, WAVE
G.726 bit exact                          8                     4                   32                VOX, WAVE
G.726 bit exact                          8                     5                   40                VOX, WAVE
Note: On DM3 boards, not all voice coders are available on all boards. The availability of a voice coder
depends on the media load chosen for your board. For a comprehensive list of voice coders
supported by each board, see the Release Guide for your system release. For details on media
loads, see the Configuration Guide for your product family.
The voice encoding methods supported on Springware boards are listed in Table 10.
Table 10. Voice Encoding Methods (Springware)

Digitizing Method                        Sampling Rate (kHz)   Resolution (Bits)   Bit Rate (Kbps)   File Format
OKI ADPCM                                6                     4                   24                VOX, WAVE
OKI ADPCM                                8                     4                   32                VOX, WAVE
G.711 PCM, A-law and mu-law              6                     8                   48                VOX, WAVE
G.711 PCM, A-law and mu-law              8                     8                   64                VOX, WAVE
Linear PCM                               8                     8                   64                VOX, WAVE
Linear PCM                               11                    8                   88                VOX, WAVE
Linear PCM                               11                    16                  176               VOX, WAVE
GSM 6.10 full rate (Microsoft format)    8                     (value ignored)     13                WAVE
GSM 6.10 full rate (TIPHON format)       8                     (value ignored)     13                WAVE
G.726 bit exact                          8                     4                   32                VOX
Note: On Springware boards, voice coders listed here are not available in all situations on all boards, such
as for silence compressed record or speed and volume control. Whenever a restriction exists, it will
be noted. For a comprehensive list of voice coders supported by each board, see the Release Guide
for your system release.
8.6 G.726 Voice Coder
G.726 is an ITU-T recommendation that specifies an adaptive differential pulse code modulation
(ADPCM) technique for recording and playing back audio files. It is useful for applications that
require speech compression, encoding for noise immunity, and uniformity in transmitting voice and
data signals.
The voice library provides support for a G.726 bit exact voice coder that is compliant with the
ITU-T G.726 recommendation.
Audio encoded in the G.726 bit exact format complies with Voice Profile for Internet Messaging
(VPIM), a communications protocol that makes it possible to send and receive messages from
disparate messaging systems over the Internet. G.726 bit exact is the audio encoding and decoding
standard supported by VPIM.
VPIM follows the little endian ordering. The 4-bit code words of the G.726 encoding must be
packed into octets/bytes as follows:
• The first code word (A) is placed in the four least significant bits of the first octet, with the
least significant bit (LSB) of the code word (A0) in the least significant bit of the octet.
• The second code word (B) is placed in the four most significant bits of the first octet, with the
most significant bit (MSB) of the code word (B3) in the most significant bit of the octet.
• Subsequent pairs of the code words are packed in the same way into successive octets, with the
first code word of each pair placed in the least significant four bits of the octet. It is preferable
to extend the voice sample with silence such that the encoded value consists of an even number
of code words. However, if the voice sample consists of an odd number of code words, then the
last code word will be discarded.
For more information on G.726 and VPIM, see RFC 3802 on the Internet Engineering Task Force
website at http://www.ietf.org.
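As an illustration of the packing rules above, the following sketch packs a buffer of 4-bit G.726 code words into octets using the VPIM little endian ordering. The function name is hypothetical, and code words are assumed to arrive one per byte in the low nibble of the input buffer.

/* Sketch only: pack pairs of 4-bit G.726 code words into octets (VPIM order). */
#include <stddef.h>

size_t pack_g726_vpim(const unsigned char *codewords, size_t count,
                      unsigned char *octets)
{
    size_t i, out = 0;

    /* an odd trailing code word is discarded, as noted above */
    for (i = 0; i + 1 < count; i += 2) {
        unsigned char a = codewords[i]     & 0x0F;  /* first word: low nibble   */
        unsigned char b = codewords[i + 1] & 0x0F;  /* second word: high nibble */
        octets[out++] = (unsigned char)(a | (b << 4));
    }
    return out;   /* number of packed octets produced */
}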
To use the G.726 voice coder, specify the coder in the DX_XPB structure. Then use
dx_playiottdata( ) and dx_reciottdata( ) functions to play and record with this coder.
Alternatively, you can also use dx_playvox( ) and dx_recvox( ) convenience functions.
To determine the voice resource handles used with the play and record functions, use SRL device
mapper functions to return information about the structure of the system, such as a list of all
physical boards in a system, a list of all virtual boards on a physical board, and a list of all
subdevices on a virtual board.
See the Voice API Library Reference for more information on voice functions and data structures.
See the Standard Runtime Library API Library Reference for more information on SRL functions.
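A minimal sketch of recording with the G.726 coder follows. The DX_XPB define names (FILE_FORMAT_VOX, DATA_FORMAT_G726, DRT_8KHZ), the DX_IOTT destination, and the termination setup are assumptions to be checked against the Voice API Library Reference; the file descriptor and maximum length are supplied by the caller.

/* Sketch only: record G.726 bit exact data (32 kbps) to a VOX file. */
#include <string.h>
#include <srllib.h>
#include <dxxxlib.h>

int record_g726(int chdev, int fd, long maxlen)
{
    DX_XPB  xpb;
    DX_IOTT iott;
    DV_TPT  tpt;

    xpb.wFileFormat    = FILE_FORMAT_VOX;   /* VOX file                         */
    xpb.wDataFormat    = DATA_FORMAT_G726;  /* G.726 bit exact coder            */
    xpb.nSamplesPerSec = DRT_8KHZ;          /* 8 kHz sampling                   */
    xpb.wBitsPerSample = 4;                 /* 4-bit samples = 32 kbps          */

    memset(&iott, 0, sizeof(iott));
    iott.io_type    = IO_DEV | IO_EOT;      /* write to an open file descriptor */
    iott.io_fhandle = fd;
    iott.io_offset  = 0;
    iott.io_length  = maxlen;

    dx_clrtpt(&tpt, 1);
    tpt.tp_type   = IO_EOT;
    tpt.tp_termno = DX_MAXTIME;             /* stop after a maximum time        */
    tpt.tp_length = 600;                    /* 60 seconds (10 msec units)       */
    tpt.tp_flags  = TF_MAXTIME;

    return dx_reciottdata(chdev, &iott, &tpt, &xpb, EV_SYNC);
}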
8.7 Transaction Record
Transaction record enables the recording of a two-party conversation by allowing two time-division
multiplexing (TDM) bus time slots from a single channel to be recorded. This feature is useful for
call center applications where it is necessary to archive a verbal transaction or record a live
conversation. A live conversation requires two time slots on the TDM bus, but Intel voice boards
today can only record one time slot at a time. No loss of channel density is realized with this
feature. Voice activity on two channels can be summed and stored in a single file, or in a
combination of files, devices, and/or memory.
Note: Transaction record is not supported on all boards. For a list of board support, see the Release Guide
for your system release.
On DM3 boards as well as Springware boards on Windows, use the following function for
transaction record:
dx_mreciottdata( )
records voice data from two channels to a data file, memory, or custom device
On Springware boards on Linux, use the following functions for transaction record:
dx_recm( )
records voice data from two channels to a data file, memory, or custom device
dx_recmf( )
records voice data from two channels to a single file
See the Voice API Library Reference for a full description of functions.
8.8 Silence Compressed Record
The silence compressed record (SCR) feature is discussed in more detail in the following topics:
• Overview
• Enabling
• Encoding Methods Supported
8.8.1 Overview
The silence compressed record feature (SCR) enables recording with silent pauses eliminated. This
results in smaller recorded files with no loss of intelligibility.
On Springware boards, when the audio level is at or falls below the silence threshold for a
minimum duration of time, SCR begins. When a short burst of noise (glitch) is detected, the
compression does not end unless the glitch is longer than a specified period of time.
On DM3 boards, the SCR algorithm is based on energy detection and zero crossing. This SCR uses
different parameters than the standard SCR. Specifically, the Pre-Compensation and De-Glitch
parameters are no longer needed, and there are additional new parameters.
The SCR algorithm operates on one msec blocks of speech and uses a two-fold approach to
determine whether a sample is speech or silence. Two probability of speech values are calculated
using a zero crossing algorithm and an energy detection algorithm. These values are put together to
calculate a combined probability of speech.
The energy detection algorithm allows you to modify the background noise threshold range.
Signals above the high threshold are declared speech, and signals below the low threshold are
declared silence.
Speech or silence is declared based on the previous sample, the current combined probability of
speech in relation to the speech probability threshold and silence probability threshold parameters
and the trailing silence parameter.
8.8.2 Enabling
On DM3 boards, use dx_setparm( ) and the DXCH_SCRFEATURE define to turn silence
compressed record (SCR) on and off. Once enabled, voice record functions automatically record
with SCR. For more information on modifying SCR parameters, see the Configuration Guide for
your product or product family.
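For illustration, a hedged sketch of toggling SCR on a DM3 channel with dx_setparm( ) follows. The on/off values passed with DXCH_SCRFEATURE are an assumption here; use the values documented in the Voice API Library Reference.

/* Sketch only: enable or disable silence compressed record on a DM3 channel. */
#include <srllib.h>
#include <dxxxlib.h>

int set_scr(int chdev, int on)
{
    int value = on ? 1 : 0;   /* assumed on/off values; verify before use */

    return dx_setparm(chdev, DXCH_SCRFEATURE, (void *)&value);
}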
On Springware boards, you enable SCR in the voice.prm file which is downloaded to the board
during initialization. You must edit this file and set appropriate values for the SCR parameters for
use in your working environment before initializing the board. You cannot enable this feature
through the voice API. After SCR is enabled in the voice.prm file, it is automatically activated by
the use of voice record functions such as dx_rec( ).
On Springware boards, the SCR parameters specify the silence threshold, the duration of silence at
the end of speech before silence compression begins, the duration of a glitch in the line which does
not stop silence compression, and more. Figure 14 illustrates how these parameters work. See the
appropriate Configuration Guide for details of the parameters and information on how to enable
and configure this feature.
Figure 14. Silence Compressed Record Parameters Illustrated (timing diagram relating SCR_THRES (dB), SCR_T, SCR_DG, and SCR_PC to detected speech, noise spikes, and the points where compression begins and ends; SCR_T and SCR_DG in 10 msec units, SCR_PC in bytes)
8.8.3 Encoding Methods Supported
On DM3 boards, the following encoding algorithms and sampling rates are supported in silence
compressed record (SCR):
• OKI ADPCM, 6 kHz with 4-bit samples (24 kbps) and 8 kHz with 4-bit samples (32 kbps),
VOX and WAVE file formats
• linear PCM, 8 kHz sampling 64 Kbps (8 bits), 8 kHz sampling 128 Kbps (16 bits), VOX and
WAVE file formats
• G.711 PCM, 6 kHz with 8-bit samples (48 kbps) and 8 kHz with 8-bit samples (64 kbps) using
A-law or mu-law coding, VOX and WAVE file formats
• G.721 at 8 kHz with 4-bit samples (32 kbps), VOX and WAVE file formats
• G.726 bit-exact voice coder at 8 kHz with 2-, 3-, 4-, or 5-bit samples (16, 24, 32, 40 kbps),
VOX and WAVE file formats
On Springware boards, the following encoding algorithms and sampling rates are supported in
SCR:
• 6 kHz and 8 kHz OKI ADPCM
• 8 kHz and 11 kHz linear PCM
• 8 kHz and 11 kHz A-law PCM
• 8 kHz and 11 kHz mu-law PCM
8.9 Recording with the Voice Activity Detector
Recording with the voice activity detector is discussed in the following topics:
• Overview
• Enabling
• Encoding Methods Supported
8.9.1 Overview
The dx_reciottdata( ) function, used to record voice data, has two modes that work with the voice
activity detector. One mode enables voice activity detection with event notification upon detection.
The second mode adds initial silence compression on the line before voice energy is detected; if
initial silence is greater than the default allowable amount of silence, the amount in excess is
eliminated. This mode uses the same algorithm as the silence compressed record (SCR) feature
described in Section 8.8, “Silence Compressed Record”, on page 93.
The voice activity detector (VAD) is a component in the voice software that examines the incoming
signal and determines if the signal contains significant energy and is likely to be voice.
The recording modes for voice activity detection are supported on select DM3 boards only and
with certain encoding methods only. For more information about boards supported and the features
supported on each board, see the Release Guide for your system release. For more information
about encoding methods supported, see Section 8.8.3, “Encoding Methods Supported”, on page 94.
8.9.2 Enabling
The modes related to the voice activity detector are specified in the mode parameter of the
dx_reciottdata( ) function. They are:
RM_VADNOTIFY
generates an event, TDX_VAD, on detection of voice energy during the recording operation.
Note: TDX_VAD does not indicate function termination; it is an unsolicited event. Do not confuse this event with the TEC_VAD event which is used in the continuous speech processing (CSP) library.
RM_ISCR
adds initial silence compression to the VAD capability. Initial silence here refers to the amount of silence on the line before voice activity is detected. When using RM_ISCR, the default value for the amount of initial silence allowable is 3 seconds. Any initial silence longer than that will be eliminated to the default allowable amount. This default value can be changed by modifying a parameter in the .config file for the board and then generating a new .fcd file. The 0x416 parameter must be added in the [encoder] section of the .config file. For details on using this parameter, see the DM3 Configuration Guide.
Note: The RM_ISCR mode can only be used in conjunction with RM_VADNOTIFY.
When these two modes are used together, no data is recorded as output until voice activity is detected on the line. The TDX_VAD event indicates the initiation of voice. The output file will be empty before voice activity is detected, although some initial silence may be included as specified in the .fcd file.
To enable these modes, OR them to the mode parameter. For example:
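The original example is not reproduced here; as a hedged sketch, assuming a channel handle chdev and DX_IOTT, DV_TPT, and DX_XPB structures already set up as for any other dx_reciottdata( ) call, the invocation might look like:

/* Sketch only: asynchronous record with VAD notification and initial
 * silence compression.  chdev, iott, tpt, and xpb are assumed to be set
 * up as for any other dx_reciottdata( ) call. */
dx_reciottdata(chdev, &iott, &tpt, &xpb, EV_ASYNC | RM_VADNOTIFY | RM_ISCR);
/* a TDX_VAD event is delivered when voice energy is first detected */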
Note: The dx_reciottdata( ) function does not perform echo-cancelled streaming. For automatic speech
recognition applications, use record or streaming functions in the continuous speech processing
(CSP) API library. For more information, see the Continuous Speech Processing API Programming Guide and the Continuous Speech Processing API Library Reference.
8.9.3 Encoding Methods Supported
The following encoding algorithms and sampling rates are supported for recording with the voice
activity detector:
• OKI ADPCM, 6 kHz with 4-bit samples (24 kbps) and 8 kHz with 4-bit samples (32 kbps),
VOX and WAVE file formats
• linear PCM, 8 kHz sampling 64 Kbps (8 bits), 8 kHz sampling 128 Kbps (16 bits), VOX and
WAVE file formats
• G.711 PCM, 6 kHz with 8-bit samples (48 kbps) and 8 kHz with 8-bit samples (64 kbps) using
A-law or mu-law coding, VOX and WAVE file formats
• G.721 at 8 kHz with 4-bit samples (32 kbps), VOX and WAVE file formats
• G.726 bit-exact voice coder at 8 kHz with 2-, 3-, 4-, or 5-bit samples (16, 24, 32, 40 kbps),
VOX and WAVE file formats
8.10 Streaming to Board
The streaming to board feature is discussed in the following topics:
• Streaming to Board Overview
• Streaming to Board Functions
• Implementing Streaming to Board
• Streaming to Board Hints and Tips
8.10.1 Streaming to Board Overview
The streaming to board feature provides a way to stream data in real time to a network interface.
Unlike the standard voice play feature (store and forward method), data can be streamed with little
delay as the amount of initial data required to start the stream is configurable. The streaming to
board feature is essential for applications such as text-to-speech, distributed prompt servers, and IP
gateways.
The streaming to board feature uses a circular stream buffer to hold data, provides configurable
high and low water mark parameters, and generates events when those water marks are reached.
The streaming to board feature is not supported on Springware boards.
8.10.2 Streaming to Board Functions
The following functions are used by the streaming to board feature:
dx_OpenStreamBuffer( )
creates and initializes a circular stream buffer
dx_SetWaterMark( )
sets high and low water marks for the circular stream buffer
dx_PutStreamData( )
places data into the circular stream buffer
dx_GetStreamInfo( )
retrieves information about the circular stream buffer
dx_ResetStreamBuffer( )
resets internal data for a circular stream buffer
dx_CloseStreamBuffer( )
deletes a circular stream buffer
For details on these functions, see the Voice API Library Reference.
8.10.3 Implementing Streaming to Board
Perform the following steps to implement streaming to board in your application:
Note: These steps do not represent every task that must be performed to create a working application but
are intended as general guidelines for implementing streaming to board.
1. Decide on the size of the circular stream buffer. This value is used as input to the
dx_OpenStreamBuffer( ) function. To determine the circular stream buffer size, see
Section 8.10.4, “Streaming to Board Hints and Tips”, on page 98.
2. Based on the circular stream buffer and the bulk queue buffer size, decide on values for the
high and low water marks for the circular stream buffer. To determine high and low water mark
values, see Section 8.10.4, “Streaming to Board Hints and Tips”, on page 98.
3. Initialize and create a circular stream buffer using dx_OpenStreamBuffer( ).
4. Set the high and low water marks using dx_SetWaterMark( ).
5. Start the play using dx_playiottdata( ) or dx_play( ) in asynchronous mode with the io_type
field in the DX_IOTT data structure set to IO_STREAM.
6. Put data in the circular stream buffer using dx_PutStreamData( ).
7. Wait for events.
The TDX_LOWWATER event is generated every time data in the buffer falls below the low
water mark. The TDX_HIGHWATER event is generated every time data in the buffer is above
the high water mark. The application receives TDX_LOWWATER and TDX_HIGHWATER
events regardless of whether or not dx_SetWaterMark( ) is used in your application. These
events are generated when there is a play operation with this buffer and are reported on the
device that is performing the play. If there is no active play, the application will not receive any
of these events.
The TDX_PLAY event indicates that the play has completed.
8. When all files are played, issue dx_CloseStreamBuffer( ).
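The following C sketch strings these steps together for a single play from one stream buffer. It is only an outline: the buffer and water mark sizes are example values taken from Section 8.10.4, the WM_HIGH/WM_LOW level selectors and the use of the io_fhandle field to attach the stream buffer are assumptions, and the event handling of step 7 is left to the application's normal SRL event loop. Check the exact prototypes in the Voice API Library Reference.

/* Streaming-to-board sketch following steps 1 through 8 above.
 * chdev is assumed to be an open voice channel device; the water mark
 * level selectors (WM_HIGH/WM_LOW) and the DX_IOTT field used to attach
 * the stream buffer are assumptions to verify against the
 * Voice API Library Reference. */
#include <string.h>
#include <srllib.h>
#include <dxxxlib.h>

#define STREAM_BUF_SIZE  (100 * 1024)  /* step 1: circular stream buffer size   */
#define HIGH_WATER_MARK  (84 * 1024)   /* step 2: size minus 2 bulk queue bufs  */
#define LOW_WATER_MARK   (32 * 1024)   /* step 2: 4 x an 8 kbyte bulk queue buf */

int stream_play(int chdev, char *data, int datalen)
{
    DX_IOTT iott;
    DV_TPT  tpt;
    int     hBuffer;

    /* Step 3: create and initialize the circular stream buffer. */
    hBuffer = dx_OpenStreamBuffer(STREAM_BUF_SIZE);

    /* Step 4: set the water marks (level selectors assumed). */
    dx_SetWaterMark(hBuffer, WM_HIGH, HIGH_WATER_MARK);
    dx_SetWaterMark(hBuffer, WM_LOW, LOW_WATER_MARK);

    /* Optional: enable firmware underrun notification (TDX_UNDERRUN). */
    dx_setevtmsk(chdev, DM_UNDERRUN);

    /* Step 5: start an asynchronous play that reads from the stream buffer. */
    dx_clrtpt(&tpt, 1);
    tpt.tp_type   = IO_EOT;
    tpt.tp_termno = DX_MAXDTMF;
    tpt.tp_length = 1;
    tpt.tp_flags  = TF_MAXDTMF;

    memset(&iott, 0, sizeof(iott));
    iott.io_type    = IO_STREAM | IO_EOT;
    iott.io_fhandle = hBuffer;          /* stream buffer handle (assumed field) */
    dx_playiottdata(chdev, &iott, &tpt, NULL, EV_ASYNC);

    /* Step 6: put data into the buffer; flag the last piece with STREAM_EOD. */
    dx_PutStreamData(hBuffer, data, datalen, STREAM_EOD);

    /* Step 7: the application's SRL event loop handles TDX_LOWWATER
     * (refill the buffer), TDX_HIGHWATER (throttle the data source)
     * and TDX_PLAY (play complete). */

    /* Step 8: after TDX_PLAY has been received, release the buffer. */
    dx_CloseStreamBuffer(hBuffer);
    return 0;
}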
8.10.4 Streaming to Board Hints and Tips
Consider the following usage guidelines when implementing streaming to board in your
application:
• You can create as many circular stream buffers as needed on a channel; however, you are
limited by the amount of memory on the system. You can use more than one circular stream
buffer per play via the DX_IOTT structure. In this case, specify that the data ends in one buffer
using the STREAM_EOD flag so that the play can process the next DX_IOTT structure in the
chain.
• In general, the larger you define the circular stream buffer size, the better. Factors to take into
consideration include the average input file size, the amount of memory on your system, the
total number of channels in your system, and so on. Having an optimal circular stream buffer
size results in the high and low water marks being reached less often. In a well-tuned system,
the high and low water marks should rarely be reached.
• When adjusting circular stream buffer sizes, be aware that you must also adjust the high and
low water marks accordingly.
• Recommendation for the high water mark: base it on the following formula:
(size of the circular stream buffer) minus (two times the size of the bulk queue buffer)
For example, if the circular stream buffer is 100 kbytes and the bulk queue buffer size is
8 kbytes, set the high water mark to 84 kbytes. (The bulk queue buffer size is set through the
dx_setchxfercnt( ) function.) A sketch that computes the high and low water mark recommendations appears after this list.
• Recommendation for the low water mark:
– If the bulk queue buffer size is less than 8 kbytes, the low water mark should be four times
the bulk queue buffer size.
– If the bulk queue buffer size is greater than 8 kbytes and less than 16 kbytes, the low water
mark should be three times the bulk queue buffer size.
– If the bulk queue buffer size is greater than 16 kbytes, the low water mark should be two
times the bulk queue buffer size.
• When a TDX_LOWWATER event is received, continue putting data in the circular stream
buffer. Remember to mark the last piece of data with the STREAM_EOD flag.
• When a TDX_HIGHWATER event is received, stop putting data in the circular stream buffer.
If using a text-to-speech (TTS) engine, you will have to stop the engine from sending more
data. If you cannot control the output of the TTS engine, you will need to control the input to
the engine.
• It is recommended that you enable the TDX_UNDERRUN event to notify the application of
firmware underrun conditions on the board. Specify DM_UNDERRUN in dx_setevtmsk( ).
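The water mark recommendations above can be reduced to a small helper. The following sketch is illustrative only: the function name is not part of the voice API, and the handling of bulk queue buffer sizes of exactly 8 kbytes or 16 kbytes, which the guidelines leave open, is a choice made for this sketch.

/* Illustrative helper (not part of the voice API): compute suggested
 * high and low water marks from the guidelines in this section.
 * Sizes are in bytes.  Bulk queue buffer sizes of exactly 8 kbytes or
 * 16 kbytes are not covered by the guidelines; this sketch applies the
 * next rule down (for example, exactly 8 kbytes uses the three-times rule). */
#include <stddef.h>

#define KBYTES(n) ((size_t)(n) * 1024)

static void suggest_watermarks(size_t stream_buf_size,
                               size_t bulk_queue_buf_size,
                               size_t *high_water,
                               size_t *low_water)
{
    /* High water mark: stream buffer size minus two bulk queue buffers. */
    *high_water = stream_buf_size - 2 * bulk_queue_buf_size;

    /* Low water mark: a multiple of the bulk queue buffer size. */
    if (bulk_queue_buf_size < KBYTES(8))
        *low_water = 4 * bulk_queue_buf_size;
    else if (bulk_queue_buf_size < KBYTES(16))
        *low_water = 3 * bulk_queue_buf_size;
    else
        *low_water = 2 * bulk_queue_buf_size;
}

/* Example: a 100 kbyte stream buffer with an 8 kbyte bulk queue buffer
 * yields a high water mark of 84 kbytes; under this sketch's boundary
 * choice, the low water mark is 24 kbytes. */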
8.11 Pause and Resume Play
The voice library provides functionality for pausing a playback and resuming a playback. This
functionality is discussed in the following topics:
• Pause and Resume Play Overview
• Pause and Resume Play Functions
• Implementing Pause and Resume Play
• Pause and Resume Play Hints and Tips
8.11.1 Pause and Resume Play Overview
The pause and resume play functionality enables you to pause a play that is currently in progress
and later resume the same play. The play is resumed at the exact point it was stopped without loss
of data.
The pause and resume play functionality works using one of the following methods:
• using a pre-defined DTMF digit, set up similarly to speed and volume control in the
DX_SVCB data structure.
• programmatically using the dx_pause( ) and dx_resume( ) functions.
All voice encoding methods available in the voice library are supported for this feature. There are
no restrictions.
The pause and resume play feature is not supported on Springware boards.
8.11.2 Pause and Resume Play Functions
The following functions and data structure are used in the pause and resume play feature:
dx_pause( )
pauses a play currently in progress until a subsequent dx_resume( ) is issued
dx_resume( )
resumes the play that was paused using dx_pause( )
dx_setsvcond( )
sets an adjustment condition for the play (in this case, a DTMF digit that pauses or resumes the play)
DX_SVCB
data structure used by dx_setsvcond( ) to specify adjustment conditions for the play
Use these functions and this data structure in conjunction with play functions such as
dx_playiottdata( ).
8.11.3 Implementing Pause and Resume Play
Follow these steps to implement pause and resume play in your application:
Note: These steps do not represent every task that must be performed to create a working application but
are intended as general guidelines for implementing pause and resume play.
1. Decide whether to use DTMF digits to control the pause and resume play functionality. If so,
set up the condition in the DX_SVCB data structure and call dx_setsvcond( ).
2. Set up the DX_IOTT data structure for the play operation.
3. Set up the DV_TPT data structure to specify termination conditions for the play.
4. Perform the play operation on the channel; for example, use dx_playiottdata( ).
5. If you are not using DTMF digits (step 1), pause the play on the channel programmatically using dx_pause( ).
6. If you are not using DTMF digits (step 1), resume the paused play on the channel programmatically using dx_resume( ).
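The programmatic method of steps 4 through 6 can be sketched as follows. The sketch assumes the channel device and the DX_IOTT list have already been prepared (steps 2 and 3 in simplified form), and it assumes that dx_pause( ) and dx_resume( ) take only the channel device handle; confirm the exact prototypes, and the DX_SVCB setup for the DTMF method, in the Voice API Library Reference.

/* Pause and resume sketch for the programmatic method.  chdev is
 * assumed to be an open voice channel device and iottp to describe the
 * data to play; the single-argument prototypes of dx_pause( ) and
 * dx_resume( ) are assumptions to verify in the Voice API Library
 * Reference. */
#include <srllib.h>
#include <dxxxlib.h>

void play_with_pause(int chdev, DX_IOTT *iottp)
{
    DV_TPT tpt;

    /* Terminate the play on a single DTMF digit. */
    dx_clrtpt(&tpt, 1);
    tpt.tp_type   = IO_EOT;
    tpt.tp_termno = DX_MAXDTMF;
    tpt.tp_length = 1;
    tpt.tp_flags  = TF_MAXDTMF;

    /* Start the play asynchronously. */
    dx_playiottdata(chdev, iottp, &tpt, NULL, EV_ASYNC);

    /* ... later, when the application decides to pause ... */
    dx_pause(chdev);    /* the play stops without losing its position */

    /* ... and when it is ready to continue ... */
    dx_resume(chdev);   /* the play resumes from the exact pause point */
}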
8.11.4 Pause and Resume Play Hints and Tips
Consider the following hints and tips when implementing pause play and resume play in your
application:
• If a DTMF digit is set as a termination condition, play is terminated when this condition is met,
even if a play is currently paused. That is, the termination condition takes precedence over the
pause/resume condition.
For example, suppose you set the digit 2 as a termination condition on a play. If you press this
digit during the play, or while the play is paused, the play is terminated; the termination does
not wait for the paused play to resume. As another example, if you set 5 seconds as the
termination condition on a play, the play terminates after 5 seconds; the timer runs regardless
of whether the play is paused.
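To make the examples above concrete, one possible DV_TPT setup expressing both conditions (terminate on DTMF digit 2, or after 5 seconds) is sketched below. The digit mask define and the unit of the DX_MAXTIME length (assumed here to be 100 ms) are assumptions to verify against the DV_TPT description in the Voice API Library Reference.

/* Sketch: DV_TPT entries for the two example termination conditions.
 * DM_2 (the mask bit for digit "2") and the 100 ms unit assumed for
 * DX_MAXTIME should be checked in the Voice API Library Reference. */
#include <srllib.h>
#include <dxxxlib.h>

DV_TPT term[2];

void set_example_terminations(void)
{
    dx_clrtpt(term, 2);

    term[0].tp_type   = IO_CONT;      /* more conditions follow in the list */
    term[0].tp_termno = DX_DIGMASK;   /* terminate on a masked DTMF digit   */
    term[0].tp_length = DM_2;         /* digit "2" (define name assumed)    */

    term[1].tp_type   = IO_EOT;       /* last condition in the list         */
    term[1].tp_termno = DX_MAXTIME;   /* maximum function time              */
    term[1].tp_length = 50;           /* 5 seconds, assuming 100 ms units   */
}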