INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY
ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN
INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS
ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES
RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER
INTELLECTUAL PROPERTY RIGHT. Intel products are not intended for use in medical, life saving, or life sustaining applications.
Intel may make changes to specifications and product descriptions at any time, without notice.
This Voice API Programming Guide as well as the software described in it is furnished under license and may only be used or copied in accordance
with the terms of the license. The information in this manual is furnished for informational use only, is subject to change without notice, and should not
be construed as a commitment by Intel Corporation. Intel Corporation assumes no responsibility or liability for any errors or inaccuracies that may
appear in this document or any software that may be provided in association with this document.
Except as permitted by such license, no part of this document may be reproduced, stored in a retrieval system, or transmitted in any form or by any
means without express written consent of Intel Corporation.
This revision history summarizes the changes made in each published version of this document.
Document No.: 05-2377-002
Publication Date: June 2005
Description of Revisions:
Application Development Guidelines chapter: Added bullet about digits not always being cleared by dx_clrdigbuf( ) in Tone Detection Considerations section [PTR 33806].
Call Progress Analysis chapter: Added eight new SIT sequences that can be returned by ATDX_CRTNID( ) for DM3 boards in Types of Tones section. Revised values of TID_SIT_NC (Freq of first segment changed from 950/1001 to 950/1020) and TID_SIT_VC (Freq of first segment changed from 950/1001 to 950/1020) in table of Special Information Tone Sequences (DM3); also added four new SIT sequences to this table. Added note about SRL device mapper functions in Steps to Modify a Tone Definition on DM3 Boards section.
Recording and Playback chapter: Added Recording with the Voice Activity Detector section that describes new modes for dx_reciottdata( ).
Send and Receive FSK Data chapter: Updated Fixed-Line Short Message Service (SMS) section to indicate that fixed-line short message service (SMS) is supported on Springware boards. Updated Library Support on Springware Boards section to indicate that Springware boards in Linux support ADSI two-way FSK and SMS.
Cached Prompt Management chapter: Added sentence to second paragraph about flushing cached prompts in Overview of Cached Prompt Management section. Added second paragraph about flushing cached prompts in Cached Prompt Management Hints and Tips section.

Document No.: 05-2377-001
Publication Date: November 2004
Description of Revisions:
Initial version of document. Much of the information contained in this document was previously published in the Voice API for Linux Operating System Programming Guide (document number 05-1829-001) and the Voice API for Windows Operating Systems Programming Guide (document number 05-1831-002).
This document now supports both Linux and Windows operating systems. When information is specific to an operating system, it is noted.
About This Publication
The following topics provide information about this publication:
• Purpose
• Applicability
• Intended Audience
• How to Use This Publication
• Related Information
Purpose
This publication provides guidelines for building computer telephony applications on Windows* and Linux* operating systems using the Intel® voice API. Such applications include, but are not limited to, call routing, voice messaging, interactive voice response, and call center applications.
This publication is a companion guide to the Voice API Library Reference, which provides details on the functions and parameters in the voice library.
Applicability
This document version (05-2377-002) is published for Intel® Dialogic® System Release 6.1 for
Linux operating system.
This document may also be applicable to later Intel Dialogic system releases, including service
updates, on Linux or Windows. Check the Release Guide for your software release to determine
whether this document is supported.
This document is applicable to Intel Dialogic system releases only. It is not applicable to Intel NetStructure® Host Media Processing (HMP) software releases. A separate set of voice API documentation specific to HMP is provided. Check the Release Guide for your software release to determine what documents are provided with the release.
Intended Audience
This information is intended for:
• Distributors
• System Integrators
• Toolkit Developers
• Independent Software Vendors (ISVs)
• Value Added Resellers (VARs)
• Original Equipment Manufacturers (OEMs)
How to Use This Publication
This document assumes that you are familiar with the Windows or Linux operating systems and have prior experience with the C programming language. Use this document together with the
following: the Voice API Library Reference, the Standard Runtime Library API Programming Guide, and the Standard Runtime Library API Library Reference.
The information in this guide is organized as follows:
• Chapter 1, “Product Description” introduces the key features of the voice library and provides
a brief description of each feature.
• Chapter 2, “Programming Models” provides a brief overview of supported programming
models.
• Chapter 3, “Device Handling” discusses topics related to devices such as device naming
concepts, how to open and close devices, and how to discover whether a device is Springware
or DM3.
• Chapter 4, “Event Handling” provides information on functions used to handle events.
• Chapter 5, “Error Handling” provides information on handling errors in your application.
• Chapter 6, “Application Development Guidelines” provides programming guidelines and
techniques for developing an application using the voice library. This chapter also discusses
fixed and flexible routing configurations.
• Chapter 7, “Call Progress Analysis” describes the components of call progress analysis in
detail. This chapter also covers differences between Basic Call Progress Analysis and
PerfectCall Call Progress Analysis.
• Chapter 8, “Recording and Playback” discusses playback and recording features, such as
encoding algorithms, play and record API functions, transaction record, and silence
compressed record.
• Chapter 9, “Speed and Volume Control” explains how to control speed and volume of
playback recordings through API functions and data structures.
• Chapter 10, “Send and Receive FSK Data” describes the two-way frequency shift keying
(FSK) feature, the Analog Display Services Interface (ADSI), and API functions for use with
this feature.
• Chapter 11, “Caller ID” describes the caller ID feature, supported formats, and how to enable
it.
• Chapter 12, “Cached Prompt Management” provides information on cached prompts and how
to use cached prompt management in your application.
• Chapter 13, “Global Tone Detection and Generation, and Cadenced Tone Generation”
describes these tone detection and generation features in detail.
• Chapter 14, “Global Dial Pulse Detection” discusses the Global DPD feature, the API
functions for use with this feature, programming guidelines, and example code.
• Chapter 15, “R2/MF Signaling” describes the R2/MF signaling protocol, the API functions for
use with this feature, and programming guidelines.
• Chapter 16 describes functionality available on products that include a license for the Syntellect Technology Corporation (STC) patent portfolio.
• Chapter 17, “Building Applications” discusses compiling and linking requirements such as
include files and library files.
Related Information
See the following for more information:
• For details on all voice functions, parameters and data structures in the voice library, see the
Voice API Library Reference.
• For details on the Standard Runtime Library (SRL), supported programming models, and
programming guidelines for building all applications, see the Standard Runtime Library API
Programming Guide. The SRL is a device-independent library that consists of event
management functions and standard attribute functions.
• For details on all functions and data structures in the Standard Runtime Library (SRL) library,
see the Standard Runtime Library API Library Reference.
• For information on the system release, system requirements, software and hardware features,
supported hardware, and release documentation, see the Release Guide for the system release
you are using.
• For details on compatibility issues, restrictions and limitations, known problems, and late-
breaking updates or corrections to the release documentation, see the Release Update.
Be sure to check the Release Update for the system release you are using for any updates or
corrections to this publication. Release Updates are available on the Telecom Support
Resources website at http://resource.intel.com/telecom/support/releases/.
• For details on installing the system software, see the System Release Installation Guide.
• For guidelines on building applications using Global Call software (a common signaling
interface for network-enabled applications, regardless of the signaling protocol needed to
connect to the local telephone network), see the Global Call API Programming Guide.
• For details on all functions and data structures in the Global Call library, see the Global Call
API Library Reference.
• For details on configuration files (including FCD/PCD files) and instructions for configuring
products, see the Configuration Guide for your product or product family.
1. Product Description
This chapter provides an overview of key voice library features and capabilities, including the R4 API, call progress analysis, tone generation and detection, dial pulse detection, play and record features, send and receive FSK data, caller ID, R2/MF signaling, and TDM bus routing.
The voice software provides a high-level interface to Intel telecom media processing boards and is
a building block for creating computer telephony applications. It offers a comprehensive set of
features such as dual-tone multifrequency (DTMF) detection, tone signaling, call progress analysis,
playing and recording that supports a number of encoding methods, and much more.
The voice software consists of a C language library of functions, device drivers, and firmware.
The voice library is well integrated with other technology libraries provided by Intel such as fax,
conferencing, and continuous speech processing. This architecture enables you to add new
capability to your voice application over time.
For a list of voice features by product, see the Release Guide for your system release.
1.2 R4 API
The term R4 API (“System Software Release 4 Application Programming Interface”) describes the
direct interface used for creating computer telephony application programs. The R4 API is a rich
set of proprietary APIs for building computer telephony applications on Intel telecom products.
These APIs encompass technologies that include voice, conferencing, fax, and speech. This
document describes the voice API.
In addition to original Springware products (also known as earlier-generation products), the R4
API supports a new generation of hardware products that are based on the DM3 mediastream
architecture. Feature differences between these two categories of products are noted.
DM3 boards is a collective name used in this document to refer to products that are based on the DM3 mediastream architecture. DM3 board names typically are prefaced with "DM," such as the Intel NetStructure® DM/V2400A. Springware boards refer to boards based on earlier-generation architecture. Springware boards typically are prefaced with "D," such as the Intel® Dialogic® D/240JCT-T1.
In this document, the term voice API is used to refer to the R4 voice API.
1.3 Call Progress Analysis
Call progress analysis monitors the progress of an outbound call after it is dialed into the Public
Switched Telephone Network (PSTN).
There are two forms of call progress analysis: basic and PerfectCall. PerfectCall call progress
analysis uses an improved method of signal identification and can detect fax machines and
answering machines. Basic call progress analysis provides backward compatibility for older
applications written before PerfectCall call progress analysis became available.
Note: PerfectCall call progress analysis was formerly called enhanced call analysis.
See Chapter 7, “Call Progress Analysis” for detailed information about this feature.
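To make this concrete, the following minimal sketch shows one way an application might dial with call progress analysis enabled and examine the result. The channel handle, dial string, and use of default DX_CAP parameters are illustrative assumptions, not recommendations from this guide.

    #include <srllib.h>
    #include <dxxxlib.h>

    /* Dial a number with call progress analysis and return the outcome.
     * chdev is assumed to be an open voice channel handle; the dial string
     * is a placeholder. */
    long dial_with_cpa(int chdev)
    {
        DX_CAP cap;

        dx_clrcap(&cap);   /* fill the DX_CAP structure with default parameters */

        /* DX_CALLP requests call progress analysis; EV_SYNC blocks until done */
        if (dx_dial(chdev, "5551234", &cap, DX_CALLP | EV_SYNC) == -1) {
            return -1;     /* inspect ATDV_LASTERR(chdev) for the reason */
        }
        return ATDX_CPTERM(chdev);   /* e.g. CR_CNCT, CR_BUSY, CR_NOANS */
    }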
1.4 Tone Generation and Detection Features
In addition to DTMF and MF tone detection and generation, the following signaling features are
provided by the voice library:
• Global Tone Detection (GTD)
• Global Tone Generation (GTG)
• Cadenced Tone Generation
1.4.1 Global Tone Detection (GTD)
Global tone detection allows you to define single- or dual-frequency tones for detection on a
channel-by-channel basis. Global tone detection and GTD tones are also known as user-defined tone detection and user-defined tones.
Use global tone detection to detect single- or dual-frequency tones outside the standard DTMF
range of 0-9, a-d, *, and #. The characteristics of a tone can be defined and tone detection can be
enabled using GTD functions and data structures provided in the voice library.
See Chapter 13, “Global Tone Detection and Generation, and Cadenced Tone Generation” for
detailed information about global tone detection.
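As a hedged illustration only (the tone ID, frequencies, deviations, and the TN_LEADING edge setting are assumed example values), the sketch below defines a dual-frequency tone and adds it to a channel using GTD functions of the kind described in Chapter 13.

    #include <srllib.h>
    #include <dxxxlib.h>

    #define MY_TONE_ID  101   /* arbitrary user-defined tone ID */

    /* Define a dual-frequency tone (440 Hz + 480 Hz, each +/- 30 Hz) and add
     * it to a channel. TN_LEADING reports the tone on its leading edge; no
     * DTMF-style digit is associated with the tone in this example. */
    int add_custom_tone(int chdev)
    {
        dx_blddt(MY_TONE_ID, 440, 30, 480, 30, TN_LEADING);

        if (dx_addtone(chdev, 0, 0) == -1) {
            return -1;   /* see ATDV_LASTERR(chdev) */
        }
        return 0;
    }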
1.4.2 Global Tone Generation (GTG)
Global tone generation allows you to define a single- or dual-frequency tone in a tone generation
template and to play the tone on a specified channel.
See Chapter 13, “Global Tone Detection and Generation, and Cadenced Tone Generation” for
detailed information about global tone generation.
1.4.3 Cadenced Tone Generation
Cadenced tone generation is an enhancement to global tone generation. It allows you to generate a
tone with up to 4 single- or dual-tone elements, each with its own on/off duration, which creates the
signal pattern or cadence. You can define your own custom cadenced tone or take advantage of the
built-in set of standard PBX call progress signals, such as dial tone, ringback, and busy.
See Chapter 13, “Global Tone Detection and Generation, and Cadenced Tone Generation” for
detailed information about cadenced tone generation.
1.5 Dial Pulse Detection
Dial pulse detection (DPD) allows applications to detect dial pulses from rotary or pulse phones by
detecting the audible clicks produced when a number is dialed, and to use these clicks as if they
were DTMF digits. Global dial pulse detection, called global DPD, is a software-based dial pulse
detection method that can use country-customized parameters for extremely accurate performance.
See Chapter 14, “Global Dial Pulse Detection” for more information about this feature.
1.6 Play and Record Features
The following play and record features are provided by the voice library:
• Play and Record Functions
• Speed and Volume Control
• Transaction Record
• Silence Compressed Record
• Streaming to Board
• Echo Cancellation Resource
1.6.1 Play and Record Functions
The voice library includes several functions and data structures for recording and playing audio
data. These allow you to digitize and store human voice; then retrieve, convert, and play this digital
information. In addition, you can pause a play currently in progress and resume that same play.
For more information about play and record features, see Chapter 8, “Recording and Playback”.
This chapter also includes information about voice encoding methods supported; see Section 8.5,
“Voice Encoding Methods”, on page 89. For detailed information about play and record functions,
see the Voice API Library Reference.
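The following sketch illustrates the general shape of the file-based record and play calls; the file name, termination settings, and record mode shown are illustrative assumptions rather than values taken from this guide.

    #include <srllib.h>
    #include <dxxxlib.h>

    /* Record up to 30 seconds of audio to a file, then play it back.
     * chdev is assumed to be an open voice channel handle; "message.vox"
     * is a placeholder file name. */
    int record_and_play(int chdev)
    {
        DV_TPT tpt;

        dx_clrtpt(&tpt, 1);
        tpt.tp_type   = IO_EOT;       /* last (only) entry in the TPT */
        tpt.tp_termno = DX_MAXTIME;   /* terminate on maximum elapsed time */
        tpt.tp_length = 300;          /* 300 units of 100 ms = 30 seconds */
        tpt.tp_flags  = TF_MAXTIME;

        /* RM_TONE plays an alert tone before recording begins */
        if (dx_recf(chdev, "message.vox", &tpt, RM_TONE | EV_SYNC) == -1) {
            return -1;
        }
        if (dx_playf(chdev, "message.vox", &tpt, EV_SYNC) == -1) {
            return -1;
        }
        return 0;
    }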
1.6.2 Speed and Volume Control
The speed and volume control feature allows you to control the speed and volume of a message
being played on a channel, for example, by entering a DTMF tone.
See Chapter 9, “Speed and Volume Control” for more information about this feature.
1.6.3 Transaction Record
The transaction record feature allows voice activity on two channels to be summed and stored in a
single file, or in a combination of files, devices, and memory. This feature is useful in call center
applications where it is necessary to archive a verbal transaction or record a live conversation.
See Chapter 8, “Recording and Playback” for more information on the transaction record feature.
1.6.4 Silence Compressed Record
The silence compressed record (SCR) feature enables recording with silent pauses eliminated. This
results in smaller recorded files with no loss of intelligibility.
When the audio level is at or falls below the silence threshold for a minimum duration of time,
silence compressed record begins. If a short burst of noise (glitch) is detected, the compression
does not end unless the glitch is longer than a specified period of time.
See Chapter 8, “Recording and Playback” for more information.
1.6.5 Streaming to Board
The streaming to board feature allows you to stream data to a network interface in real time. Unlike
the standard voice play feature (store and forward), data can be streamed in real time with little
delay as the amount of initial data required to start the stream is configurable. The streaming to
board feature is essential for applications such as text-to-speech, distributed prompt servers, and IP
gateways.
For more information about this feature, see Chapter 8, “Recording and Playback”.
1.6.6 Echo Cancellation Resource
The echo cancellation resource (ECR) feature enables a voice channel to dynamically perform echo
cancellation on any external TDM bus time slot signal.
Note: The ECR feature has been replaced with continuous speech processing (CSP). Although the CSP
API is related to the voice API, it is provided as a separate product. The continuous speech
processing software is a significant enhancement to ECR. The continuous speech processing
library provides many features such as high-performance echo cancellation, voice energy detection,
barge-in, voice event signaling, pre-speech buffering, full-duplex operation and more. For more
information on this API, see the Continuous Speech Processing documentation.
See Chapter 8, “Recording and Playback” for more information about the ECR feature.
1.7 Send and Receive FSK Data
The send and receive frequency shift keying (FSK) data interface is used for Analog Display
Services Interface (ADSI) and fixed-line short message service, also called small message service,
or SMS. Frequency shift keying is a frequency modulation technique used to send digital data over voice-band telephone lines. ADSI allows information to be transmitted for display on a display-based telephone connected to an analog loop start line, and to store and forward SMS messages in the Public Switched Telephone Network (PSTN). The telephone must be a true ADSI-compliant or fixed-line SMS-compliant device.
See Chapter 10, “Send and Receive FSK Data” for more information on ADSI, FSK, and SMS.
1.8 Caller ID
An application can enable the caller ID feature on specific channels to process caller ID
information as it is received with an incoming call. Caller ID information can include the calling
party’s directory number (DN), the date and time of the call, and the calling party’s subscriber
name.
See Chapter 11, “Caller ID” for more information about this feature.
1.9 R2/MF Signaling
R2/MF signaling is an international signaling system that is used in Europe and Asia to permit the
transmission of numerical and other information relating to the called and calling subscribers’
lines.
R2/MF signaling is typically accomplished through the Global Call API. For more information, see
the Global Call documentation set. Chapter 15, “R2/MF Signaling” is provided for reference only.
1.10 TDM Bus Routing
A time division multiplexing (TDM) bus is a technique for transmitting a number of separate
digitized signals simultaneously over a communication medium. TDM bus includes the CT Bus
and SCbus.
The CT Bus is an implementation of the computer telephony bus standard developed by the
Enterprise Computer Telephony Forum (ECTF) and accepted industry-wide. The H.100 hardware
specification covers CT Bus implementation using the PCI form factor. The H.110 hardware
specification covers CT Bus implementation using the CompactPCI (cPCI) form factor. The CT
Bus has 4096 bi-directional time slots.
The SCbus or signal computing bus connects Signal Computing System Architecture (SCSA)
resources. The SCbus has 1024 bi-directional time slots.
A TDM bus connects voice, telephone network interface, fax, and other technology resource
boards together. TDM bus boards are treated as board devices with on-board voice and/or
telephone network interface devices that are identified by a board and channel (time slot for digital
network channels) designation, such as a voice channel, analog channel, or digital channel.
For information on TDM bus routing functions, see the Voice API Library Reference.
Note: When you see a reference to the SCbus or SCbus routing, the information also applies to the CT
Bus on DM3 products. That is, the physical interboard connection can be either SCbus or CT Bus.
The SCbus protocol is used and the TDM routing API (previously called the SCbus routing API)
applies to all the boards regardless of whether they use an SCbus or CT Bus physical interboard
connection.
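As an assumption-based sketch of the TDM routing calls (handles are presumed already open and error details are omitted), an application might connect one voice channel to listen to another as follows:

    #include <srllib.h>
    #include <dxxxlib.h>

    /* Route audio transmitted by one voice channel to another over the
     * TDM bus. xmitdev and listendev are assumed to be open voice channel
     * handles. */
    int route_voice(int xmitdev, int listendev)
    {
        SC_TSINFO tsinfo;
        long      slot;

        tsinfo.sc_numts    = 1;       /* one time slot */
        tsinfo.sc_tsarrayp = &slot;   /* receives the transmit time slot number */

        if (dx_getxmitslot(xmitdev, &tsinfo) == -1) {
            return -1;
        }
        if (dx_listen(listendev, &tsinfo) == -1) {
            return -1;
        }
        return 0;
    }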
2. Programming Models
This chapter briefly discusses the Standard Runtime Library and the supported asynchronous and synchronous programming models.
The Standard Runtime Library (SRL) provides a set of common system functions that are device independent and are applicable to all Intel® telecom devices. The SRL consists of a data structure, event management functions, device management functions (called standard attribute functions), and device mapper functions. You can use the SRL to simplify application development, such as by writing common event handlers to be used by all devices.
When developing voice processing applications, refer to the Standard Runtime Library documentation in tandem with the voice library documentation. For more information on the Standard Runtime Library, see the Standard Runtime Library API Library Reference and Standard Runtime Library API Programming Guide.
2.2 Asynchronous Programming Models
Asynchronous programming enables a single program to control multiple voice channels within a
single process. This allows the development of complex applications where multiple tasks must be
coordinated simultaneously.
The asynchronous programming model uses functions that do not block thread execution; that is,
the function continues processing under the hood. A Standard Runtime Library (SRL) event later
indicates function completion.
Generally, if you are building applications that use any significant density, you should use the
asynchronous programming model to develop field solutions.
For complete information on asynchronous programming models, see the Standard Runtime Library API Programming Guide.
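The following sketch shows the asynchronous model in miniature; the device name, prompt file, and NULL termination table are placeholder assumptions.

    #include <srllib.h>
    #include <dxxxlib.h>
    #include <stdio.h>

    /* Start an asynchronous play and wait for its termination event.
     * "dxxxB1C1" and "prompt.vox" are placeholder names; the NULL TPT
     * assumes no termination conditions beyond end of file are needed. */
    int play_async(void)
    {
        int chdev = dx_open("dxxxB1C1", 0);

        if (chdev == -1) {
            return -1;
        }

        /* EV_ASYNC returns immediately; completion arrives as an SRL event */
        if (dx_playf(chdev, "prompt.vox", NULL, EV_ASYNC) == -1) {
            dx_close(chdev);
            return -1;
        }

        /* Wait indefinitely for the next event and check what it was */
        if (sr_waitevt(-1) != -1) {
            if (sr_getevttype() == TDX_PLAY && sr_getevtdev() == chdev) {
                printf("play completed on channel handle %d\n", chdev);
            }
        }
        return dx_close(chdev);
    }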
2.3 Synchronous Programming Model
The synchronous programming model uses functions that block application execution until the
function completes. This model requires that each channel be controlled from a separate process.
This allows you to assign distinct applications to different channels dynamically in real time.
Synchronous programming models allow you to scale an application by simply instantiating more
threads or processes (one per channel). This programming model may be easy to code and
manage but it relies on the system to manage scalability. Applying the synchronous programming
model can consume large amounts of system overhead, which reduces the achievable densities and
negatively impacts timely servicing of both hardware and software interrupts. Using this model, a
developer can only solve system performance issues by adding memory or increasing CPU speed
or both. The synchronous programming models may be useful for testing or very low-density
solutions.
For complete information on synchronous programming models, see the Standard Runtime Library API Programming Guide.
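As a hedged sketch of the synchronous style, the function below blocks while collecting digits on a single channel; the termination values shown are illustrative only.

    #include <srllib.h>
    #include <dxxxlib.h>
    #include <string.h>

    /* Collect up to four DTMF digits, blocking until they arrive or another
     * termination condition is met. chdev is assumed to be an open voice
     * channel handle. */
    int get_four_digits(int chdev, char *buf, int buflen)
    {
        DV_TPT   tpt;
        DV_DIGIT digits;

        dx_clrtpt(&tpt, 1);
        tpt.tp_type   = IO_EOT;        /* only entry in the termination table */
        tpt.tp_termno = DX_MAXDTMF;    /* terminate on a maximum digit count */
        tpt.tp_length = 4;             /* four digits */
        tpt.tp_flags  = TF_MAXDTMF;

        /* EV_SYNC blocks this thread until a termination condition is met */
        if (dx_getdig(chdev, &tpt, &digits, EV_SYNC) == -1) {
            return -1;                 /* see ATDV_LASTERR(chdev) */
        }

        /* dg_value holds the received digits as a null-terminated string */
        strncpy(buf, digits.dg_value, buflen - 1);
        buf[buflen - 1] = '\0';
        return 0;
    }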
3. Device Handling
This chapter describes the concept of a voice device and how voice devices are named and used.
The following concepts are key to understanding devices and device handling:
device
A device is a computer component controlled through a software device driver. A resource board (such as a voice, fax, or conferencing resource board) or a network interface board contains one or more logical board devices. Each channel or time slot on the board is also considered a device.
device channel
A device channel refers to a data path that processes one incoming or outgoing call at a time
(equivalent to the terminal equipment terminating a phone line). The first two numbers in the
product naming scheme identify the number of device channels for a given product. For
example, there are 24 voice device channels on a D/240JCT-T1 board, 30 on a D/300JCT-E1.
device name
A device name is a literal reference to a device, used to gain access to the device via an
xx_open( ) function, where “xx” is the prefix defining the device to be opened. For example,
“dx” is the prefix for voice device and “fx” for fax device.
device handle
A device handle is a numerical reference to a device, obtained when a device is opened using
xx_open( ), where “xx” is the prefix defining the device to be opened. The device handle is
used for all operations on that device.
physical and virtual boards
The API functions distinguish between physical boards and virtual boards. The device driver
views a single physical voice board with more than four channels as multiple emulated D/4x
boards. These emulated boards are called virtual boards. For example, a D/120JCT-LS with 12
channels of voice processing contains three virtual boards. A DM/V480A-2T1 board with 48
channels of voice processing and two T1 trunk lines contains 12 virtual voice boards and two
virtual network interface boards.
3.2 Voice Device Names
The software assigns a device name to each device or each component on a board. A voice device is
named dxxxBn, where n is the device number assigned in sequential order down the list of sorted
voice boards. A device corresponds to a grouping of two or four voice channels.
For example, a D/240JCT-T1 board employs 24 voice channels; the software therefore divides the
D/240JCT into six voice board devices, each device consisting of four channels. Examples of board
device names for voice boards are dxxxB1 and dxxxB2.
A device name can be appended with a channel or component identifier. A voice channel device is
named dxxxBnCy, where y corresponds to one of the voice channels. Examples of channel device
names for voice boards are dxxxB1C1 and dxxxB1C2.
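For illustration, an application passes such a name to dx_open( ) to obtain a channel handle; the device name below is an example only and would normally come from configuration data or the SRL device mapper functions.

    #include <srllib.h>
    #include <dxxxlib.h>
    #include <stdio.h>

    /* Open the first channel on the first virtual voice board by name.
     * "dxxxB1C1" is an example device name only. */
    int open_first_channel(void)
    {
        int chdev = dx_open("dxxxB1C1", 0);

        if (chdev == -1) {
            fprintf(stderr, "dx_open failed\n");
            return -1;
        }
        return chdev;   /* use this handle in subsequent voice API calls */
    }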
A physical board device handle is a numerical reference to a physical board. A physical board
device handle is a concept introduced in System Release 6.0. Previously there was no way to
identify a physical board but only the virtual boards that make up the physical board. Having a
physical board device handle enables API functions to act on all devices on the physical board. The
physical board device handle is named brdBn, where n is the device number. As an example, the
physical board device handle is used in cached prompt management.
Use the Standard Runtime Library device mapper functions to retrieve information on all devices in
a system, including a list of physical boards, virtual boards on a physical board, and subdevices on
a virtual board.
For complete information on device handling, see the Standard Runtime Library API Programming Guide.
4. Event Handling
This chapter provides information on the functions used to retrieve and handle events.
An event indicates that a specific activity has occurred on a channel. The voice driver reports
channel activity to the application program in the form of events, which allows the program to
identify and respond to a specific occurrence on a channel. Events provide feedback on the
progress and completion of functions and indicate the occurrence of other channel activities. Voice
library events are defined in the dxxxlib.h header file.
For a list of events that may be returned by the voice software, see the Voice API Library Reference.
4.2 Event Management Functions
Event management functions are used to retrieve and handle events being sent to the application from the firmware. These functions are contained in the Standard Runtime Library (SRL) and defined in srllib.h. The SRL provides a set of common system functions that are device independent and are applicable to all Intel® telecom devices. For more information on event management and event handling, see the Standard Runtime Library API Programming Guide.
Event management functions include:
• sr_enbhdlr( )
• sr_dishdlr( )
• sr_getevtdev( )
• sr_getevttype( )
• sr_getevtlen( )
• sr_getevtdatap( )
For details on SRL functions, see the Standard Runtime Library API Library Reference.
The event management functions retrieve and handle voice device termination events for functions
that run in asynchronous mode, such as dx_dial( ) and dx_play( ). For complete function reference
information, see the Voice API Library Reference.
The event management functions applicable to voice boards are listed in the following tables. Table 1 lists values that are required by event management functions. Table 2 lists values that are returned by event management functions used with voice devices.
Table 1. Voice Device Inputs for Event Management Functions

Event Management Function              Voice Device Input   Valid Value      Related Voice Functions
sr_enbhdlr( ) (enable event handler)   evt_type             TDX_PLAY         dx_play( )
                                                            TDX_PLAYTONE     dx_playtone( )
                                                            TDX_RECORD       dx_rec( )
                                                            TDX_GETDIG       dx_getdig( )
                                                            TDX_DIAL         dx_dial( )
                                                            TDX_CALLP        dx_dial( )
                                                            TDX_SETHOOK      dx_sethook( )
                                                            TDX_WINK         dx_wink( )
                                                            TDX_ERROR        All asynchronous functions
sr_dishdlr( ) (disable event handler)  evt_type             As above         As above
Table 2. Voice Device Returns from Event Management Functions

Event Management Function                     Return Description   Returned Value                Related Voice Functions
sr_getevtdev( ) (get device handle)           device               voice device handle
sr_getevttype( ) (get event type)             event type           TDX_PLAY                      dx_play( )
                                                                   TDX_PLAYTONE                  dx_playtone( )
                                                                   TDX_RECORD                    dx_rec( )
                                                                   TDX_GETDIG                    dx_getdig( )
                                                                   TDX_DIAL                      dx_dial( )
                                                                   TDX_CALLP                     dx_dial( )
                                                                   TDX_CST                       dx_setevtmsk( )
                                                                   TDX_SETHOOK                   dx_sethook( )
                                                                   TDX_WINK                      dx_wink( )
                                                                   TDX_ERROR                     All asynchronous functions
sr_getevtlen( ) (get event data length)       event length         sizeof (DX_CST)
sr_getevtdatap( ) (get pointer to event data) event data           pointer to DX_CST structure
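The following sketch, offered as an illustrative assumption rather than a prescribed pattern, polls for the next event and inspects it with the functions listed in Table 2; the timeout value is arbitrary.

    #include <srllib.h>
    #include <dxxxlib.h>
    #include <stdio.h>

    /* Wait for the next SRL event and report which device it belongs to and
     * what type it is. The 10-second timeout is an arbitrary example value. */
    void handle_next_event(void)
    {
        long     dev, type;
        DX_CST  *cstp;

        if (sr_waitevt(10000) == -1) {     /* timeout expressed in milliseconds */
            return;                        /* timed out or failed */
        }

        dev  = sr_getevtdev();             /* device that generated the event */
        type = sr_getevttype();            /* e.g. TDX_PLAY, TDX_RECORD */

        if (type == TDX_CST) {
            /* for call status transition events the data is a DX_CST structure */
            cstp = (DX_CST *) sr_getevtdatap();
            printf("CST event %d on device %ld\n", (int) cstp->cst_event, dev);
        } else {
            printf("event type 0x%lx on device %ld\n", type, dev);
        }
    }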
5. Error Handling
This chapter discusses how to handle errors that can occur when running an application.
All voice library functions return a value to indicate success or failure of the function. A return value of zero or a positive number indicates success. A return value of -1 indicates failure.
If a voice library function fails, call the standard attribute functions ATDV_LASTERR( ) and
ATDV_ERRMSGP( ) to determine the reason for failure. For more information on these
functions, see the Standard Runtime Library API Library Reference.
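For example (the play call, file name, and NULL termination table below are placeholder assumptions), a typical failure check looks like this:

    #include <srllib.h>
    #include <dxxxlib.h>
    #include <stdio.h>

    /* Report the reason a voice library call failed on the given channel.
     * chdev is assumed to be an open voice channel handle. */
    void play_with_error_check(int chdev)
    {
        if (dx_playf(chdev, "prompt.vox", NULL, EV_SYNC) == -1) {
            printf("dx_playf( ) failed: error %ld (%s)\n",
                   ATDV_LASTERR(chdev), ATDV_ERRMSGP(chdev));
        }
    }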
If an extended attribute function fails, two types of errors can be generated. An extended attribute
function that returns a pointer will produce a pointer to the ASCIIZ string “Unknown device” if it
fails. An extended attribute function that does not return a pointer will produce a value of
AT_FAILURE if it fails. Extended attribute functions for the voice library are prefaced with
“ATDX_”.
Notes: 1. The dx_open( ) and dx_close( ) functions are exceptions to the above error handling rules. On
Linux, if these functions fail, the return code is -1, and the specific error is found in the errno
variable contained in errno.h. On Windows, if these functions fail, the return code is -1. Use
dx_fileerrno( ) to obtain the system error value.
2. If ATDV_LASTERR( ) returns the EDX_SYSTEM error code, an operating system error has
occurred. On Linux, check the global variable errno contained in errno.h. On Windows, use
dx_fileerrno( ) to obtain the system error value.
For a list of errors that can be returned by a voice library function, see the Voice API Library
Reference. You can also look up the error codes in the dxxxlib.h file.