MathWorks Parallel Computing Toolbox Release Notes

How to Contact The MathWorks

Web: www.mathworks.com
Newsgroup: comp.soft-sys.matlab
Technical Support: www.mathworks.com/contact_TS.html

Bug reports: bugs@mathworks.com
Documentation error reports: doc@mathworks.com
Order status, license renewals, passcodes: service@mathworks.com
Sales, pricing, and general information: info@mathworks.com

508-647-7000 (Phone)
508-647-7001 (Fax)

The MathWorks, Inc.
3 Apple Hill Drive
Natick, MA 01760-2098
For contact information about worldwide offices, see the MathWorks Web site.
Parallel Computing Toolbox™ Release Notes
© COPYRIGHT 2006–2010 by The MathWorks, Inc.
The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc.
FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through the federal government of the United States. By accepting delivery of the Program or Documentation, the government hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014. Accordingly, the terms and conditions of this Agreement and only those rights specified in this Agreement, shall pertain to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and Documentation by the federal government (or other entity acquiring for or through the federal government) and shall supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is inconsistent in any respect with federal procurement law, the government agrees to return the Program and Documentation, unused, to The MathWorks, Inc.
Trademarks
MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See
www.mathworks.com/trademarks for a list of additional trademarks. Other product or brand
names may be trademarks or registered trademarks of their respective holders.
Patents
The MathWorks products are protected by one or more U.S. patents. Please see
www.mathworks.com/patents for more information.
Contents

Summary by Version
Version 4.3 (R2010a) Parallel Computing Toolbox Software
Version 4.2 (R2009b) Parallel Computing Toolbox Software
Version 4.1 (R2009a) Parallel Computing Toolbox Software
Version 4.0 (R2008b) Parallel Computing Toolbox Software
Version 3.3 (R2008a) Parallel Computing Toolbox Software
Version 3.2 (R2007b) Distributed Computing Toolbox Software
Version 3.1 (R2007a) Distributed Computing Toolbox Software
Version 3.0 (R2006b) Distributed Computing Toolbox Software
Compatibility Summary for Parallel Computing Toolbox Software
Summary by Version

This table provides quick access to what is new in each version. For clarification, see "Using Release Notes" below.

Version (Release) | New Features and Changes | Version Compatibility Considerations | Fixed Bugs and Known Problems | Related Documentation at Web Site
Latest Version V4.3 (R2010a) | Yes, Details | Yes, Summary | Bug Reports, includes fixes | Printable Release Notes (PDF); current product documentation
V4.2 (R2009b) | Yes, Details | Yes, Summary | Bug Reports, includes fixes | No
V4.1 (R2009a) | Yes, Details | Yes, Summary | Bug Reports, includes fixes | No
V4.0 (R2008b) | Yes, Details | Yes, Summary | Bug Reports, includes fixes | No
V3.3 (R2008a) | Yes, Details | Yes, Summary | Bug Reports, includes fixes | No
V3.2 (R2007b) | Yes, Details | Yes, Summary | Bug Reports, includes fixes | No
V3.1 (R2007a) | Yes, Details | Yes, Summary | Bug Reports, includes fixes | No
V3.0 (R2006b) | Yes, Details | Yes, Summary | Bug Reports, includes fixes | No
Using Release Notes
Use release notes when upgrading to a newer version to learn about:
• New features
• Changes
• Potential impact on your existing files and practices
Review the release notes for other MathWorks products required for this product (for example, MATLAB or Simulink). Determine if enhancements, bugs, or compatibility considerations in other products impact you.
If you are upgrading from a software version other than the most recent one, review the current release notes and all interim versions. For example, when you upgrade from V1.0 to V1.2, review the release notes for V1.1 and V1.2.
What Is in the Release Notes
New Features and Changes
• New functionality
• Changes to existing functionality
Version Compatibility Considerations
When a new feature or change introduces a reported incompatibility between versions, the Compatibility Considerations subsection explains the impact.
Compatibility issues reported after the product release appear under Bug Reports at The MathWorks™ Web site. Bug fixes can sometimes result in incompatibilities, so review the fixed bugs in Bug Reports for any compatibility impact.
Fixed Bugs and Known Problems
The MathWorks offers a user-searchable Bug Reports database so you can view Bug Reports. The development team updates this database at release time and as more information becomes available. Bug Reports include provisions for any known workarounds or file replacements. Information is available for bugs existing in or fixed in Release 14SP2 or later. Information is not available for all bugs in earlier releases.
Access Bug Reports using your MathWorks Account.
Version 4.3 (R2010a) Parallel Computing Toolbox Software
This table summarizes what is new in Version 4.3 (R2010a):

New Features and Changes: Yes; details below.
Version Compatibility Considerations: Yes; details labeled as Compatibility Considerations below. See also the Compatibility Summary.
Fixed Bugs and Known Problems: Bug Reports, includes fixes.
Related Documentation at Web Site: Printable Release Notes (PDF); current product documentation.

New features and changes introduced in this version are:

• "New Save and Load Abilities for Distributed Arrays"
• "Enhanced Functions for Distributed Arrays"
• "Importing Configurations Programmatically"
• "Enhanced 2-D Block-Cyclic Array Distribution"
• "New Remote Startup of mdce Process"
• "Obtaining mdce Process Version"
• "Demo Updates"
• "taskFinish File for MATLAB Pool"
• "Upgrade Parallel Computing Products Together"
New Save and Load Abilities for Distributed Arrays
You now have the ability to save distributed arrays from the client to a single MAT-file. Subsequently, in the client you can load a distributed array from that file and have it automatically distributed to the MATLAB pool workers. The pool size and distribution scheme of the array do not have to be the same when you load the array as they were when you saved it.
You also can now load data directly into distributed arrays, even if the originally saved arrays were not distributed.
For more information, see the dsave and dload reference pages.
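As a minimal sketch of this workflow, using the R2010a command forms of dsave and dload (the file name, variable name, and pool sizes are illustrative):

```matlab
% Save a distributed array from the client to a single MAT-file.
matlabpool open 4
D = distributed.rand(1000);    % 1000-by-1000 array spread over the 4 workers
dsave mydata.mat D             % save the distributed array from the client
matlabpool close

% Load it later; the pool size does not have to match the one used at save time.
matlabpool open 2
dload mydata.mat D             % D is redistributed over the current 2-worker pool
matlabpool close
```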
Enhanced Functions for Distributed Arrays
The svd function now supports single-precision and complex data in distributed arrays. Other functions enhanced to support single-precision distributed arrays are chol, lu, mldivide, and eig.

In addition to their original support for 1-D distribution by columns, the enhanced tril and triu functions now support arrays with 1-D distribution by rows or when the distribution dimension is greater than 2, and 2-D block-cyclic ('2dbc') distributed arrays.
Importing Configurations Programmatically
A new function allows you to programmatically import parallel configurations. For more information, see the importParallelConfig reference page.
Enhanced 2-D Block-Cyclic Array Distribution
2-D block-cyclic ('2dbc') array distribution now supports column orientation of the lab grid. The codistributor2dbc function now accepts the value 'col' for its orientation argument, which is reflected in the codistributor object's Orientation property. For information on '2dbc' distribution and using lab grids, see "2-Dimensional Distribution".
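A sketch of requesting the new column orientation, assuming the R2010a signature codistributor2dbc(labGrid, blockSize, orientation); the block size and array dimensions are illustrative:

```matlab
spmd
    % Build a '2dbc' codistributor with a column-oriented lab grid.
    grid = codistributor2dbc.defaultLabGrid;      % default lab grid for this pool
    codist = codistributor2dbc(grid, 64, 'col');  % block size 64, 'col' orientation
    A = codistributed.zeros(1000, 1000, codist);
    dist = getCodistributor(A);
    dist.Orientation                              % reflects the 'col' setting
end
```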
New Remote Startup of mdce Process
New command-line functionality allows you to remotely start up MATLAB Distributed Computing Server processes on cluster machines from the desktop computer. For more information, see the remotemdce reference page.
Obtaining mdce Process Version
An enhancement to the mdce command lets you get the command version by executing

mdce -version

For more information on this command, see the mdce reference page.
Demo Updates
Product demos are available in the Demos node under Parallel Computing Toolbox in the help browser.

Benchmarking A\b
This new demo benchmarks Parallel Computing Toolbox performance with the mldivide function.

BER Performance of Equalizer Types
Demos of BER Performance of Several Equalizer Types have been removed from Parallel Computing Toolbox, because Communications Toolbox now incorporates support for Parallel Computing Toolbox. See "New Demos" in the Communications Toolbox release notes.
taskFinish File for MATLAB Pool
The taskFinish file (taskFinish.m) for a MATLAB pool now executes when the pool closes.
Compatibility Considerations
In previous versions of the software, the taskFinish file executed when a MATLAB pool opened. Beginning with this release, it runs when the pool closes.
Upgrade Parallel Computing Products Together
This version of Parallel Computing Toolbox software is accompanied by a corresponding new version of MATLAB Distributed Computing Server software.
Compatibility Considerations
As with every new release, you must upgrade both Parallel Computing Toolbox and MATLAB Distributed Computing Server products together. These products must be the same version to interact properly with each other.
Jobs created in one version of Parallel Computing Toolbox software will not run in a different version of MATLAB Distributed Computing Server software, and might not be readable in different versions of the toolbox software.
Version 4.2 (R2009b) Parallel Computing Toolbox Software
This table summarizes what is new in Version 4.2 (R2009b):

New Features and Changes: Yes; details below.
Version Compatibility Considerations: Yes; details labeled as Compatibility Considerations below. See also the Compatibility Summary.
Fixed Bugs and Known Problems: Bug Reports, includes fixes.
Related Documentation at Web Site: No.

New features and changes introduced in this version are:

• "New Distributed Arrays"
• "Renamed codistributor Functions"
• "Enhancements to Admin Center"
• "Adding or Updating File Dependencies in an Open MATLAB Pool"
• "Updated globalIndices Function"
• "Support for Job Templates and Description Files with HPC Server 2008"
• "HPC Challenge Benchmarks"
• "pctconfig Enhanced to Support Range of Ports"
• "Random Number Generator on Client Versus Workers"
New Distributed Arrays
A new form of distributed arrays provides direct access from the client to data stored on the workers in a MATLAB pool. Distributed arrays have the same appearance and rules of indexing as regular arrays.
You can distribute an existing array from the client workspace with the command
D = distributed(X)
where X is an array in the client, and D is a distributed array with its data on the workers in the MATLAB pool. Distributing an array is performed outside an spmd statement, but a MATLAB pool must be open.

Codistributed arrays that you create on the workers within spmd statements are accessible on the client as distributed arrays.
The following new functions and methods support distributed arrays.

Function Name | Description
distributed | Distribute existing array from client workspace to workers
distributed.rand, distributed.ones, etc. | Create distributed array consistent with indicated method, constructing on workers only
gather | Transfer data from MATLAB pool workers to client
isdistributed | True for distributed array
C(x,y) | Indexing into distributed array C on client to access data stored as codistributed arrays on workers
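A minimal sketch of the round trip described above, using the R2009b functions (array size illustrative):

```matlab
matlabpool open
X = magic(500);          % an ordinary array in the client workspace
D = distributed(X);      % its data now lives on the pool workers
isdistributed(D)         % true for a distributed array
e = D(3, 4);             % indexing on the client reaches the worker data
Y = gather(D);           % transfer the data back to the client
isequal(X, Y)            % the gathered data matches the original
matlabpool close
```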
Renamed codistributor Functions
As part of the general enhancements for distributed arrays, several changes to the codistributed interface appear in this release.
Compatibility Considerations
The following table summarizes the changes in function names relating to codistributed arrays.
Old Function Name → New Function Name:

• codcolon → codistributed.colon
• codistributed(..., 'convert') → codistributed(...)
• codistributed(...) without the 'convert' option → codistributed.build
• codistributed(L, D), using the distribution scheme of D to define that of L → codistributed.build(L, getCodistributor(D))
• codistributor('1d', ...) → still available, but you can also use codistributor1d
• codistributor('2d', ...) → codistributor('2dbc', ...) or codistributor2dbc
• codistributor(arrayname) → getCodistributor
• defaultLabGrid → codistributor2dbc.defaultLabGrid
• defaultPartition → codistributor1d.defaultPartition
• isa(X, 'codistributed') → still available, but you can also use iscodistributed(X)
• localPart → getLocalPart
• redistribute(D), using the default distribution scheme → redistribute(D, codistributor())
• redistribute(D1, D2), using the distribution scheme of D2 to define that of D1 → redistribute(D1, getCodistributor(D2))

Some object methods are now properties:

Old Method Name → New Property Name:

• blockSize(codistObj) → codistObj.BlockSize
• defaultBlockSize → codistributor2dbc.defaultBlockSize
• distributionDimension(codistObj) → codistObj.Dimension
• distributionPartition(codistObj) → codistObj.Partition
• labGrid(codistObj) → codistObj.LabGrid
Enhancements to Admin Center

Admin Center has several small enhancements, including more conveniently located menu choices, modified dialog boxes, properties dialog boxes for listed items, etc.

Adding or Updating File Dependencies in an Open MATLAB Pool

Enhancements to the matlabpool command let you add or update file dependencies in a running MATLAB pool. The new forms of the command are

matlabpool('addfiledependencies', filedepCell)
matlabpool updatefiledependencies

where filedepCell is a cell array of strings, identical in form to those you use when adding file dependencies to a job or when you open a MATLAB pool. The updatefiledependencies option replicates any file dependency changes to all the labs in the pool.

Updated globalIndices Function

The globalIndices function now requires that you specify the dimension of distribution as its second argument. Because this argument is required, it must precede the optional argument specifying the lab.
Compatibility Considerations
In previous toolbox versions, the globalIndices function accepted the lab argument before the dimension argument, and both were optional. Now the dimension argument is required, and it must precede the optional lab argument.
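A sketch of the new calling order, assuming the R2009b form globalIndices(D, dim, lab) on a codistributed array (the array size and distribution are illustrative):

```matlab
spmd
    codist = codistributor1d(2);             % 1-D distribution over columns
    D = codistributed.zeros(8, 8, codist);
    % The distribution dimension (here 2) is now required and comes first;
    % the lab argument remains optional and comes last.
    myCols   = globalIndices(D, 2);          % columns stored on this lab
    lab1Cols = globalIndices(D, 2, 1);       % columns stored on lab 1
end
```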
Support for Job Templates and Description Files with HPC Server 2008
Using job templates and job description files with Windows HPC Server 2008 lets you specify nodes and other scheduler properties for evaluating your jobs. To support these features, the ccs scheduler object has new properties:

• ClusterVersion: A string set to 'CCS' or 'HPCServer2008'
• JobTemplate: A string set to the name of the job template to use for all jobs
• JobDescriptionFile: A string set to the name of the XML file defining a base state for job creation
Compatibility Considerations
CCS is now just one of multiple versions of HPC Server. While 'ccs' is still acceptable as a type of scheduler for the findResource function, you can also use 'hpcserver' for this purpose. In the Configurations Manager, the new scheduler type is available by selecting File > New > hpcserver (ccs).
HPC Challenge Benchmarks
Several new MATLAB files are available to demonstrate HPC Challenge benchmark performance. You can find the files in the folder matlabroot/toolbox/distcomp/examples/benchmark/hpcchallenge. Each file is self-documented with explanatory comments.
pctconfig Enhanced to Support Range of Ports
The pctconfig function now lets you specify a range of ports for the Parallel Computing Toolbox client session to use. This range also includes ports used for a pmode session.
Compatibility Considerations
You now specify the range of ports with the 'portrange' property; you no longer use the 'port' property. Because any client pmode session uses the 'portrange' setting, you no longer use the 'pmodeport' property.
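A sketch of setting the new property, assuming the era's pctconfig('portrange', [minPort maxPort]) form; the port numbers are illustrative:

```matlab
% Choose a range of ports for the client session to use.
pctconfig('portrange', [14000 14100]);

% Query the current configuration to confirm the setting.
cfg = pctconfig()
```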
Random Number Generator on Client Versus Workers

The random number generator of the MATLAB workers now uses a slightly different seed from previous releases, so that all the MATLAB workers and the client have separate random number streams.
Compatibility Considerations
In past releases, while all the workers running a job had separate random number streams, the client had the same stream as one of the workers. Now the workers all have unique random number streams different from that of the client.
Version 4.1 (R2009a) Parallel Computing Toolbox Software
This table summarizes what is new in Version 4.1 (R2009a):

New Features and Changes: Yes; details below.
Version Compatibility Considerations: Yes; details labeled as Compatibility Considerations below. See also the Compatibility Summary.
Fixed Bugs and Known Problems: Bug Reports, includes fixes.
Related Documentation at Web Site: No.

New features and changes introduced in this version are:

• "Number of Local Workers Increased to Eight"
• "Admin Center Allows Controlling of Cluster Resources"
• "Support for Microsoft Windows HPC Server 2008 (CCS v2)"
• "New Benchmarking Demos"
• "Pre-R2008b Distributed Array Syntax Now Generates Error"
• "LSF Support on Mac OS X 10.5.x"
Number of Local Workers Increased to Eight
You can now run up to eight local workers on your MATLAB client machine. If you do not specify the number of local workers in a command or configuration, the default number of local workers is determined by the value of the local scheduler's ClusterSize property, which by default is equal to the number of computational cores on the client machine.
Compatibility Considerations
In previous versions, the default number of local workers was four, regardless of the number of cores. If you want to run more local workers than cores (for example, four workers with only one or two cores), you must set the value of ClusterSize equal to or greater than the number of workers you need. Then
you can specify the increased number of workers in the appropriate command or configuration, or let your ClusterSize setting control the default number of workers.
Admin Center Allows Controlling of Cluster Resources
When using the MathWorks job manager, the Admin Center GUI now allows you to start, stop, and otherwise control job managers and MATLAB workers on your cluster nodes. For more information about Admin Center, see "Admin Center" in the MATLAB Distributed Computing Server documentation.
Compatibility Considerations
You can no longer start Admin Center from the MATLAB Desktop Parallel pull-down menu. You must start Admin Center from outside MATLAB by executing the following:
• matlabroot/toolbox/distcomp/bin/admincenter (on UNIX operating systems)
• matlabroot\toolbox\distcomp\bin\admincenter.bat (on Microsoft Windows operating systems)
Support for Microsoft Windows HPC Server 2008 (CCS v2)
The parallel computing products now support Microsoft Windows HPC Server 2008 (CCS v2), including service-oriented architecture (SOA) job submissions. There is no change to the programming interface for ccs options, other than the addition of a new ccs scheduler object property, UseSOAJobSubmission. For implications to the installation of MATLAB Distributed Computing Server, see the online installation instructions at http://www.mathworks.com/distconfig.
New Benchmarking Demos
New benchmarking demos for Parallel Computing Toolbox can help you understand and evaluate performance of the parallel computing products. You can access these demos in the Help Browser under the Parallel Computing Toolbox node: expand the nodes for Demos then Benchmarks.
Pre-R2008b Distributed Array Syntax Now Generates Error

In R2008b, distributed array syntax was updated for codistributed arrays. In that release, the old form of the syntax still worked, but generated a warning. Now in R2009a, the old forms of the syntax no longer work and generate an error. For a summary of the syntax updates, see "Changed Function Names for Codistributed Arrays".

LSF Support on Mac OS X 10.5.x

For availability of Platform LSF support on Macintosh OS X 10.5.x, contact Platform Computing Corporation via their Web site at http://www.platform.com/Products/platform-lsf/technical-information. If Platform Computing does not support LSF on Mac OS X 10.5.x, then Parallel Computing Toolbox and MATLAB Distributed Computing Server cannot support this combination.
Version 4.0 (R2008b) Parallel Computing Toolbox Software
This table summarizes what is new in Version 4.0 (R2008b):

New Features and Changes: Yes; details below.
Version Compatibility Considerations: Yes; details labeled as Compatibility Considerations below. See also the Compatibility Summary.
Fixed Bugs and Known Problems: Bug Reports, includes fixes.
Related Documentation at Web Site: No.

New features and changes introduced in this version are:

• "MATLAB Compiler Product Support for Parallel Computing Toolbox Applications"
• "spmd Construct"
• "Composite Objects"
• "Configuration Validation"
• "Rerunning Failed Tasks"
• "Enhanced Job Control with Generic Scheduler Interface"
• "Changed Function Names for Codistributed Arrays"
• "Determining if a MATLAB Pool is Open"
MATLAB Compiler Product Support for Parallel Computing Toolbox Applications
This release offers the ability to convert Parallel Computing Toolbox applications, using MATLAB Compiler, into executables and shared libraries that can access MATLAB Distributed Computing Server. For information on this update to MATLAB Compiler, see "Applications Created with Parallel Computing Toolbox Can Be Compiled".
Limitations
• MATLAB Compiler does not support configurations that use the local scheduler or local workers (i.e., workers that run locally on the desktop machine running the MATLAB client session).
• Compiled Parallel Computing Toolbox applications do not support Simulink software. For a list of other unsupported products, see the Web page http://www.mathworks.com/products/ineligible_programs/.
• When workers are running a task from compiled code, they can execute only compiled code and toolbox code. They cannot execute functions contained in the current directory. Batch and MATLAB pool jobs attempt to change the worker working directory to the client working directory. When noncompiled files in the current directory conflict with compiled versions (for example, files with different extensions), an error is thrown.
spmd Construct
A new single program multiple data (spmd) language construct allows enhanced interleaving of serial and parallel programming, with interlab communication.
The general form of an spmd statement is:

spmd
    <statements>
end

The block of code represented by <statements> executes in parallel on workers in the MATLAB pool. Data on the labs is available for access from the client via Composite objects. For more information, see the spmd reference page and "Using Distributed Arrays, spmd, and Composites".
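A minimal sketch of the construct, using the R2008b pool commands (pool size and values are illustrative):

```matlab
matlabpool open 4
x = 10;                  % serial code runs on the client
spmd
    % This block runs simultaneously on every worker (lab) in the pool.
    y = x + labindex;    % client variables are visible inside the block
end
% y is a Composite on the client; y{1} holds lab 1's value (11 here).
y{1}
matlabpool close
```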
Compatibility Considerations
Because spmd is a new keyword, it will conflict with any user-defined functions or variables of the same name. If you have any code with functions or variables named spmd, you must rename them.
Composite Objects

Composite objects provide direct access from the client (desktop) program to data that is stored on labs in the MATLAB pool. The data of variables assigned inside an spmd block is available via Composites in the client. When a MATLAB pool is open, you can also create Composites directly from the client using the Composite function. See also "Using Distributed Arrays, spmd, and Composites".

Configuration Validation

The Configurations Manager is enhanced with the capability for validating configurations. Open the Configurations Manager on the MATLAB Desktop by clicking Parallel > Manage Configurations. For more information, see "Validating Configurations".

Rerunning Failed Tasks

When using a job manager, if a task does not complete due to certain system failures, it can attempt to rerun up to a specified number of times. New properties of a task object to control reruns and access information about rerun attempts are:

• MaximumNumberOfRetries
• AttemptedNumberOfRetries
• FailedAttemptInformation

Enhanced Job Control with Generic Scheduler Interface

The generic scheduler interface now allows you to cancel and destroy jobs and tasks and to investigate the state of a job. The following new properties of the generic scheduler object facilitate these features:

• GetJobStateFcn
• DestroyJobFcn
• DestroyTaskFcn
• CancelJobFcn
• CancelTaskFcn
New toolbox functions to accommodate this ability are:
• getJobSchedulerData
• setJobSchedulerData
For more information on this new functionality, see “Managing Jobs”.
Changed Function Names for Codistributed Arrays
What was known in previous releases as distributed arrays are henceforth called codistributed arrays. Some functions related to constructing and accessing codistributed arrays have changed names in this release.
Compatibility Considerations
The following table summarizes the changes in function names relating to codistributed arrays. The first three functions behave exactly the same with no change in operation, arguments, etc. The isa function takes the argument 'codistributed' in addition to the array in question.

Old Function Name → New Function Name:

• distributed → codistributed
• distributor → codistributor
• dcolon → codcolon
• isdistributed → isa(X, 'codistributed')
Determining if a MATLAB Pool is Open
The function matlabpool now allows you to discover if a pool of workers is already open. The form of the command is:
matlabpool size
For more information about this option and others, see the matlabpool reference page.
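As a sketch of how this might be used in a script (the functional form matlabpool('size') is assumed to mirror the command form above):

```matlab
% Check for an open pool before trying to open one.
poolSize = matlabpool('size');   % 0 when no pool is open
if poolSize == 0
    matlabpool open
end
```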
Version 3.3 (R2008a) Parallel Computing Toolbox Software
This table summarizes what is new in Version 3.3 (R2008a):

New Features and Changes: Yes; details below.
Version Compatibility Considerations: Yes; details labeled as Compatibility Considerations below. See also the Compatibility Summary.
Fixed Bugs and Known Problems: Bug Reports, includes fixes.
Related Documentation at Web Site: No.

New features and changes introduced in this version are:

• "Renamed Functions for Product Name Changes"
• "New batch Function"
• "Enhanced Job Creation Functions"
• "Increased Data Size Transfers"
• "Changed Function Names for Distributed Arrays"
• "Support for PBS Pro and TORQUE Schedulers"
• "findResource Now Sets Properties According to Configuration"
• "parfor Syntax Has Single Usage"
• "dfeval Now Destroys Its Job When Finished"
Renamed Functions for Product Name Changes
As a result of the product name changes, some function names are changing in this release.
Compatibility Considerations
Two function names are changed to correspond to the new product names:
• dctconfig has been renamed pctconfig.
• dctRunOnAll has been renamed pctRunOnAll.
New batch Function
The new batch function allows you to offload work from the client to one or more workers. The batch submission can run scripts that can include jobs that distribute work to other workers. For more information, see the batch reference page, and "Getting Started" in the Parallel Computing Toolbox User's Guide.

New MATLAB Pool Job

The batch functionality is implemented using the new MATLAB pool job feature. A MATLAB pool job uses one worker to distribute a job to other workers, thereby freeing the client from the burden of tracking the job's progress and manipulating data. For more information, see the createMatlabPoolJob reference page.
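A minimal sketch of offloading a script with batch, using the R2008a job functions (the script name myScript is illustrative):

```matlab
% Submit a script for execution on a worker.
job = batch('myScript');   % returns immediately; the script runs on a worker
wait(job);                 % block until the job finishes
load(job);                 % load the script's workspace variables into the client
destroy(job);              % remove the job and its data when done
```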
Enhanced Job Creation Functions
The createJob and createParallelJob functions have been enhanced to run without requiring a scheduler object as an argument. This is also true for the new createMatlabPoolJob function. When a scheduler is not specified, the function uses the scheduler identified in the applicable parallel configuration. For details, see the reference page for each function.
Increased Data Size Transfers
The default size limitation on data transfers between clients and workers has been significantly increased. In previous releases the default limitation imposed by the JVM memory allocation was approximately 50 MB. The new higher limits are approximately 600 MB on 32-bit systems and 2 GB on 64-bit systems. See “Object Data Size Limitations”.
Changed Function Names for Distributed Arrays
Several functions related to distributed arrays have changed names in this release.
Compatibility Considerations
The following table summarizes the changes in function names relating to distributed arrays.
Old Function Name → New Function Name:

• darray → distributed, distributor
• distribute → distributed
• dcolonpartition → defaultPartition
• distribdim → distributionDimension
• isdarray → isdistributed
• labgrid → labGrid
• local → localPart
• partition → distributionPartition
• localspan → globalIndices
Support for PBS Pro and TORQUE Schedulers
Parallel Computing Toolbox software now fully supports PBS Pro® and TORQUE schedulers. These schedulers are integrated into parallel configurations and scheduler-related functions like findResource.

Note: If you do not have a shared file system between client and cluster machines, or if you cannot submit jobs directly to the scheduler from the client machine, any use of third-party schedulers for parallel jobs (including pmode, matlabpool, and parfor) requires that you use the generic scheduler interface.
findResource Now Sets Properties According to Configuration
The findResource function now sets the properties on the object it creates according to the configuration identified in the function call.
Compatibility Considerations
In past releases, findResource could use a configuration to identify a scheduler, but did not apply the configuration settings to the scheduler object properties. If your code uses separate statements to find an object and then set its properties, this still works, but it is no longer necessary.
parfor Syntax Has Single Usage
The parfor statement is now recognized only for parallel for-loops, not for loops over a distributed range in parallel jobs.
Compatibility Considerations
In R2007b, the pre-existing form of parfor was replaced by for (drange), but both forms of syntax were recognized in that release. Now parfor has only one context, so parfor statements used in parallel jobs in code for versions prior to R2007a must be modified to use for (drange).

Limitations
P-Code Scripts. You can call P-code script files from within a parfor-loop, but a P-code script cannot contain a parfor-loop.

sim Inside parfor-Loops. Running simulations in a parfor-loop with the sim command at the top level of the loop is not allowed in this release. A sim command visible in a parfor-loop generates an error, although you can call sim inside a function that is called from the loop. Be sure that the various labs running simulations do not have the same working directory, as interference can occur with the simulation data.
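As an illustrative sketch of this workaround (the model name and helper function here are hypothetical, and sim's outputs depend on the model), keep sim out of the loop's top level by wrapping it in a function:

    % Hypothetical helper, saved in its own file runOneSim.m; because sim
    % appears inside this function, it is not visible at the top level
    % of the parfor-loop.
    function t = runOneSim(modelName)
    t = sim(modelName);   % returns the simulation time vector

    % In the loop, call the helper instead of calling sim directly.
    % 'myModel' is a placeholder model name.
    parfor k = 1:4
        t{k} = runOneSim('myModel');
    end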
dfeval Now Destroys Its Job When Finished
When finished performing its distributed evaluation, the dfeval function now destroys the job it created.

Compatibility Considerations
If you have any scripts that rely on a job and its data still existing after the completion of dfeval, or that destroy the job after dfeval, these scripts will no longer work.
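For example, a dfeval call of this general shape (a sketch; each row of the input cell array supplies the arguments for one task) now cleans up its job automatically, so any results must be captured from the return value:

    % Three tasks, one per row; the job is destroyed when dfeval returns.
    results = dfeval(@sum, {[1 2]; [3 4]; [5 6]});
    % results is a cell array of task outputs; capture what you need here,
    % because the underlying job and its data no longer exist.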
Version 3.2 (R2007b) Distributed Computing Toolbox Software
This table summarizes what is new in Version 3.2 (R2007b):

New Features and Changes: Yes. Details below.
Version Compatibility Considerations: Yes. Details labeled as Compatibility Considerations, below. See also Summary.
Fixed Bugs and Known Problems: Bug Reports. Includes fixes.
Related Documentation at Web Site: No

New features and changes introduced in this version are:
“New Parallel for-Loops (parfor-Loops)” on page 24
“Configurations Manager and Dialogs” on page 25
“Default Configuration” on page 26
“Parallel Profiler” on page 26
“MDCE Script for Red Hat Removed” on page 26
New Parallel for-Loops (parfor-Loops)
New parallel for-loop (parfor -loop) functionality automatically executes a loop body in parallel on dynamically allocated cluster resources, allowing interleaved serial and parallel code. For details of new parfor functionality, see “Parallel for-Loops (parfor)” in the Distributed Computing Toolbox™ documentation.
Limitations

P-Code Scripts. You can call P-code script files from within a parfor-loop, but a P-code script cannot contain a parfor-loop.
Compatibility Considerations
In past releases, parfor was a different function. The new parfor uses parentheses in defining its range to distinguish it from the old parfor.

New parfor:

parfor (ii = 1:N); <body of code>; end;

Old parfor:

parfor ii = 1:N; <body of code>; end;

For this release, the old form of parfor without parentheses is still supported, although it generates a warning. You can read more about the new form of this existing functionality in “Using a for-Loop Over a Distributed Range (for-drange)”. You should update your existing parfor code to use the new form of for-loops over a distributed range (for-drange), thus:

for ii = drange(1:N); <body of code>; end;
Configurations Manager and Dialogs
This release introduces a new graphical user interface for creating and modifying user configurations, and for designating the default configuration used by some toolbox functions. For details about the configurations manager, see “Programming with User Configurations” in the Distributed Computing Toolbox documentation.
Compatibility Considerations
This new feature has no impact on how configurations are used in a program, only on how configurations are created and shared among users. In previous versions of the product, you modified your configurations by editing the file matlabroot/toolbox/distcomp/user/distcompUserConfig.m. Now the configuration data is stored as part of your MATLAB software preferences.

The new configurations manager cannot directly import old-style configurations that were defined in the distcompUserConfig.m file. However, a utility called importDistcompUserConfig, available on the MATLAB Central Web site, allows you to convert and import your existing configurations into the new configurations manager.
Visit http://www.mathworks.com/matlabcentral and search for importDistcompUserConfig.
Default Configuration
This version of the toolbox enables you to select a user configuration to use as the default. Thus, commands such as pmode and matlabpool will use the default configuration without your having to specify it each time you run the command. You can set the default configuration using the configurations graphical interface, or programmatically with the defaultParallelConfig function.

Parallel Profiler
A new parallel profiler graphical user interface generates reports on lab computation and communication times during execution of parallel jobs. For details about this new feature, see “Using the Parallel Profiler”.

MDCE Script for Red Hat Removed
The MDCE script rh_mdce, specific to Red Hat Linux®, has been removed from matlabroot/toolbox/distcomp/util/bin.
Compatibility Considerations
If you make use of this script, you must replace it with its more generic equivalent, matlabroot/toolbox/distcomp/bin/mdce.
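Returning to the default configuration feature described above, a minimal sketch of the programmatic workflow (the configuration name 'myconfig' is a placeholder):

    % Make 'myconfig' the default configuration.
    defaultParallelConfig('myconfig');
    % Subsequent commands such as matlabpool and pmode now use it
    % implicitly, with no configuration argument required:
    matlabpool open
    matlabpool close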
Version 3.1 (R2007a) Distributed Computing Toolbox Software
This table summarizes what is new in Version 3.1 (R2007a):

New Features and Changes: Yes. Details below.
Version Compatibility Considerations: Yes. Details labeled as Compatibility Considerations, below. See also Summary.
Fixed Bugs and Known Problems: Bug Reports. Includes fixes.
Related Documentation at Web Site: No

New features and changes introduced in this version are:
“Local Scheduler and Workers” on page 27
“New pmode Interface” on page 28
“New Default Scheduler for pmode” on page 28
“Vectorized Task Creation” on page 28
“Additional Submit and Decode Scripts” on page 29
“Jobs Property of Job Manager Sorts Jobs by ID” on page 29
“New Object Display Format” on page 30
“Enhanced MATLAB Functions” on page 30
“darray Function Replaces distributor Function” on page 30
“rand Seeding Unique for Each Task or Lab” on page 31
“Single-Threaded Computations on Workers” on page 31
Local Scheduler and Workers
A local scheduler allows you to schedule jobs and run up to four workers or labs on a single MATLAB client machine without requiring engine licenses. These workers/labs can run distributed jobs or parallel jobs, including pmode sessions, for all products for which the MATLAB client is licensed. This local scheduler and its workers do not require a job manager or third-party scheduler.
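A minimal sketch of a complete distributed-job life cycle through the local scheduler:

    % Find the local scheduler, run one task, and collect the result.
    sched = findResource('scheduler', 'type', 'local');
    j = createJob(sched);
    createTask(j, @rand, 1, {3, 3});   % one task: rand(3,3)
    submit(j);
    waitForState(j, 'finished');
    out = getAllOutputArguments(j);
    destroy(j);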
New pmode Interface
The interactive parallel mode (pmode) has a new interface. The pmode command input and displays of the lab outputs are provided in a user interface that you can separate from the MATLAB client Command Window.
Compatibility Considerations
In previous versions of Distributed Computing Toolbox, the pmode interface used the MATLAB Command Window, with the pmode input using a different prompt. The output from the labs was intermingled with the MATLAB client output.
New Default Scheduler for pmode
If you start pmode without specifying a configuration,

pmode start

pmode automatically starts a parallel job using the local scheduler with labs running on the client machine. For more information about running pmode, see “Interactive Parallel Computation with pmode” in the Distributed Computing Toolbox documentation.
Compatibility Considerations
In the previous version of the toolbox, when pmode was started without specifying a configuration, it searched the network for the first available job manager to use as a scheduler.
Vectorized Task Creation
The createTask function can now create a vector of tasks in a single call when you provide a cell array of cell arrays for input arguments. For full details, see the createTask reference page.
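As a sketch of the vectorized form (assuming a job object j already exists), one call creates several tasks, one per inner cell array of input arguments:

    % Creates three tasks: sum([1 2]), sum([3 4]), and sum([5 6]).
    t = createTask(j, @sum, 1, {{[1 2]}, {[3 4]}, {[5 6]}});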
Compatibility Considerations
In previous versions of the distributed computing products, if your task function had an input argument that was a cell array of cell arrays, your code will need to be modified to run the same way in this release.

For example, your old code may have been written as follows so that the function myfun gets four cell array input arguments:

createTask(j, @myfun, 1, {{C1} {C2} {C3} {C4}})

In this new version, the same code will produce four tasks. To get the old functionality, you must wrap the four cell arrays in another cell array, so that createTask knows to create only one task:

createTask(j, @myfun, 1, { {{C1} {C2} {C3} {C4}} })
Additional Submit and Decode Scripts
There are several submit and decode functions provided with the toolbox for your use with the generic scheduler interface. These files are in the directory
matlabroot/toolbox/distcomp/examples/integration
This version of the toolbox includes new subdirectories for Platform LSF and PBS, to support network configurations in which the client and worker computers do not share a file system. For more information, see “Supplied Submit and Decode Functions” in the Distributed Computing Toolbox documentation.
Jobs Property of Job Manager Sorts Jobs by ID
The Jobs property of a job manager object now contains the jobs in the order in which they were created, as indicated by the ID property of each job. Similarly, the findJob function returns jobs sequenced by their ID, unless otherwise specified. This change makes job manager behavior consistent with the behavior of third-party schedulers.

Compatibility Considerations
In previous versions of the distributed computing products, when using a job manager, jobs were arranged in the Jobs property or by findJob according to the status of the job.
New Object Display Format
When you create distributed computing objects (scheduler, job, or task) without a semicolon at the end of the command, the object information is displayed in a new format. This new format is also shown when you use the display function to view an object or simply type the object name at the command line.

Compatibility Considerations
With this enhancement, the output format shown when creating an object has changed.

Enhanced MATLAB Functions
Several MATLAB functions have been enhanced to work on distributed arrays:

cat
find
horzcat
subsindex
vertcat

For a complete list of MATLAB functions that are enhanced to work on distributed arrays, see “Using MATLAB Functions on Codistributed Arrays” in the Distributed Computing Toolbox documentation.

darray Function Replaces distributor Function
The function darray now defines how an array is distributed among the labs in a parallel job.

Compatibility Considerations
In the previous version of the toolbox, the distributor function was used to define how an array was distributed. In many cases, you can replace a call to distributor with a call to darray. For example, if you used distributor without arguments as an input to an array constructor,

rand(m, n, distributor());

you can update the code to read,

rand(m, n, darray());
rand Seeding Unique for Each Task or Lab
The random generator seed is now initialized based on the task ID for distributed jobs, or the labindex for parallel jobs (including pmode). This ensures that the set of random numbers generated for each task or lab within a job is unique, even when you have more than 82 tasks or labs.

Compatibility Considerations
In the previous version of the distributed computing products, the rand function could by default generate the same set of numbers for some tasks or labs when these exceeded 82 for a job.
Single-Threaded Computations on Workers
Despite the ability in MATLAB software to perform multithreaded computations on multiple-CPU machines, the workers and labs running distributed and parallel jobs perform only single-threaded computations, so that multiprocessor cluster machines can better accommodate multiple workers or labs.
Version 3.0 (R2006b) Distributed Computing Toolbox Software
This table summarizes what is new in Version 3.0 (R2006b):

New Features and Changes: Yes. Details below.
Version Compatibility Considerations: Yes. Details labeled as Compatibility Considerations, below. See also Summary.
Fixed Bugs and Known Problems: Bug Reports. Includes fixes.
Related Documentation at Web Site: No

New features and changes introduced in this version are:
“Support for Windows Compute Cluster Server (CCS)” on page 32
“Windows 64 Support” on page 33
“Parallel Job Enhancements” on page 33
“Distributed Arrays” on page 33
“Interactive Parallel Mode (pmode)” on page 34
“Moved MDCE Control Scripts” on page 34
“rand Seeding Unique for Each Task or Lab” on page 35
“Task ID Property Now Same as labindex” on page 36
Support for Windows Compute Cluster Server (CCS)
Distributed Computing Toolbox software and MATLAB® Distributed Computing Engine™ software now let you program jobs and run them on a Windows Compute Cluster Server. For information about programming in the toolbox to use Windows Compute Cluster Server (CCS) as your scheduler, see the findResource reference page, and see also “Find a Windows HPC Server Scheduler”.
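As a sketch of locating a CCS scheduler (the host name is a placeholder, and the property name here is an assumption about the CCS scheduler object of this release):

    % Find a CCS scheduler and point it at the cluster head node.
    sched = findResource('scheduler', 'type', 'ccs');
    set(sched, 'SchedulerHostname', 'myheadnode');
    j = createJob(sched);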
Windows 64 Support
The distributed computing products now support Windows 64 (Win64) for both MATLAB client and MATLAB worker machines.

Parallel Job Enhancements

Parallel Jobs Support Any Scheduler
Support for parallel jobs now extends to any type of scheduler. In previous releases, only the MathWorks job manager and mpiexec scheduler object supported parallel jobs. You can now run parallel jobs on clusters scheduled by a job manager, Windows Compute Cluster Server (CCS), Platform LSF, mpiexec, or using the generic scheduler interface. For programming information, see “Programming Parallel Jobs”.
New labSendReceive Function
The labSendReceive function is introduced in this release. This function performs the same things as both labSend and labReceive, but greatly reduces the risk of deadlock, because the send and receive happen simultaneously rather than by separate statements. For more information, see the labSendReceive reference page.
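For example, a deadlock-free circular shift of data among the labs might be sketched as:

    % Each lab sends its data one lab to the "right" and receives from
    % the lab on its "left" in a single, simultaneous operation.
    myData  = labindex * ones(1, 3);           % example data on each lab
    labTo   = mod(labindex, numlabs) + 1;      % right neighbor
    labFrom = mod(labindex - 2, numlabs) + 1;  % left neighbor
    received = labSendReceive(labTo, labFrom, myData);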
Improved Error Detection
This release offers improved error detection for miscommunication between labs running parallel jobs. Most notable among the improvements is error detection of mismatched labSend and labReceive statements.
Distributed Arrays
Distributed arrays are partitioned into segments, with each segment residing in the workspace of a different lab, so that each lab has its own array segment to work with. Reducing the size of the array that each lab has to store and process means a more efficient use of memory and faster processing, especially for large data sets. For more information, see “Working with Distributed Arrays”.
There are many new and enhanced MATLAB functions to work with distributed arrays in parallel jobs. For a listing of these functions and their reference pages, see “Job Management”.
parfor: Parallel for-Loops
Parallel for-loops let you run a for-loop across your labs simultaneously. For more information, see “Using a for-Loop Over a Distributed Range (for-drange)” or the parfor reference page.

Interactive Parallel Mode (pmode)
The interactive parallel mode (pmode) lets you work interactively with a parallel job running simultaneously on a number of labs. Commands you type at the pmode command line are executed on all labs at the same time. Each lab executes the commands in its own workspace on its own local variables or segments of distributed arrays. For more information, see “Getting Started with pmode”.
Moved MDCE Control Scripts
To provide greater consistency across all platforms, the MDCE control scripts for Windows have moved and those for UNIX and Macintosh have new names.
Compatibility Considerations
Windows Utilities Moved. In previous versions of the distributed
computing products, the MDCE utilities for Windows computers were located in
matlabroot\toolbox\distcomp\bin\win32
The utilities are now located in
matlabroot\toolbox\distcomp\bin
The files that have moved are:

nodestatus
mdce
startjobmanager
stopjobmanager
startworker
stopworker
mdce_def.bat
UNIX and Macintosh Utilities Renamed. In previous versions of the distributed computing products, the MDCE utilities for UNIX and Macintosh computers were called by:

nodestatus.sh
startjobmanager.sh
stopjobmanager.sh
startworker.sh
stopworker.sh
You can now call these with the following commands:
nodestatus
startjobmanager
stopjobmanager
startworker
stopworker
Note For UNIX and Macintosh, mdce and mdce_def.sh have not been moved or renamed.
rand Seeding Unique for Each Task or Lab
The random generator seed is now initialized based on the task ID for distributed jobs, or the labindex for parallel jobs (including pmode). This ensures that the random numbers generated for each task or lab are unique within a job.

Compatibility Considerations
In previous versions of the distributed computing products, the rand function would by default generate the same set of numbers on each worker.
Task ID Property Now Same as labindex
Although you create only one task for a parallel job, the system copies this task for each worker that runs the job. For example, if a parallel job runs on four workers (labs), the Tasks property of the job contains four task objects. The first task in the job's Tasks property corresponds to the task run by the lab whose labindex is 1, and so on, so that the ID property for the task object and labindex for the lab that ran that task have the same value. Therefore, the sequence of results returned by the getAllOutputArguments function corresponds to the value of labindex and to the order of tasks in the job's Tasks property.

Compatibility Considerations
In past releases, there was no correlation between labindex and the task ID property.
Compatibility Summary for Parallel Computing Toolbox Software
This table summarizes new features and changes that might cause incompatibilities when you upgrade from an earlier version, or when you use files on multiple versions. Details are provided with the description of the new feature or change.
Version (Release): Compatibility Impact

Latest Version V4.3 (R2010a): See the Compatibility Considerations subheading for each of these new features or changes:
“taskFinish File for MATLAB Pool” on page 5
“Upgrade Parallel Computing Products Together” on page 5

V4.2 (R2009b): See the Compatibility Considerations subheading for each of these new features or changes:
“Renamed codistributor Functions” on page 8
“Updated globalIndices Function” on page 10
“Support for Job Templates and Description Files with HPC Server 2008” on page 11
“pctconfig Enhanced to Support Range of Ports” on page 11
“Random Number Generator on Client Versus Workers” on page 12

V4.1 (R2009a): See the Compatibility Considerations subheading for each of these new features or changes:
“Number of Local Workers Increased to Eight” on page 13
“Admin Center Allows Controlling of Cluster Resources” on page 14

V4.0 (R2008b): See the Compatibility Considerations subheading for each of these new features or changes:
“spmd Construct” on page 17
“Changed Function Names for Codistributed Arrays” on page 19

V3.3 (R2008a): See the Compatibility Considerations subheading for each of these new features or changes:
“Renamed Functions for Product Name Changes” on page 20
“Changed Function Names for Distributed Arrays” on page 21
“findResource Now Sets Properties According to Configuration” on page 22
“parfor Syntax Has Single Usage” on page 23
“dfeval Now Destroys Its Job When Finished” on page 23

V3.2 (R2007b): See the Compatibility Considerations subheading for each of these new features or changes:
“New Parallel for-Loops (parfor-Loops)” on page 24
“Configurations Manager and Dialogs” on page 25
“MDCE Script for Red Hat Removed” on page 26

V3.1 (R2007a): See the Compatibility Considerations subheading for each of these new features or changes:
“New pmode Interface” on page 28
“New Default Scheduler for pmode” on page 28
“Vectorized Task Creation” on page 28
“Jobs Property of Job Manager Sorts Jobs by ID” on page 29
“New Object Display Format” on page 30
“darray Function Replaces distributor Function” on page 30
“rand Seeding Unique for Each Task or Lab” on page 31

V3.0 (R2006b): See the Compatibility Considerations subheading for each of these new features or changes:
“Moved MDCE Control Scripts” on page 34
“rand Seeding Unique for Each Task or Lab” on page 35