PZGEBRD(l)                                                          PZGEBRD(l)

NAME
       PZGEBRD - reduce a complex general M-by-N distributed matrix sub( A ) =
       A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by a unitary
       transformation

SYNOPSIS
       SUBROUTINE PZGEBRD( M,  N,  A,  IA,  JA, DESCA, D, E, TAUQ, TAUP, WORK,
			   LWORK, INFO )

	   INTEGER	   IA, INFO, JA, LWORK, M, N

	   INTEGER	   DESCA( * )

	   DOUBLE PRECISION  D( * ), E( * )

	   COMPLEX*16	   A( * ), TAUP( * ), TAUQ( * ), WORK( * )

PURPOSE
       PZGEBRD reduces a complex general M-by-N distributed matrix sub( A ) =
       A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by a unitary
       transformation: Q' * sub( A ) * P = B.  If M >= N, B is upper
       bidiagonal; if M < N, B is lower bidiagonal.

       Notes
       =====

       Each  global data object is described by an associated description vec‐
       tor.  This vector stores the information required to establish the map‐
       ping between an object element and its corresponding process and memory
       location.

       Let A be a generic term for any 2D block cyclically distributed array.
       Such a global array has an associated description vector DESCA.  In the
       following comments, the character _ should be read as "of the global
       array".

       NOTATION         STORED IN       EXPLANATION
       ---------------- --------------- --------------------------------------
       DTYPE_A (global) DESCA( DTYPE_ ) The descriptor type.  In this case,
                                        DTYPE_A = 1.
       CTXT_A  (global) DESCA( CTXT_ )  The BLACS context handle, indicating
                                        the BLACS process grid A is distrib-
                                        uted over.  The context itself is
                                        global, but the handle (the integer
                                        value) may vary.
       M_A     (global) DESCA( M_ )     The number of rows in the global
                                        array A.
       N_A     (global) DESCA( N_ )     The number of columns in the global
                                        array A.
       MB_A    (global) DESCA( MB_ )    The blocking factor used to distribute
                                        the rows of the array.
       NB_A    (global) DESCA( NB_ )    The blocking factor used to distribute
                                        the columns of the array.
       RSRC_A  (global) DESCA( RSRC_ )  The process row over which the first
                                        row of the array A is distributed.
       CSRC_A  (global) DESCA( CSRC_ )  The process column over which the
                                        first column of the array A is
                                        distributed.
       LLD_A   (local)  DESCA( LLD_ )   The leading dimension of the local
                                        array.  LLD_A >= MAX(1,LOCr(M_A)).
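
       As an illustration only, such a descriptor is commonly filled in with
       the ScaLAPACK tool routine DESCINIT.  The following sketch assumes
       that the grid context ICTXT, the global dimensions M and N, the
       blocking factors MB and NB, and the local leading dimension LLDA (all
       names chosen for this example) have been set up by the caller, and
       distributes the first row and column of A from process row 0 and
       process column 0:

          INTEGER             DESCA( 9 ), ICTXT, LLDA, M, N, MB, NB, INFO
          EXTERNAL            DESCINIT
          ! M, N, MB, NB, ICTXT and LLDA are assumed to be set at this point.
          CALL DESCINIT( DESCA, M, N, MB, NB, 0, 0, ICTXT, LLDA, INFO )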

       Let  K  be  the	number of rows or columns of a distributed matrix, and
       assume that its process grid has dimension p x q.
       LOCr( K ) denotes the number of elements of  K  that  a	process	 would
       receive	if K were distributed over the p processes of its process col‐
       umn.
       Similarly, LOCc( K ) denotes the number of elements of K that a process
       would receive if K were distributed over the q processes of its process
       row.
       The values of LOCr() and LOCc() may be determined via a call to the
       ScaLAPACK tool function, NUMROC:
	       LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ),
	       LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ).
       An upper bound for these quantities may be computed by:
	       LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
	       LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A
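
       As an illustration only, LOCr and LOCc for the global array A may be
       obtained on each process as in the following sketch.  It assumes that
       the BLACS grid and the descriptor DESCA have already been initialized,
       addresses the descriptor entries by their conventional positions
       (CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6, RSRC_ = 7, CSRC_ = 8),
       and uses MP and NQ as example variable names:

          INTEGER             NUMROC
          EXTERNAL            NUMROC, BLACS_GRIDINFO
          INTEGER             NPROW, NPCOL, MYROW, MYCOL, MP, NQ
          ! Query the process grid bound to the context stored in DESCA.
          CALL BLACS_GRIDINFO( DESCA( 2 ), NPROW, NPCOL, MYROW, MYCOL )
          ! LOCr( M_A ) and LOCc( N_A ) for the calling process.
          MP = NUMROC( DESCA( 3 ), DESCA( 5 ), MYROW, DESCA( 7 ), NPROW )
          NQ = NUMROC( DESCA( 4 ), DESCA( 6 ), MYCOL, DESCA( 8 ), NPCOL )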

ARGUMENTS
       M       (global input) INTEGER
	       The number of rows to be operated on, i.e. the number  of  rows
	       of the distributed submatrix sub( A ). M >= 0.

       N       (global input) INTEGER
	       The  number  of	columns	 to be operated on, i.e. the number of
	       columns of the distributed submatrix sub( A ). N >= 0.

       A       (local input/local output) COMPLEX*16 pointer into the
	       local memory to an array of dimension (LLD_A,LOCc(JA+N-1)).
	       On entry, this array contains the local pieces of the general
	       distributed matrix sub( A ). On exit, if M >= N, the diagonal
	       and the first superdiagonal of sub( A ) are overwritten with
	       the upper bidiagonal matrix B; the elements below the diagonal,
	       with the array TAUQ, represent the unitary matrix Q as a
	       product of elementary reflectors, and the elements above the
	       first superdiagonal, with the array TAUP, represent the unitary
	       matrix P as a product of elementary reflectors. If M < N, the
	       diagonal and the first subdiagonal are overwritten with the
	       lower bidiagonal matrix B; the elements below the first
	       subdiagonal, with the array TAUQ, represent the unitary matrix
	       Q as a product of elementary reflectors, and the elements above
	       the diagonal, with the array TAUP, represent the unitary matrix
	       P as a product of elementary reflectors. See Further Details.

       IA      (global input) INTEGER
	       The row index in the global array A indicating the first row
	       of sub( A ).

       JA      (global input) INTEGER
	       The column index in the global array  A	indicating  the	 first
	       column of sub( A ).

       DESCA   (global and local input) INTEGER array of dimension DLEN_.
	       The array descriptor for the distributed matrix A.

       D       (local output) DOUBLE PRECISION array, dimension
	       LOCc(JA+MIN(M,N)-1)  if	M >= N; LOCr(IA+MIN(M,N)-1) otherwise.
	       The distributed diagonal elements of the bidiagonal  matrix  B:
	       D(i) = A(i,i). D is tied to the distributed matrix A.

       E       (local output) DOUBLE PRECISION array, dimension
	       LOCr(IA+MIN(M,N)-1)  if	M >= N; LOCc(JA+MIN(M,N)-2) otherwise.
	       The distributed off-diagonal elements of	 the  bidiagonal  dis‐
	       tributed	 matrix	 B:  if	 m  >=	n,  E(i)  =  A(i,i+1)  for i =
	       1,2,...,n-1; if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1.   E
	       is tied to the distributed matrix A.

       TAUQ    (local output) COMPLEX*16 array, dimension
	       LOCc(JA+MIN(M,N)-1). The scalar factors of the elementary
	       reflectors which represent the unitary matrix Q. TAUQ is tied
	       to the distributed matrix A. See Further Details.

       TAUP    (local output) COMPLEX*16 array, dimension
	       LOCr(IA+MIN(M,N)-1). The scalar factors of the elementary
	       reflectors which represent the unitary matrix P. TAUP is tied
	       to the distributed matrix A. See Further Details.

       WORK    (local workspace/local output) COMPLEX*16 array,
	       dimension (LWORK)
	       On exit, WORK( 1 ) returns the minimal and optimal LWORK.

       LWORK   (local or global input) INTEGER
	       The dimension of the array WORK.	 LWORK is local input and must
	       be at least LWORK >= NB*( MpA0 + NqA0 + 1 ) + NqA0

	       where NB = MB_A = NB_A,
	       IROFFA = MOD( IA-1, NB ), ICOFFA = MOD( JA-1, NB ),
	       IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ),
	       IACOL = INDXG2P( JA, NB, MYCOL, CSRC_A, NPCOL ),
	       MpA0 = NUMROC( M+IROFFA, NB, MYROW, IAROW, NPROW ),
	       NqA0 = NUMROC( N+ICOFFA, NB, MYCOL, IACOL, NPCOL ).

	       INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW,	MYCOL,
	       NPROW  and  NPCOL  can  be determined by calling the subroutine
	       BLACS_GRIDINFO.

	       If LWORK = -1, then LWORK is global input and a workspace query
	       is assumed; the routine only calculates the minimum and optimal
	       size for all work arrays. Each of these values is  returned  in
	       the  first  entry of the corresponding work array, and no error
	       message is issued by PXERBLA.

       INFO    (global output) INTEGER
	       = 0:  successful exit
	       < 0:  If the i-th argument is an array and the j-th entry had
	       an illegal value, then INFO = -(i*100+j); if the i-th argument
	       is a scalar and had an illegal value, then INFO = -i.
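
       As an illustration only, a typical calling sequence first performs the
       workspace query described under LWORK above and then the actual
       reduction.  The following sketch assumes that the distributed matrix A,
       its descriptor DESCA and the output arrays D, E, TAUQ and TAUP have
       already been set up; WQUERY is an example name for the one-element
       query buffer:

          COMPLEX*16                  WQUERY( 1 )
          COMPLEX*16, ALLOCATABLE ::  WORK( : )
          INTEGER                     LWORK, INFO
          ! Workspace query: LWORK = -1 only returns the needed size.
          CALL PZGEBRD( M, N, A, IA, JA, DESCA, D, E, TAUQ, TAUP, &
                        WQUERY, -1, INFO )
          LWORK = INT( DBLE( WQUERY( 1 ) ) )
          ALLOCATE( WORK( LWORK ) )
          ! Actual reduction of sub( A ) to bidiagonal form.
          CALL PZGEBRD( M, N, A, IA, JA, DESCA, D, E, TAUQ, TAUP, &
                        WORK, LWORK, INFO )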

FURTHER DETAILS
       The matrices Q and P are represented as products of elementary  reflec‐
       tors:

       If m >= n,

	  Q = H(1) H(2) . . . H(n)  and	 P = G(1) G(2) . . . G(n-1)

       Each H(i) and G(i) has the form:

	  H(i) = I - tauq * v * v'  and G(i) = I - taup * u * u'

       where  tauq  and taup are complex scalars, and v and u are complex vec‐
       tors;
       v(1:i-1)	 =  0,	v(i)  =	 1,  and  v(i+1:m)  is	stored	on   exit   in
       A(ia+i:ia+m-1,ja+i-1);
       u(1:i)	=   0,	u(i+1)	=  1,  and  u(i+2:n)  is  stored  on  exit  in
       A(ia+i-1,ja+i+1:ja+n-1);
       tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1).

       If m < n,

	  Q = H(1) H(2) . . . H(m-1)  and  P = G(1) G(2) . . . G(m)

       Each H(i) and G(i) has the form:

	  H(i) = I - tauq * v * v'  and G(i) = I - taup * u * u'

       where tauq and taup are complex scalars, and v and u are	 complex  vec‐
       tors;
       v(1:i)	=   0,	v(i+1)	=  1,  and  v(i+2:m)  is  stored  on  exit  in
       A(ia+i+1:ia+m-1,ja+i-1);
       u(1:i-1)	 =  0,	u(i)  =	 1,  and  u(i+1:n)  is	stored	on   exit   in
       A(ia+i-1,ja+i:ja+n-1);
       tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1).

       The contents of sub( A ) on exit are illustrated by the following exam‐
       ples:

       m = 6 and n = 5 (m > n):		 m = 5 and n = 6 (m < n):

	 (  d	e   u1	u1  u1 )	   (  d	  u1  u1  u1  u1  u1 )
	 (  v1	d   e	u2  u2 )	   (  e	  d   u2  u2  u2  u2 )
	 (  v1	v2  d	e   u3 )	   (  v1  e   d	  u3  u3  u3 )
	 (  v1	v2  v3	d   e  )	   (  v1  v2  e	  d   u4  u4 )
	 (  v1	v2  v3	v4  d  )	   (  v1  v2  v3  e   d	  u5 )
	 (  v1	v2  v3	v4  v5 )

       where d and e denote  diagonal  and  off-diagonal  elements  of	B,  vi
       denotes	an  element  of the vector defining H(i), and ui an element of
       the vector defining G(i).

       Alignment requirements
       ======================

       The distributed submatrix sub( A ) must satisfy the following
       alignment property, i.e. the following expression must be true:
       ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA )
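
       As an illustration only, this condition may be checked as in the
       following sketch, where MB_A and NB_A are read from descriptor
       positions 5 and 6 and IROFFA and ICOFFA are defined as under LWORK
       above:

          INTEGER             IROFFA, ICOFFA
          IROFFA = MOD( IA-1, DESCA( 5 ) )
          ICOFFA = MOD( JA-1, DESCA( 6 ) )
          IF( DESCA( 5 ).NE.DESCA( 6 ) .OR. IROFFA.NE.ICOFFA ) THEN
             ! sub( A ) does not meet the alignment requirement of PZGEBRD.
          END IF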

ScaLAPACK version 1.7		13 August 2001			    PZGEBRD(l)