\section{Structural Analysis and Shape Descriptors}
\cvCPyFunc{ApproxChains}
Approximates Freeman chain(s) with a polygonal curve.
CvSeq* cvApproxChains( \par CvSeq* src\_seq,\par CvMemStorage* storage,\par int method=CV\_CHAIN\_APPROX\_SIMPLE,\par double parameter=0,\par int minimal\_perimeter=0,\par int recursive=0 );
}\cvdefPy{ApproxChains(src\_seq,storage,method=CV\_CHAIN\_APPROX\_SIMPLE,parameter=0,minimal\_perimeter=0,recursive=0)-> chains}
\cvarg{src\_seq}{Pointer to the chain that can refer to other chains}
\cvarg{storage}{Storage location for the resulting polylines}
\cvarg{method}{Approximation method (see the description of the function \cvCPyCross{FindContours})}
\cvarg{parameter}{Method parameter (not used now)}
\cvarg{minimal\_perimeter}{Approximates only those contours whose perimeters are not less than \texttt{minimal\_perimeter}. Other chains are removed from the resulting structure}
\cvarg{recursive}{If not 0, the function approximates all chains that can be accessed from \texttt{src\_seq} via the \texttt{h\_next} or \texttt{v\_next} links. If 0, only the single chain is approximated}

This is a stand-alone approximation routine. The function \texttt{cvApproxChains} works in exactly the same way as \cvCPyCross{FindContours} with the corresponding approximation flag. The function returns a pointer to the first resultant contour. Other approximated contours, if any, can be accessed via the \texttt{v\_next} or \texttt{h\_next} fields of the returned structure.
\cvCPyFunc{ApproxPoly}
Approximates polygonal curve(s) with the specified precision.
CvSeq* cvApproxPoly( \par const void* src\_seq,\par int header\_size,\par CvMemStorage* storage,\par int method,\par double parameter,\par int parameter2=0 );
ApproxPoly(src\_seq, storage, method, parameter=0, parameter2=0)
\cvarg{src\_seq}{Sequence or array of points}
\cvarg{header\_size}{Header size of the approximated curve(s)}
\cvarg{storage}{Container for the approximated contours. If it is NULL, the input sequences' storage is used}
\cvarg{method}{Approximation method; only \texttt{CV\_POLY\_APPROX\_DP} is supported, which corresponds to the Douglas-Peucker algorithm}
\cvarg{parameter}{Method-specific parameter; in the case of \texttt{CV\_POLY\_APPROX\_DP} it is the desired approximation accuracy}
\cvarg{parameter2}{In case \texttt{src\_seq} is a sequence, the parameter determines whether the single sequence should be approximated, or all sequences on the same level and below \texttt{src\_seq} (see \cvCPyCross{FindContours} for a description of hierarchical contour structures). If \texttt{src\_seq} is an array (CvMat*) of points, the parameter specifies whether the curve is closed ($\texttt{parameter2} \ne 0$) or not ($\texttt{parameter2} = 0$)}

The function approximates one or more curves and
returns the approximation result(s). In the case of multiple curves,
the resultant tree will have the same structure as the input one (1:1
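The Douglas-Peucker scheme named by \texttt{CV\_POLY\_APPROX\_DP} can be illustrated with a minimal Python sketch. This is not the OpenCV implementation; the function name and the plain-tuple point format are ours, and it handles a single open polyline only:

```python
import math

def approx_poly_dp(points, eps):
    """Simplify an open polyline (Douglas-Peucker).

    points: list of (x, y) tuples; eps: maximum allowed deviation.
    Returns a subset of the input points; the endpoints are always kept."""
    def point_line_dist(p, a, b):
        # Perpendicular distance from p to the line through a and b.
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        norm = math.hypot(dx, dy)
        if norm == 0.0:
            return math.hypot(px - ax, py - ay)
        return abs(dx * (ay - py) - dy * (ax - px)) / norm

    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord connecting the endpoints.
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]   # the chord is close enough
    # Otherwise keep the farthest point and recurse on both halves.
    left = approx_poly_dp(points[:idx + 1], eps)
    right = approx_poly_dp(points[idx:], eps)
    return left[:-1] + right
```

Nearly collinear runs collapse to their endpoints, while any vertex farther than \texttt{eps} from the current chord survives.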
Calculates the contour perimeter or the curve length.
double cvArcLength( \par const void* curve,\par CvSlice slice=CV\_WHOLE\_SEQ,\par int is\_closed=-1 );
}\cvdefPy{ArcLength(curve,slice=CV\_WHOLE\_SEQ,is\_closed=-1)-> double}
\cvarg{curve}{Sequence or array of the curve points}
\cvarg{slice}{Starting and ending points of the curve; by default, the whole curve length is calculated}
\cvarg{is\_closed}{Indicates whether the curve is closed or not. There are 3 cases:
\item $\texttt{is\_closed} =0$ the curve is assumed to be unclosed.
\item $\texttt{is\_closed}>0$ the curve is assumed to be closed.
\item $\texttt{is\_closed}<0$ if the curve is a sequence, the flag \texttt{CV\_SEQ\_FLAG\_CLOSED} of \texttt{((CvSeq*)curve)->flags} is checked to determine if the curve is closed or not; otherwise (the curve is represented by an array (CvMat*) of points) it is assumed to be unclosed.

The function calculates the length of a curve as the sum of the lengths of segments between consecutive points.
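The sum-of-segment-lengths definition above can be written out directly. A hedged Python sketch (our own function, not the OpenCV one; it takes plain tuples rather than a CvSeq and has no slice argument):

```python
import math

def arc_length(points, is_closed=False):
    """Length of a polyline as the sum of distances between
    consecutive points; the closing segment is added on request."""
    total = 0.0
    for a, b in zip(points, points[1:]):
        total += math.hypot(b[0] - a[0], b[1] - a[1])
    if is_closed and len(points) > 1:
        total += math.hypot(points[0][0] - points[-1][0],
                            points[0][1] - points[-1][1])
    return total
```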
\cvCPyFunc{BoundingRect}
Calculates the up-right bounding rectangle of a point set.
CvRect cvBoundingRect( CvArr* points, int update=0 );
}\cvdefPy{BoundingRect(points,update=0)-> CvRect}
\cvarg{points}{2D point set, either a sequence or vector (\texttt{CvMat}) of points}
\cvarg{update}{The update flag. See below.}

The function returns the up-right bounding rectangle for a 2D point set.
Here is the list of possible combinations of the flag value and type of \texttt{points}:
\begin{tabular}{|c|c|p{3in}|}
update & points & action \\ \hline
0 & \texttt{CvContour\*} & the bounding rectangle is not calculated, but is taken from the \texttt{rect} field of the contour header.\\ \hline
1 & \texttt{CvContour\*} & the bounding rectangle is calculated and written to the \texttt{rect} field of the contour header.\\ \hline
0 & \texttt{CvSeq\*} or \texttt{CvMat\*} & the bounding rectangle is calculated and returned.\\ \hline
1 & \texttt{CvSeq\*} or \texttt{CvMat\*} & a runtime error is raised.\\ \hline
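The rectangle itself is just the min/max extent of the coordinates. A minimal Python sketch of that computation (our own helper, not the OpenCV function; note that for integer point sets \texttt{cvBoundingRect} widens the result so the boundary pixels are counted, i.e. width $= x_{max}-x_{min}+1$, which this sketch omits):

```python
def bounding_rect(points):
    """Up-right bounding rectangle as (x, y, width, height),
    using the plain geometric extent of the point set."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
```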
Finds the box vertices.
void cvBoxPoints( \par CvBox2D box,\par CvPoint2D32f pt[4] );
}\cvdefPy{BoxPoints(box)-> points}
\cvarg{pt}{Array of vertices}

The function calculates the vertices of the input 2D box. Here is the function code:
void cvBoxPoints( CvBox2D box, CvPoint2D32f pt[4] )
    float a = (float)cos(box.angle)*0.5f;
    float b = (float)sin(box.angle)*0.5f;
    pt[0].x = box.center.x - a*box.size.height - b*box.size.width;
    pt[0].y = box.center.y + b*box.size.height - a*box.size.width;
    pt[1].x = box.center.x + a*box.size.height - b*box.size.width;
    pt[1].y = box.center.y - b*box.size.height - a*box.size.width;
    pt[2].x = 2*box.center.x - pt[0].x;
    pt[2].y = 2*box.center.y - pt[0].y;
    pt[3].x = 2*box.center.x - pt[1].x;
    pt[3].y = 2*box.center.y - pt[1].y;
Calculates a pair-wise geometrical histogram for a contour.
void cvCalcPGH( const CvSeq* contour, CvHistogram* hist );
}\cvdefPy{CalcPGH(contour,hist)-> None}
\cvarg{contour}{Input contour. Currently, only integer point coordinates are allowed}
\cvarg{hist}{Calculated histogram; must be two-dimensional}

The function calculates a
2D pair-wise geometrical histogram (PGH), described in
\cvCPyCross{Iivarinen97},
for the contour. The algorithm considers every pair of contour
edges. The angle between the edges and the minimum/maximum distances
are determined for every pair. To do this, each of the edges in turn
is taken as the base, while the function loops through all the other
edges. When the base edge and any other edge are considered, the minimum
and maximum distances from the points on the non-base edge to the line of
the base edge are selected. The angle between the edges defines the row
of the histogram in which all the bins that correspond to the distances
between the calculated minimum and maximum distances are incremented
(that is, the histogram is transposed relative to the \cvCPyCross{Iivarinen97}
definition). The histogram can be used for contour matching.
Computes the "minimal work" distance between two weighted point configurations.
float cvCalcEMD2( \par const CvArr* signature1,\par const CvArr* signature2,\par int distance\_type,\par CvDistanceFunction distance\_func=NULL,\par const CvArr* cost\_matrix=NULL,\par CvArr* flow=NULL,\par float* lower\_bound=NULL,\par void* userdata=NULL );
}\cvdefPy{CalcEMD2(signature1, signature2, distance\_type, distance\_func = None, cost\_matrix=None, flow=None, lower\_bound=None, userdata = None) -> float}
typedef float (*CvDistanceFunction)(const float* f1, const float* f2, void* userdata);
\cvarg{signature1}{First signature, a $\texttt{size1}\times \texttt{dims}+1$ floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used}
\cvarg{signature2}{Second signature of the same format as \texttt{signature1}, though the number of rows may be different. The total weights may be different; in this case an extra "dummy" point is added to either \texttt{signature1} or \texttt{signature2}}
\cvarg{distance\_type}{Metric used; \texttt{CV\_DIST\_L1, CV\_DIST\_L2}, and \texttt{CV\_DIST\_C} stand for one of the standard metrics; \texttt{CV\_DIST\_USER} means that a user-defined function \texttt{distance\_func} or pre-calculated \texttt{cost\_matrix} is used}
\cvarg{distance\_func}{The user-defined distance function. It takes the coordinates of two points and returns the distance between the points}
\cvarg{cost\_matrix}{The user-defined $\texttt{size1}\times \texttt{size2}$ cost matrix. At least one of \texttt{cost\_matrix} and \texttt{distance\_func} must be NULL. Also, if a cost matrix is used, the lower boundary (see below) can not be calculated, because it needs a metric function}
\cvarg{flow}{The resultant $\texttt{size1} \times \texttt{size2}$ flow matrix: $\texttt{flow}_{i,j}$ is the flow from the $i$ th point of \texttt{signature1} to the $j$ th point of \texttt{signature2}}
\cvarg{lower\_bound}{Optional input/output parameter: the lower boundary of the distance between the two signatures, which is the distance between their mass centers. The lower boundary may not be calculated if the user-defined cost matrix is used, the total weights of the point configurations are not equal, or the signatures consist of weights only (i.e. the signature matrices have a single column). The user \textbf{must} initialize \texttt{*lower\_bound}. If the calculated distance between mass centers is greater than or equal to \texttt{*lower\_bound} (meaning that the signatures are far enough apart), the function does not calculate the EMD. In any case \texttt{*lower\_bound} is set to the calculated distance between mass centers on return. Thus, if the user wants to calculate both the distance between mass centers and the EMD, \texttt{*lower\_bound} should be set to 0}
\cvarg{userdata}{Pointer to optional data that is passed into the user-defined distance function}

The function computes the earth mover distance and/or
a lower boundary of the distance between the two weighted point
configurations. One of the applications described in \cvCPyCross{RubnerSept98} is
multi-dimensional histogram comparison for image retrieval. EMD is a
transportation problem that is solved using a modification of the simplex
algorithm, thus the complexity is exponential in the worst case, though on average
it is much faster. In the case of a real metric the lower boundary
can be calculated even faster (using a linear-time algorithm), and it can
be used to determine roughly whether the two signatures are far enough
apart that they cannot relate to the same object.
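One well-known special case makes the "minimal work" idea concrete without a simplex solver: for two 1-D histograms of equal total weight with $|i-j|$ as the ground distance, the EMD reduces to the L1 distance between the cumulative sums. A hedged Python sketch of that special case (our own helper; the general \texttt{cvCalcEMD2} is far more capable):

```python
def emd_1d(hist1, hist2):
    """EMD between two 1-D histograms of equal total weight, with
    |i - j| as the ground distance.  In this special case the
    transportation problem has a closed form: the L1 distance
    between the two cumulative distributions."""
    assert abs(sum(hist1) - sum(hist2)) < 1e-9, "totals must match"
    total, c1, c2 = 0.0, 0.0, 0.0
    for a, b in zip(hist1, hist2):
        c1 += a          # running mass of the first signature
        c2 += b          # running mass of the second signature
        total += abs(c1 - c2)   # mass that must still be carried right
    return total
```

Moving one unit of mass from bin 0 to bin 2, for example, costs 2, which is exactly what the cumulative-difference sum reports.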
\cvCPyFunc{CheckContourConvexity}
Tests contour convexity.
int cvCheckContourConvexity( const CvArr* contour );
}\cvdefPy{CheckContourConvexity(contour)-> int}
\cvarg{contour}{Tested contour (sequence or array of points)}

The function tests whether the input contour is convex or not. The contour must be simple, without self-intersections.
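The standard test behind such a check is that a simple closed polygon is convex iff all non-zero cross products of consecutive edge vectors share the same sign. A minimal Python sketch of that criterion (our own function, not the OpenCV implementation):

```python
def is_contour_convex(points):
    """True iff the simple closed polygon given as (x, y) tuples
    is convex: all turns go the same way around the contour."""
    n = len(points)
    sign = 0
    for i in range(n):
        ax, ay = points[i]
        bx, by = points[(i + 1) % n]
        cx, cy = points[(i + 2) % n]
        # Cross product of edge (a->b) with edge (b->c).
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False   # turn direction flipped: concave vertex
    return True
```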
\cvfunc{CvConvexityDefect}\label{CvConvexityDefect}
Structure describing a single contour convexity defect.
typedef struct CvConvexityDefect
    CvPoint* start; /* point of the contour where the defect begins */
    CvPoint* end; /* point of the contour where the defect ends */
    CvPoint* depth_point; /* the point within the defect farthest from the convex hull */
    float depth; /* distance between the farthest point and the convex hull */
% ===== Picture. Convexity defects of hand contour. =====
\includegraphics[width=0.5\textwidth]{pics/defects.png}
\cvCPyFunc{ContourArea}
Calculates the area of a whole contour or a contour section.
double cvContourArea( \par const CvArr* contour, \par CvSlice slice=CV\_WHOLE\_SEQ );
}\cvdefPy{ContourArea(contour,slice=CV\_WHOLE\_SEQ)-> double}
\cvarg{contour}{Contour (sequence or array of vertices)}
\cvarg{slice}{Starting and ending points of the contour section of interest; by default, the area of the whole contour is calculated}

The function calculates the area of a whole contour
or a contour section. In the latter case the total area bounded by the
contour arc and the chord connecting the 2 selected points is calculated,
as shown in the picture below:
\includegraphics[width=0.5\textwidth]{pics/contoursecarea.png}

The orientation of the contour affects the area sign, thus the function may return a \emph{negative} result. Use the \texttt{fabs()} function from the C runtime to get the absolute value of the area.
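The signed area of a simple polygon is given by the shoelace formula, which also explains the orientation-dependent sign mentioned above. A minimal Python sketch (our own helper; \texttt{cvContourArea} additionally supports slices):

```python
def contour_area(points):
    """Signed area of a simple polygon (shoelace formula).
    The sign reflects the vertex orientation: counter-clockwise
    contours give a positive area, clockwise a negative one."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        area += x0 * y1 - x1 * y0
    return area / 2.0
```

Reversing the vertex order flips the sign, exactly the effect \texttt{fabs()} is suggested for.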
\cvCPyFunc{ContourFromContourTree}
Restores a contour from the tree.
CvSeq* cvContourFromContourTree( \par const CvContourTree* tree,\par CvMemStorage* storage,\par CvTermCriteria criteria );
}\cvdefPy{ContourFromContourTree(tree,storage,criteria)-> contour}
\cvarg{tree}{Contour tree}
\cvarg{storage}{Container for the reconstructed contour}
\cvarg{criteria}{Criteria determining when to stop reconstruction}

The function restores the contour from its binary tree representation. The parameter \texttt{criteria} determines the accuracy and/or the number of tree levels used for reconstruction, so it is possible to build an approximated contour. The function returns the reconstructed contour.
\cvCPyFunc{ConvexHull2}
Finds the convex hull of a point set.
CvSeq* cvConvexHull2( \par const CvArr* input,\par void* hull\_storage=NULL,\par int orientation=CV\_CLOCKWISE,\par int return\_points=0 );
}\cvdefPy{ConvexHull2(points,storage,orientation=CV\_CLOCKWISE,return\_points=0)-> convex\_hull}
\cvarg{points}{Sequence or array of 2D points with 32-bit integer or floating-point coordinates}
\cvarg{hull\_storage}{The destination array (CvMat*) or memory storage (CvMemStorage*) that will store the convex hull. If it is an array, it should be 1D and have the same number of elements as the input array/sequence. On output the header is modified to truncate the array down to the hull size. If \texttt{hull\_storage} is NULL then the convex hull will be stored in the same storage as the input sequence}
\cvarg{orientation}{Desired orientation of the convex hull: \texttt{CV\_CLOCKWISE} or \texttt{CV\_COUNTER\_CLOCKWISE}}
\cvarg{return\_points}{If non-zero, the points themselves will be stored in the hull instead of indices if \texttt{hull\_storage} is an array, or pointers if \texttt{hull\_storage} is memory storage}

The function finds the convex hull of a 2D point set using Sklansky's algorithm. If \texttt{hull\_storage} is memory storage, the function creates a sequence containing the hull points or pointers to them, depending on the \texttt{return\_points} value, and returns the sequence. If \texttt{hull\_storage} is a CvMat, the function returns NULL.
% ===== Example. Building convex hull for a sequence or array of points =====
#define ARRAY 0 /* switch between array/sequence method by replacing 0<=>1 */
void main( int argc, char** argv )
IplImage* img = cvCreateImage( cvSize( 500, 500 ), 8, 3 );
cvNamedWindow( "hull", 1 );
CvMemStorage* storage = cvCreateMemStorage();
int i, count = rand()%100 + 1, hullcount;
CvSeq* ptseq = cvCreateSeq( CV_SEQ_KIND_GENERIC|CV_32SC2,
for( i = 0; i < count; i++ )
pt0.x = rand() % (img->width/2) + img->width/4;
pt0.y = rand() % (img->height/2) + img->height/4;
cvSeqPush( ptseq, &pt0 );
hull = cvConvexHull2( ptseq, 0, CV_CLOCKWISE, 0 );
hullcount = hull->total;
CvPoint* points = (CvPoint*)malloc( count * sizeof(points[0]));
int* hull = (int*)malloc( count * sizeof(hull[0]));
CvMat point_mat = cvMat( 1, count, CV_32SC2, points );
CvMat hull_mat = cvMat( 1, count, CV_32SC1, hull );
for( i = 0; i < count; i++ )
pt0.x = rand() % (img->width/2) + img->width/4;
pt0.y = rand() % (img->height/2) + img->height/4;
cvConvexHull2( &point_mat, &hull_mat, CV_CLOCKWISE, 0 );
hullcount = hull_mat.cols;
for( i = 0; i < count; i++ )
pt0 = *CV_GET_SEQ_ELEM( CvPoint, ptseq, i );
cvCircle( img, pt0, 2, CV_RGB( 255, 0, 0 ), CV_FILLED );
pt0 = **CV_GET_SEQ_ELEM( CvPoint*, hull, hullcount - 1 );
pt0 = points[hull[hullcount-1]];
for( i = 0; i < hullcount; i++ )
CvPoint pt = **CV_GET_SEQ_ELEM( CvPoint*, hull, i );
CvPoint pt = points[hull[i]];
cvLine( img, pt0, pt, CV_RGB( 0, 255, 0 ));
cvShowImage( "hull", img );
int key = cvWaitKey(0);
if( key == 27 ) // 'ESC'
cvClearMemStorage( storage );
\cvCPyFunc{ConvexityDefects}
Finds the convexity defects of a contour.
CvSeq* cvConvexityDefects( \par const CvArr* contour,\par const CvArr* convexhull,\par CvMemStorage* storage=NULL );
}\cvdefPy{ConvexityDefects(contour,convexhull,storage)-> convexity\_defects}
\cvarg{contour}{Input contour}
\cvarg{convexhull}{Convex hull obtained using \cvCPyCross{ConvexHull2} that should contain pointers or indices to the contour points, not the hull points themselves (the \texttt{return\_points} parameter in \cvCPyCross{ConvexHull2} should be 0)}
\cvarg{storage}{Container for the output sequence of convexity defects. If it is NULL, the contour or hull (in that order) storage is used}

The function finds all convexity defects of the input contour and returns a sequence of CvConvexityDefect structures.
\cvCPyFunc{CreateContourTree}
Creates a hierarchical representation of a contour.
CvContourTree* cvCreateContourTree( \par const CvSeq* contour,\par CvMemStorage* storage,\par double threshold );
}\cvdefPy{CreateContourTree(contour,storage,threshold)-> contour\_tree}
\cvarg{contour}{Input contour}
\cvarg{storage}{Container for the output tree}
\cvarg{threshold}{Approximation accuracy}

The function creates a binary tree representation for the input \texttt{contour} and returns the pointer to its root. If the parameter \texttt{threshold} is less than or equal to 0, the function creates a full binary tree representation. If the threshold is greater than 0, the function creates a representation with the precision \texttt{threshold}: vertices whose base-line interceptive area is less than \texttt{threshold} are not decomposed any further. The function returns the created tree.
\cvCPyFunc{EndFindContours}
Finishes the scanning process.
CvSeq* cvEndFindContours( \par CvContourScanner* scanner );
\cvarg{scanner}{Pointer to the contour scanner}

The function finishes the scanning process and returns a pointer to the first contour on the highest level.
\cvCPyFunc{FindContours}
Finds the contours in a binary image.
int cvFindContours(\par CvArr* image,\par CvMemStorage* storage,\par CvSeq** first\_contour,\par
int header\_size=sizeof(CvContour),\par int mode=CV\_RETR\_LIST,\par
int method=CV\_CHAIN\_APPROX\_SIMPLE,\par CvPoint offset=cvPoint(0,0) );
}\cvdefPy{FindContours(image, storage, mode=CV\_RETR\_LIST, method=CV\_CHAIN\_APPROX\_SIMPLE, offset=(0,0)) -> cvseq}
\cvarg{image}{The source, an 8-bit single channel image. Non-zero pixels are treated as 1's; zero pixels remain 0's, so the image is treated as \texttt{binary}. To get such a binary image from grayscale, one may use \cvCPyCross{Threshold}, \cvCPyCross{AdaptiveThreshold} or \cvCPyCross{Canny}. The function modifies the source image's content}
\cvarg{storage}{Container of the retrieved contours}
\cvarg{first\_contour}{Output parameter, will contain the pointer to the first outer contour}
\cvarg{header\_size}{Size of the sequence header, $\ge \texttt{sizeof(CvChain)}$ if $\texttt{method} =\texttt{CV\_CHAIN\_CODE}$,
and $\ge \texttt{sizeof(CvContour)}$ otherwise}
\cvarg{mode}{Retrieval mode
\cvarg{CV\_RETR\_EXTERNAL}{retrieves only the extreme outer contours}
\cvarg{CV\_RETR\_LIST}{retrieves all of the contours and puts them in the list}
\cvarg{CV\_RETR\_CCOMP}{retrieves all of the contours and organizes them into a two-level hierarchy: on the top level are the external boundaries of the components, on the second level are the boundaries of the holes}
\cvarg{CV\_RETR\_TREE}{retrieves all of the contours and reconstructs the full hierarchy of nested contours}
\cvarg{method}{Approximation method (for all the modes, except \texttt{CV\_LINK\_RUNS}, which uses built-in approximation)
\cvarg{CV\_CHAIN\_CODE}{outputs contours in the Freeman chain code. All other methods output polygons (sequences of vertices)}
\cvarg{CV\_CHAIN\_APPROX\_NONE}{translates all of the points from the chain code into points}
\cvarg{CV\_CHAIN\_APPROX\_SIMPLE}{compresses horizontal, vertical, and diagonal segments and leaves only their end points}
\cvarg{CV\_CHAIN\_APPROX\_TC89\_L1,CV\_CHAIN\_APPROX\_TC89\_KCOS}{applies one of the flavors of the Teh-Chin chain approximation algorithm.}
\cvarg{CV\_LINK\_RUNS}{uses a completely different contour retrieval algorithm by linking horizontal segments of 1's. Only the \texttt{CV\_RETR\_LIST} retrieval mode can be used with this method.}
\cvarg{offset}{Offset by which every contour point is shifted. This is useful if the contours are extracted from an image ROI but should then be analyzed in the whole image context}

The function retrieves contours from the
binary image and returns the number of retrieved contours. The
pointer \texttt{first\_contour} is filled in by the function. It will
contain a pointer to the first outermost contour, or \texttt{NULL} if no
contours are detected (if the image is completely black). Other
contours may be reached from \texttt{first\_contour} using the
\texttt{h\_next} and \texttt{v\_next} links. The sample in the
\cvCPyCross{DrawContours} discussion shows how to use contours for
connected component detection. Contours can also be used for shape
analysis and object recognition; see \texttt{squares.c} in the OpenCV
\cvCPyFunc{FindNextContour}
Finds the next contour in the image.
CvSeq* cvFindNextContour( \par CvContourScanner scanner );
\cvarg{scanner}{Contour scanner initialized by \cvCPyCross{StartFindContours}}

The function locates and retrieves the next contour in the image and returns a pointer to it. The function returns NULL if there are no more contours.
\cvCPyFunc{FitEllipse}
Fits an ellipse around a set of 2D points.
CvBox2D cvFitEllipse2( \par const CvArr* points );
}\cvdefPy{FitEllipse2(points)-> Box2D}
\cvarg{points}{Sequence or array of points}

The function calculates the ellipse that fits best
(in a least-squares sense) around a set of 2D points. The meaning of the
returned structure fields is similar to those in \cvCPyCross{Ellipse} except
that \texttt{size} stores the full lengths of the ellipse axes,
Fits a line to a 2D or 3D point set.
void cvFitLine( \par const CvArr* points,\par int dist\_type,\par double param,\par double reps,\par double aeps,\par float* line );
}\cvdefPy{FitLine(points, dist\_type, param, reps, aeps) -> line}
\cvarg{points}{Sequence or array of 2D or 3D points with 32-bit integer or floating-point coordinates}
\cvarg{dist\_type}{The distance used for fitting (see the discussion)}
\cvarg{param}{Numerical parameter (\texttt{C}) for some types of distances; if 0, an optimal value is chosen}
\cvarg{reps, aeps}{Sufficient accuracy for the radius (distance between the coordinate origin and the line) and angle, respectively; 0.01 is a good default value for both.}
\cvarg{line}{The output line parameters. In the case of a 2D fitting,
it is \cvC{an array} \cvPy{a tuple} of 4 floats \texttt{(vx, vy,
x0, y0)} where \texttt{(vx, vy)} is a normalized vector collinear to the
line and \texttt{(x0, y0)} is some point on the line. In the case of a
3D fitting it is \cvC{an array} \cvPy{a tuple} of 6 floats \texttt{(vx, vy, vz, x0, y0, z0)}
where \texttt{(vx, vy, vz)} is a normalized vector collinear to the line
and \texttt{(x0, y0, z0)} is some point on the line}
The function fits a line to a 2D or 3D point set by minimizing $\sum_i \rho(r_i)$ where $r_i$ is the distance between the $i$ th point and the line and $\rho(r)$ is a distance function, one of:
\item[dist\_type=CV\_DIST\_L2]
\[ \rho(r) = r^2/2 \quad \text{(the simplest and the fastest least-squares method)} \]
\item[dist\_type=CV\_DIST\_L1]
\[ \rho(r) = r \]
\item[dist\_type=CV\_DIST\_L12]
\[ \rho(r) = 2 \cdot (\sqrt{1 + \frac{r^2}{2}} - 1) \]
\item[dist\_type=CV\_DIST\_FAIR]
\[ \rho\left(r\right) = C^2 \cdot \left( \frac{r}{C} - \log{\left(1 + \frac{r}{C}\right)}\right) \quad \text{where} \quad C=1.3998 \]
\item[dist\_type=CV\_DIST\_WELSCH]
\[ \rho\left(r\right) = \frac{C^2}{2} \cdot \left( 1 - \exp{\left(-\left(\frac{r}{C}\right)^2\right)}\right) \quad \text{where} \quad C=2.9846 \]
\item[dist\_type=CV\_DIST\_HUBER]
\[ \rho(r) = \fork{r^2/2}{if $r < C$}{C \cdot (r-C/2)}{otherwise} \quad \text{where} \quad C=1.345 \]
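For the default least-squares case ($\rho(r)=r^2/2$, i.e. \texttt{CV\_DIST\_L2} for a 2D set) the fit has a closed form: the line passes through the centroid along the principal axis of the covariance matrix. A hedged Python sketch (our own function; it returns the same \texttt{(vx, vy, x0, y0)} layout as \texttt{cvFitLine} but is not the OpenCV implementation, which also handles the robust distance types iteratively):

```python
import math

def fit_line_l2(points):
    """Total least-squares 2-D line fit: (x0, y0) is the centroid and
    (vx, vy) the unit direction minimizing the summed squared
    point-to-line distance (principal axis of the scatter matrix)."""
    n = len(points)
    x0 = sum(p[0] for p in points) / n
    y0 = sum(p[1] for p in points) / n
    sxx = sum((p[0] - x0) ** 2 for p in points)
    syy = sum((p[1] - y0) ** 2 for p in points)
    sxy = sum((p[0] - x0) * (p[1] - y0) for p in points)
    # Angle of the dominant eigenvector of [[sxx, sxy], [sxy, syy]].
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (math.cos(theta), math.sin(theta), x0, y0)
```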
\cvCPyFunc{GetCentralMoment}
Retrieves the central moment from the moment state structure.
double cvGetCentralMoment( \par CvMoments* moments,\par int x\_order,\par int y\_order );
}\cvdefPy{GetCentralMoment(cvmoments, x\_order, y\_order) -> double}
\cvarg{moments}{Pointer to the moment state structure}
\cvarg{x\_order}{x order of the retrieved moment, $\texttt{x\_order} \ge 0$}
\cvarg{y\_order}{y order of the retrieved moment, $\texttt{y\_order} \ge 0$ and $\texttt{x\_order} + \texttt{y\_order} \le 3$}

The function retrieves the central moment, which in the case of image moments is defined as:
\[ \mu_{x\_order, \, y\_order} = \sum_{x,y} (I(x,y) \cdot (x-x_c)^{x\_order} \cdot (y-y_c)^{y\_order}) \]
where $x_c,y_c$ are the coordinates of the gravity center:
\[ x_c=\frac{M_{10}}{M_{00}}, \quad y_c=\frac{M_{01}}{M_{00}} \]
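The definition above can be evaluated directly from pixel data. A minimal Python sketch (our own helper operating on a list of rows, not the OpenCV moment state structure):

```python
def central_moment(image, p, q):
    """Central moment mu_pq of a grayscale image given as a list of
    rows, per the definition used by cvGetCentralMoment."""
    # Spatial moments needed for the gravity center.
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            m00 += v
            m10 += v * x
            m01 += v * y
    xc, yc = m10 / m00, m01 / m00
    # Sum of intensities weighted by centered coordinate powers.
    return sum(v * (x - xc) ** p * (y - yc) ** q
               for y, row in enumerate(image)
               for x, v in enumerate(row))
```

For a uniform $2\times 2$ patch, $\mu_{00}$ is the total mass and the odd mixed moment $\mu_{11}$ vanishes by symmetry.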
\cvCPyFunc{GetNormalizedCentralMoment}
Retrieves the normalized central moment from the moment state structure.
double cvGetNormalizedCentralMoment( \par CvMoments* moments,\par int x\_order,\par int y\_order );
}\cvdefPy{GetNormalizedCentralMoment(cvmoments, x\_order, y\_order) -> double}
\cvarg{moments}{Pointer to the moment state structure}
\cvarg{x\_order}{x order of the retrieved moment, $\texttt{x\_order} \ge 0$}
\cvarg{y\_order}{y order of the retrieved moment, $\texttt{y\_order} \ge 0$ and $\texttt{x\_order} + \texttt{y\_order} \le 3$}

The function retrieves the normalized central moment:
\[ \eta_{x\_order, \, y\_order} = \frac{\mu_{x\_order, \, y\_order}}{M_{00}^{(y\_order+x\_order)/2+1}} \]
\cvCPyFunc{GetSpatialMoment}
Retrieves the spatial moment from the moment state structure.
double cvGetSpatialMoment( \par CvMoments* moments, \par int x\_order, \par int y\_order );
}\cvdefPy{GetSpatialMoment(cvmoments, x\_order, y\_order) -> double}
\cvarg{moments}{The moment state, calculated by \cvCPyCross{Moments}}
\cvarg{x\_order}{x order of the retrieved moment, $\texttt{x\_order} \ge 0$}
\cvarg{y\_order}{y order of the retrieved moment, $\texttt{y\_order} \ge 0$ and $\texttt{x\_order} + \texttt{y\_order} \le 3$}

The function retrieves the spatial moment, which in the case of image moments is defined as:
\[ M_{x\_order, \, y\_order} = \sum_{x,y} (I(x,y) \cdot x^{x\_order} \cdot y^{y\_order}) \]
where $I(x,y)$ is the intensity of the pixel $(x, y)$.
\cvCPyFunc{MatchContourTrees}
Compares two contours using their tree representations.
double cvMatchContourTrees( \par const CvContourTree* tree1,\par const CvContourTree* tree2,\par int method,\par double threshold );
}\cvdefPy{MatchContourTrees(tree1,tree2,method,threshold)-> double}
\cvarg{tree1}{First contour tree}
\cvarg{tree2}{Second contour tree}
\cvarg{method}{Similarity measure; only \texttt{CV\_CONTOUR\_TREES\_MATCH\_I1} is supported}
\cvarg{threshold}{Similarity threshold}

The function calculates the value of the matching measure for two contour trees. The similarity measure is calculated level by level from the binary tree roots. If at a certain level the difference between the contours becomes less than \texttt{threshold}, the reconstruction process is interrupted and the current difference is returned.
\cvCPyFunc{MatchShapes}
double cvMatchShapes( \par const void* object1,\par const void* object2,\par int method,\par double parameter=0 );
}\cvdefPy{MatchShapes(object1,object2,method,parameter=0)-> double}
\cvarg{object1}{First contour or grayscale image}
\cvarg{object2}{Second contour or grayscale image}
\cvarg{method}{Comparison method;
\texttt{CV\_CONTOURS\_MATCH\_I1},
\texttt{CV\_CONTOURS\_MATCH\_I2}
\texttt{CV\_CONTOURS\_MATCH\_I3}}
\cvarg{parameter}{Method-specific parameter (not used now)}

The function compares two shapes. The 3 implemented methods all use Hu moments (see \cvCPyCross{GetHuMoments}) ($A$ is \texttt{object1}, $B$ is \texttt{object2}):
\item[method=CV\_CONTOURS\_MATCH\_I1]
\[ I_1(A,B) = \sum_{i=1...7} \left| \frac{1}{m^A_i} - \frac{1}{m^B_i} \right| \]
\item[method=CV\_CONTOURS\_MATCH\_I2]
\[ I_2(A,B) = \sum_{i=1...7} \left| m^A_i - m^B_i \right| \]
\item[method=CV\_CONTOURS\_MATCH\_I3]
\[ I_3(A,B) = \sum_{i=1...7} \frac{ \left| m^A_i - m^B_i \right| }{ \left| m^A_i \right| } \]
where
\[ m^A_i = \mathrm{sign}(h^A_i) \cdot \log{h^A_i} \]
\[ m^B_i = \mathrm{sign}(h^B_i) \cdot \log{h^B_i} \]
and $h^A_i, h^B_i$ are the Hu moments of $A$ and $B$, respectively.
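Given the two 7-vectors of Hu moments, the $I_1$ measure is a one-liner. A hedged Python sketch (our own helper; computing the Hu moments themselves, which \texttt{cvMatchShapes} does internally, is out of scope here, and zero moments are simply skipped):

```python
import math

def match_i1(hu_a, hu_b):
    """I1-style dissimilarity of two 7-vectors of Hu moments:
    sum |1/m_A - 1/m_B| with m = sign(h) * log|h|."""
    def m(h):
        return math.copysign(1.0, h) * math.log(abs(h))
    return sum(abs(1.0 / m(a) - 1.0 / m(b))
               for a, b in zip(hu_a, hu_b) if a != 0 and b != 0)
```

Identical moment vectors give 0; the measure grows as the shapes diverge.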
\cvCPyFunc{MinAreaRect2}
Finds the circumscribed rectangle of minimal area for a given 2D point set.
CvBox2D cvMinAreaRect2( \par const CvArr* points,\par CvMemStorage* storage=NULL );
}\cvdefPy{MinAreaRect2(points,storage)-> CvBox2D}
\cvarg{points}{Sequence or array of points}
\cvarg{storage}{Optional temporary memory storage}

The function finds a circumscribed rectangle of the minimal area for a 2D point set by building a convex hull for the set and applying the rotating calipers technique to the hull.

\cvfunc{Picture. Minimal-area bounding rectangle for contour}
\includegraphics[width=0.5\textwidth]{pics/minareabox.png}
\cvCPyFunc{MinEnclosingCircle}
Finds the circumscribed circle of minimal area for a given 2D point set.
int cvMinEnclosingCircle( \par const CvArr* points,\par CvPoint2D32f* center,\par float* radius );
}\cvdefPy{MinEnclosingCircle(points)-> int,center,radius}
\cvarg{points}{Sequence or array of 2D points}
\cvarg{center}{Output parameter; the center of the enclosing circle}
\cvarg{radius}{Output parameter; the radius of the enclosing circle}

The function finds the minimal circumscribed
circle for a 2D point set using an iterative algorithm. It returns nonzero
if the resultant circle contains all the input points and zero otherwise
(i.e. the algorithm failed).
Calculates all of the moments up to the third order of a polygon or rasterized shape.
void cvMoments( \par const CvArr* arr,\par CvMoments* moments,\par int binary=0 );
}\cvdefPy{Moments(arr) -> cvmoments}
\cvarg{arr}{Image (1-channel or 3-channel with COI set) or polygon (CvSeq of points or a vector of points)}
\cvarg{moments}{Pointer to the returned moment state structure}
\cvarg{binary}{(For images only) If the flag is non-zero, all of the zero pixel values are treated as zeroes, and all of the others are treated as 1's}

The function calculates spatial and central moments up to the third order and writes them to \texttt{moments}. The moments may then be used to calculate the gravity center of the shape, its area, main axes, and various shape characteristics, including the 7 Hu invariants.
699 \cvCPyFunc{PointPolygonTest}
700 Point in contour test.
703 double cvPointPolygonTest( \par const CvArr* contour,\par CvPoint2D32f pt,\par int measure\_dist );
704 }\cvdefPy{PointPolygonTest(contour,pt,measure\_dist)-> double}
707 \cvarg{contour}{Input contour}
708 \cvarg{pt}{The point tested against the contour}
709 \cvarg{measure\_dist}{If it is non-zero, the function estimates the distance from the point to the nearest contour edge}
The function determines whether the
point is inside a contour, outside, or lies on an edge (or coincides
with a vertex). It returns a positive, negative or zero value,
correspondingly. When $\texttt{measure\_dist} = 0$, the return value
is +1, -1 or 0, respectively. When $\texttt{measure\_dist} \ne 0$,
it is the signed distance between the point and the nearest contour edge.
720 Here is the sample output of the function, where each image pixel is tested against the contour.
722 \includegraphics[width=0.5\textwidth]{pics/pointpolygon.png}
726 \cvCPyFunc{PointSeqFromMat}
727 Initializes a point sequence header from a point vector.
730 CvSeq* cvPointSeqFromMat( \par int seq\_kind,\par const CvArr* mat,\par CvContour* contour\_header,\par CvSeqBlock* block );
734 \cvarg{seq\_kind}{Type of the point sequence: point set (0), a curve (\texttt{CV\_SEQ\_KIND\_CURVE}), closed curve (\texttt{CV\_SEQ\_KIND\_CURVE+CV\_SEQ\_FLAG\_CLOSED}) etc.}
735 \cvarg{mat}{Input matrix. It should be a continuous, 1-dimensional vector of points, that is, it should have type \texttt{CV\_32SC2} or \texttt{CV\_32FC2}}
736 \cvarg{contour\_header}{Contour header, initialized by the function}
737 \cvarg{block}{Sequence block header, initialized by the function}
740 The function initializes a sequence
741 header to create a "virtual" sequence in which elements reside in
742 the specified matrix. No data is copied. The initialized sequence
743 header may be passed to any function that takes a point sequence
744 on input. No extra elements can be added to the sequence,
745 but some may be removed. The function is a specialized variant of
746 \cvCPyCross{MakeSeqHeaderForArray} and uses
747 the latter internally. It returns a pointer to the initialized contour
header. Note that the bounding rectangle (the \texttt{rect} field of the
\texttt{CvContour} structure) is not initialized by the function. If
you need one, use \cvCPyCross{BoundingRect}.
Here is a simple usage example:

\begin{lstlisting}
CvContour header;
CvSeqBlock block;
CvMat* vector = cvCreateMat( 1, 3, CV_32SC2 );

CV_MAT_ELEM( *vector, CvPoint, 0, 0 ) = cvPoint(100,100);
CV_MAT_ELEM( *vector, CvPoint, 0, 1 ) = cvPoint(100,200);
CV_MAT_ELEM( *vector, CvPoint, 0, 2 ) = cvPoint(200,100);

IplImage* img = cvCreateImage( cvSize(300,300), 8, 3 );
cvZero(img);

cvDrawContours( img,
    cvPointSeqFromMat(CV_SEQ_KIND_CURVE+CV_SEQ_FLAG_CLOSED,
                      vector, &header, &block),
    CV_RGB(255,0,0), CV_RGB(255,0,0),
    0, 3, 8, cvPoint(0,0));
\end{lstlisting}
777 \cvCPyFunc{ReadChainPoint}
778 Gets the next chain point.
781 CvPoint cvReadChainPoint( CvChainPtReader* reader );
785 \cvarg{reader}{Chain reader state}
788 The function returns the current chain point and updates the reader position.
790 \cvCPyFunc{StartFindContours}
791 Initializes the contour scanning process.
794 CvContourScanner cvStartFindContours(\par CvArr* image,\par CvMemStorage* storage,\par
795 int header\_size=sizeof(CvContour),\par
796 int mode=CV\_RETR\_LIST,\par
797 int method=CV\_CHAIN\_APPROX\_SIMPLE,\par
798 CvPoint offset=cvPoint(0,\par0) );
802 \cvarg{image}{The 8-bit, single channel, binary source image}
803 \cvarg{storage}{Container of the retrieved contours}
\cvarg{header\_size}{Size of the sequence header, $\ge \texttt{sizeof(CvChain)}$ if \texttt{method}=\texttt{CV\_CHAIN\_CODE}, and $\ge \texttt{sizeof(CvContour)}$ otherwise}
\cvarg{mode}{Retrieval mode; see \cvCPyCross{FindContours}}
\cvarg{method}{Approximation method. It has the same meaning as in \cvCPyCross{FindContours}, but \texttt{CV\_LINK\_RUNS} can not be used here}
807 \cvarg{offset}{ROI offset; see \cvCPyCross{FindContours}}
810 The function initializes and returns a pointer to the contour scanner. The scanner is used in \cvCPyCross{FindNextContour} to retrieve the rest of the contours.
812 \cvCPyFunc{StartReadChainPoints}
813 Initializes the chain reader.
816 void cvStartReadChainPoints( CvChain* chain, CvChainPtReader* reader );
819 The function initializes a special reader.
821 \cvCPyFunc{SubstituteContour}
822 Replaces a retrieved contour.
825 void cvSubstituteContour( \par CvContourScanner scanner, \par CvSeq* new\_contour );
829 \cvarg{scanner}{Contour scanner initialized by \cvCPyCross{StartFindContours} }
830 \cvarg{new\_contour}{Substituting contour}
The function replaces the retrieved
contour that was returned from the preceding call of
\cvCPyCross{FindNextContour} and stored inside the contour scanner
state with the user-specified contour. The contour is inserted
into the resulting structure, list, two-level hierarchy, or tree,
depending on the retrieval mode. If the parameter \texttt{new\_contour}
is \texttt{NULL}, the retrieved contour is not included in the
resulting structure, nor are any of its children that might be added
to this structure later.
\cvCppFunc{moments}
Calculates all of the moments up to the third order of a polygon or rasterized shape.

\cvdefCpp{Moments moments( const Mat\& array, bool binaryImage=false );}

where \texttt{Moments} is defined as:

\begin{lstlisting}
class Moments
{
public:
    Moments();
    Moments(double m00, double m10, double m01, double m20, double m11,
            double m02, double m30, double m21, double m12, double m03 );
    Moments( const CvMoments& moments );
    operator CvMoments() const;

    // spatial moments
    double m00, m10, m01, m20, m11, m02, m30, m21, m12, m03;
    // central moments
    double mu20, mu11, mu02, mu30, mu21, mu12, mu03;
    // central normalized moments
    double nu20, nu11, nu02, nu30, nu21, nu12, nu03;
};
\end{lstlisting}
876 \cvarg{array}{A raster image (single-channel, 8-bit or floating-point 2D array) or an array
877 ($1 \times N$ or $N \times 1$) of 2D points (\texttt{Point} or \texttt{Point2f})}
878 \cvarg{binaryImage}{(For images only) If it is true, then all the non-zero image pixels are treated as 1's}
881 The function computes moments, up to the 3rd order, of a vector shape or a rasterized shape.
882 In case of a raster image, the spatial moments $\texttt{Moments::m}_{ji}$ are computed as:
884 \[\texttt{m}_{ji}=\sum_{x,y} \left(\texttt{array}(x,y) \cdot x^j \cdot y^i\right),\]
886 the central moments $\texttt{Moments::mu}_{ji}$ are computed as:
887 \[\texttt{mu}_{ji}=\sum_{x,y} \left(\texttt{array}(x,y) \cdot (x - \bar{x})^j \cdot (y - \bar{y})^i\right)\]
888 where $(\bar{x}, \bar{y})$ is the mass center:
891 \bar{x}=\frac{\texttt{m}_{10}}{\texttt{m}_{00}},\; \bar{y}=\frac{\texttt{m}_{01}}{\texttt{m}_{00}}
and the normalized central moments $\texttt{Moments::nu}_{ji}$ are computed as:
\[\texttt{nu}_{ji}=\frac{\texttt{mu}_{ji}}{\texttt{m}_{00}^{(i+j)/2+1}}.\]

Note that $\texttt{mu}_{00}=\texttt{m}_{00}$, $\texttt{nu}_{00}=1$, and $\texttt{nu}_{10}=\texttt{mu}_{10}=\texttt{nu}_{01}=\texttt{mu}_{01}=0$; hence these values are not stored.
899 The moments of a contour are defined in the same way, but computed using Green's formula
900 (see \url{http://en.wikipedia.org/wiki/Green_theorem}), therefore, because of a limited raster resolution, the moments computed for a contour will be slightly different from the moments computed for the same contour rasterized.
902 See also: \cvCppCross{contourArea}, \cvCppCross{arcLength}
904 \cvCppFunc{HuMoments}
905 Calculates the seven Hu invariants.
907 \cvdefCpp{void HuMoments( const Moments\& moments, double hu[7] );}
909 \cvarg{moments}{The input moments, computed with \cvCppCross{moments}}
910 \cvarg{hu}{The output Hu invariants}
913 The function calculates the seven Hu invariants, see \url{http://en.wikipedia.org/wiki/Image_moment}, that are defined as:
916 h[0]=\eta_{20}+\eta_{02}\\
917 h[1]=(\eta_{20}-\eta_{02})^{2}+4\eta_{11}^{2}\\
918 h[2]=(\eta_{30}-3\eta_{12})^{2}+ (3\eta_{21}-\eta_{03})^{2}\\
919 h[3]=(\eta_{30}+\eta_{12})^{2}+ (\eta_{21}+\eta_{03})^{2}\\
920 h[4]=(\eta_{30}-3\eta_{12})(\eta_{30}+\eta_{12})[(\eta_{30}+\eta_{12})^{2}-3(\eta_{21}+\eta_{03})^{2}]+(3\eta_{21}-\eta_{03})(\eta_{21}+\eta_{03})[3(\eta_{30}+\eta_{12})^{2}-(\eta_{21}+\eta_{03})^{2}]\\
921 h[5]=(\eta_{20}-\eta_{02})[(\eta_{30}+\eta_{12})^{2}- (\eta_{21}+\eta_{03})^{2}]+4\eta_{11}(\eta_{30}+\eta_{12})(\eta_{21}+\eta_{03})\\
922 h[6]=(3\eta_{21}-\eta_{03})(\eta_{21}+\eta_{03})[3(\eta_{30}+\eta_{12})^{2}-(\eta_{21}+\eta_{03})^{2}]-(\eta_{30}-3\eta_{12})(\eta_{21}+\eta_{03})[3(\eta_{30}+\eta_{12})^{2}-(\eta_{21}+\eta_{03})^{2}]\\
926 where $\eta_{ji}$ stand for $\texttt{Moments::nu}_{ji}$.
These values are proved to be invariant to the image scale, rotation, and reflection, except for the seventh one, whose sign is changed by reflection. Of course, this invariance was proved under the assumption of infinite image resolution. In the case of raster images, the computed Hu invariants for the original and transformed images will be a bit different.
930 See also: \cvCppCross{matchShapes}
932 \cvCppFunc{findContours}
933 Finds the contours in a binary image.
935 \cvdefCpp{void findContours( const Mat\& image, vector<vector<Point> >\& contours,\par
936 vector<Vec4i>\& hierarchy, int mode,\par
937 int method, Point offset=Point());\newline
938 void findContours( const Mat\& image, vector<vector<Point> >\& contours,\par
939 int mode, int method, Point offset=Point());
942 \cvarg{image}{The source, an 8-bit single-channel image. Non-zero pixels are treated as 1's, zero pixels remain 0's - the image is treated as \texttt{binary}. You can use \cvCppCross{compare}, \cvCppCross{inRange}, \cvCppCross{threshold}, \cvCppCross{adaptiveThreshold}, \cvCppCross{Canny} etc. to create a binary image out of a grayscale or color one. The function modifies the \texttt{image} while extracting the contours}
943 \cvarg{contours}{The detected contours. Each contour is stored as a vector of points}
\cvarg{hierarchy}{The optional output vector that will contain information about the image topology. It will have as many elements as the number of contours. For each contour \texttt{contours[i]}, the elements \texttt{hierarchy[i][0]}, \texttt{hierarchy[i][1]}, \texttt{hierarchy[i][2]}, and \texttt{hierarchy[i][3]} will be set to 0-based indices in \texttt{contours} of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for some contour \texttt{i} there are no next, previous, parent or nested contours, the corresponding elements of \texttt{hierarchy[i]} will be negative}
945 \cvarg{mode}{The contour retrieval mode
947 \cvarg{RETR\_EXTERNAL}{retrieves only the extreme outer contours; It will set \texttt{hierarchy[i][2]=hierarchy[i][3]=-1} for all the contours}
948 \cvarg{RETR\_LIST}{retrieves all of the contours without establishing any hierarchical relationships}
949 \cvarg{RETR\_CCOMP}{retrieves all of the contours and organizes them into a two-level hierarchy: on the top level are the external boundaries of the components, on the second level are the boundaries of the holes. If inside a hole of a connected component there is another contour, it will still be put on the top level}
\cvarg{RETR\_TREE}{retrieves all of the contours and reconstructs the full hierarchy of nested contours. This full hierarchy is built and shown in the OpenCV \texttt{contours.c} demo}
952 \cvarg{method}{The contour approximation method.
954 \cvarg{CV\_CHAIN\_APPROX\_NONE}{stores absolutely all the contour points. That is, every 2 points of a contour stored with this method are 8-connected neighbors of each other}
955 \cvarg{CV\_CHAIN\_APPROX\_SIMPLE}{compresses horizontal, vertical, and diagonal segments and leaves only their end points. E.g. an up-right rectangular contour will be encoded with 4 points}
956 \cvarg{CV\_CHAIN\_APPROX\_TC89\_L1,CV\_CHAIN\_APPROX\_TC89\_KCOS}{applies one of the flavors of the Teh-Chin chain approximation algorithm; see \cite{TehChin89}}
958 \cvarg{offset}{The optional offset, by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context}
961 The function retrieves contours from the
962 binary image using the algorithm \cite{Suzuki85}. The contours are a useful tool for shape analysis and object detection and recognition. See \texttt{squares.c} in the OpenCV sample directory.
964 \cvCppFunc{drawContours}
965 Draws contours' outlines or filled contours.
967 \cvdefCpp{void drawContours( Mat\& image, const vector<vector<Point> >\& contours,\par
968 int contourIdx, const Scalar\& color, int thickness=1,\par
969 int lineType=8, const vector<Vec4i>\& hierarchy=vector<Vec4i>(),\par
970 int maxLevel=INT\_MAX, Point offset=Point() );}
972 \cvarg{image}{The destination image}
973 \cvarg{contours}{All the input contours. Each contour is stored as a point vector}
974 \cvarg{contourIdx}{Indicates the contour to draw. If it is negative, all the contours are drawn}
975 \cvarg{color}{The contours' color}
\cvarg{thickness}{Thickness of the lines the contours are drawn with. If it is negative (e.g. \texttt{thickness=CV\_FILLED}), the contour interiors are drawn}
979 \cvarg{lineType}{The line connectivity; see \cvCppCross{line} description}
980 \cvarg{hierarchy}{The optional information about hierarchy. It is only needed if you want to draw only some of the contours (see \texttt{maxLevel})}
\cvarg{maxLevel}{Maximal level for drawn contours. If 0, only the specified contour is drawn. If 1, the function draws the contour(s) and all the nested contours. If 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. This parameter is only taken into account when there is a \texttt{hierarchy} available}
983 \cvarg{offset}{The optional contour shift parameter. Shift all the drawn contours by the specified $\texttt{offset}=(dx,dy)$}
The function draws contour outlines in the image if $\texttt{thickness} \ge 0$ or fills the area bounded by the contours if $\texttt{thickness}<0$. Here is an example showing how to retrieve connected components from a binary image and label them:
\begin{lstlisting}
#include "cv.h"
#include "highgui.h"

using namespace cv;

int main( int argc, char** argv )
{
    Mat src;
    // the first command line parameter must be file name of binary
    // (black-n-white) image
    if( argc != 2 || !(src=imread(argv[1], 0)).data)
        return -1;

    Mat dst = Mat::zeros(src.rows, src.cols, CV_8UC3);

    namedWindow( "Source", 1 );
    imshow( "Source", src );

    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;

    findContours( src, contours, hierarchy,
        CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );

    // iterate through all the top-level contours,
    // draw each connected component with its own random color
    int idx = 0;
    for( ; idx >= 0; idx = hierarchy[idx][0] )
    {
        Scalar color( rand()&255, rand()&255, rand()&255 );
        drawContours( dst, contours, idx, color, CV_FILLED, 8, hierarchy );
    }

    namedWindow( "Components", 1 );
    imshow( "Components", dst );
    waitKey(0);
}
\end{lstlisting}
1030 \cvCppFunc{approxPolyDP}
1031 Approximates polygonal curve(s) with the specified precision.
1033 \cvdefCpp{void approxPolyDP( const Mat\& curve,\par
1034 vector<Point>\& approxCurve,\par
1035 double epsilon, bool closed );\newline
1036 void approxPolyDP( const Mat\& curve,\par
1037 vector<Point2f>\& approxCurve,\par
1038 double epsilon, bool closed );}
\cvarg{curve}{The polygon or curve to approximate. It must be a $1 \times N$ or $N \times 1$ matrix of type \texttt{CV\_32SC2} or \texttt{CV\_32FC2}. You can also pass a \texttt{vector<Point>} or \texttt{vector<Point2f>}, which will be automatically converted to a matrix of the proper size and type}
\cvarg{approxCurve}{The result of the approximation; the type should match the type of the input curve}
1042 \cvarg{epsilon}{Specifies the approximation accuracy. This is the maximum distance between the original curve and its approximation}
1043 \cvarg{closed}{If true, the approximated curve is closed (i.e. its first and last vertices are connected), otherwise it's not}
The functions \texttt{approxPolyDP} approximate a curve or a polygon with another curve/polygon with fewer vertices, so that the distance between them is less than or equal to the specified precision. They use the Douglas-Peucker algorithm \url{http://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm}.
1048 \cvCppFunc{arcLength}
1049 Calculates a contour perimeter or a curve length.
1051 \cvdefCpp{double arcLength( const Mat\& curve, bool closed );}
1053 \cvarg{curve}{The input vector of 2D points, represented by \texttt{CV\_32SC2} or \texttt{CV\_32FC2} matrix or by \texttt{vector<Point>} or \texttt{vector<Point2f>}}
\cvarg{closed}{Indicates whether the curve is closed or not}
1057 The function computes the curve length or the closed contour perimeter.
1059 \cvCppFunc{boundingRect}
1060 Calculates the up-right bounding rectangle of a point set.
1062 \cvdefCpp{Rect boundingRect( const Mat\& points );}
1064 \cvarg{points}{The input 2D point set, represented by \texttt{CV\_32SC2} or \texttt{CV\_32FC2} matrix or by \texttt{vector<Point>} or \texttt{vector<Point2f>}}
1067 The function calculates and returns the minimal up-right bounding rectangle for the specified point set.
\cvCppFunc{estimateRigidTransform}
Computes an optimal affine transformation between two 2D point sets.

\cvdefCpp{Mat estimateRigidTransform( const Mat\& srcpt, const Mat\& dstpt,\par
bool fullAffine );}

\cvarg{srcpt}{The first input 2D point set}
\cvarg{dstpt}{The second input 2D point set, of the same size and the same type as \texttt{srcpt}}
\cvarg{fullAffine}{If true, the function finds the optimal affine transformation without any additional restrictions (i.e. there are 6 degrees of freedom); otherwise, the class of transformations to choose from is limited to combinations of translation, rotation and uniform scaling (i.e. there are 4 degrees of freedom)}

The function finds the optimal affine transform $[A|b]$ (a $2 \times 3$ floating-point matrix) that best approximates the transformation from $\texttt{srcpt}_i$ to $\texttt{dstpt}_i$:
1083 \[ [A^*|b^*] = arg \min_{[A|b]} \sum_i \|\texttt{dstpt}_i - A {\texttt{srcpt}_i}^T - b \|^2 \]
1085 where $[A|b]$ can be either arbitrary (when \texttt{fullAffine=true}) or have form
1086 \[\begin{bmatrix}a_{11} & a_{12} & b_1 \\ -a_{12} & a_{11} & b_2 \end{bmatrix}\] when \texttt{fullAffine=false}.
1088 See also: \cvCppCross{getAffineTransform}, \cvCppCross{getPerspectiveTransform}, \cvCppCross{findHomography}
\cvCppFunc{estimateAffine3D}
Computes an optimal affine transformation between two 3D point sets.
1093 \cvdefCpp{int estimateAffine3D(const Mat\& srcpt, const Mat\& dstpt, Mat\& out,\par
1094 vector<uchar>\& outliers,\par
1095 double ransacThreshold = 3.0,\par
1096 double confidence = 0.99);}
1098 \cvarg{srcpt}{The first input 3D point set}
1099 \cvarg{dstpt}{The second input 3D point set}
1100 \cvarg{out}{The output 3D affine transformation matrix $3 \times 4$}
1101 \cvarg{outliers}{The output vector indicating which points are outliers}
1102 \cvarg{ransacThreshold}{The maximum reprojection error in RANSAC algorithm to consider a point an inlier}
1103 \cvarg{confidence}{The confidence level, between 0 and 1, with which the matrix is estimated}
The function estimates the optimal 3D affine transformation between two 3D point sets using the RANSAC algorithm.
\cvCppFunc{contourArea}
Calculates the contour area.
1112 \cvdefCpp{double contourArea( const Mat\& contour ); }
1114 \cvarg{contour}{The contour vertices, represented by \texttt{CV\_32SC2} or \texttt{CV\_32FC2} matrix or by \texttt{vector<Point>} or \texttt{vector<Point2f>}}
The function computes the contour area. Similarly to \cvCppCross{moments}, the area is computed using Green's formula; thus, if you draw the contour using \cvCppCross{drawContours} or \cvCppCross{fillPoly}, the returned area and the number of non-zero pixels can be different.
1118 Here is a short example:
1121 vector<Point> contour;
contour.push_back(Point(0, 0));
contour.push_back(Point(10, 0));
contour.push_back(Point(10, 10));
contour.push_back(Point(5, 4));
1127 double area0 = contourArea(contour);
1128 vector<Point> approx;
1129 approxPolyDP(contour, approx, 5, true);
1130 double area1 = contourArea(approx);
1132 cout << "area0 =" << area0 << endl <<
1133 "area1 =" << area1 << endl <<
1134 "approx poly vertices" << approx.size() << endl;
1137 \cvCppFunc{convexHull}
1138 Finds the convex hull of a point set.
1140 \cvdefCpp{void convexHull( const Mat\& points, vector<int>\& hull,\par
1141 bool clockwise=false );\newline
1142 void convexHull( const Mat\& points, vector<Point>\& hull,\par
1143 bool clockwise=false );\newline
1144 void convexHull( const Mat\& points, vector<Point2f>\& hull,\par
1145 bool clockwise=false );}
1147 \cvarg{points}{The input 2D point set, represented by \texttt{CV\_32SC2} or \texttt{CV\_32FC2} matrix or by
1148 \texttt{vector<Point>} or \texttt{vector<Point2f>}}
1149 \cvarg{hull}{The output convex hull. It is either a vector of points that form the hull, or a vector of 0-based point indices of the hull points in the original array (since the set of convex hull points is a subset of the original point set).}
1150 \cvarg{clockwise}{If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards.}
The functions find the convex hull of a 2D point set using Sklansky's algorithm \cite{Sklansky82} that has $O(N \log N)$ or $O(N)$ complexity (where $N$ is the number of input points), depending on how the initial sorting is implemented (currently it is $O(N \log N)$). See the OpenCV sample \texttt{convexhull.c} that demonstrates the use of the different function variants.
1155 \cvCppFunc{findHomography}
1156 Finds the optimal perspective transformation between two 2D point sets
1158 \cvdefCpp{Mat findHomography( const Mat\& srcPoints, const Mat\& dstPoints,\par
1159 Mat\& mask, int method=0,\par
1160 double ransacReprojThreshold=0 );\newline
1161 Mat findHomography( const Mat\& srcPoints, const Mat\& dstPoints,\par
1162 vector<uchar>\& mask, int method=0,\par
1163 double ransacReprojThreshold=0 );\newline
1164 Mat findHomography( const Mat\& srcPoints, const Mat\& dstPoints,\par
1165 int method=0, double ransacReprojThreshold=0 );}
1167 \cvarg{srcPoints}{Coordinates of the points in the original plane, a matrix of type \texttt{CV\_32FC2} or a \texttt{vector<Point2f>}.}
1168 \cvarg{dstPoints}{Coordinates of the points in the target plane, a matrix of type \texttt{CV\_32FC2} or a \texttt{vector<Point2f>}.}
1169 \cvarg{method}{The method used to compute the homography matrix; one of the following:
1171 \cvarg{0}{regular method using all the point pairs}
1172 \cvarg{RANSAC}{RANSAC-based robust method}
1173 \cvarg{LMEDS}{Least-Median robust method}
1175 \cvarg{ransacReprojThreshold}{The maximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC method only). That is, if
1176 \[\|\texttt{dstPoints}_i - \texttt{convertPointHomogeneous}(\texttt{H} \texttt{srcPoints}_i)\| > \texttt{ransacReprojThreshold}\]
1177 then the point $i$ is considered an outlier. If \texttt{srcPoints} and \texttt{dstPoints} are measured in pixels, it usually makes sense to set this parameter somewhere in the range 1 to 10. }
\cvarg{mask}{The optional output mask, an 8-bit single-channel matrix or a vector; it will have as many elements as \texttt{srcPoints}. \texttt{mask[i]} is set to 0 if the point $i$ is an outlier and to 1 otherwise}
1181 The functions \texttt{findHomography} find and return the perspective transformation $H$ between the source and the destination planes:
1184 s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}
1187 So that the back-projection error
1191 \left( x'_i-\frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2+
1192 \left( y'_i-\frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2
is minimized. If the parameter \texttt{method} is set to the default value 0, the function
uses all the point pairs and estimates the best suitable homography
matrix. However, if not all of the point pairs ($\texttt{srcPoints}_i$,
$\texttt{dstPoints}_i$) fit the rigid perspective transformation (i.e. there
can be outliers), it is still possible to estimate the correct
transformation using one of the robust methods available. Both
methods, \texttt{RANSAC} and \texttt{LMEDS}, try many different random subsets
of the corresponding point pairs (of 4 pairs each), estimate
the homography matrix using this subset and a simple least-squares
algorithm, and then compute the quality/goodness of the computed homography
(which is the number of inliers for RANSAC or the median reprojection
error for LMeDS). The best subset is then used to produce the initial
estimate of the homography matrix and the mask of inliers/outliers.
1209 Regardless of the method, robust or not, the computed homography
1210 matrix is refined further (using inliers only in the case of a robust
1211 method) with the Levenberg-Marquardt method in order to reduce the
1212 reprojection error even more.
The method \texttt{RANSAC} can handle practically any ratio of outliers,
but it needs a threshold to distinguish inliers from outliers.
The method \texttt{LMEDS} does not need any threshold, but it works
correctly only when there are more than 50\% inliers. Finally,
if you are sure of the computed features and expect only some
small noise, but no outliers, the default method could be the best choice.

The function is used to find the initial intrinsic and extrinsic matrices.
The homography matrix is determined up to a scale, thus it is normalized
to make $h_{33}=1$.

See also: \cvCppCross{getAffineTransform}, \cvCppCross{getPerspectiveTransform}, \cvCppCross{estimateRigidTransform},
\cvCppCross{warpPerspective}
1230 \cvCppFunc{fitEllipse}
1231 Fits an ellipse around a set of 2D points.
1233 \cvdefCpp{RotatedRect fitEllipse( const Mat\& points );}
1235 \cvarg{points}{The input 2D point set, represented by \texttt{CV\_32SC2} or \texttt{CV\_32FC2} matrix or by
1236 \texttt{vector<Point>} or \texttt{vector<Point2f>}}
The function calculates the ellipse that best fits a set of 2D points (in a least-squares sense). It returns the rotated rectangle in which the ellipse is inscribed.

\cvCppFunc{fitLine}
Fits a line to a 2D or 3D point set.
1245 \cvdefCpp{void fitLine( const Mat\& points, Vec4f\& line, int distType,\par
1246 double param, double reps, double aeps );\newline
1247 void fitLine( const Mat\& points, Vec6f\& line, int distType,\par
1248 double param, double reps, double aeps );}
1250 \cvarg{points}{The input 2D point set, represented by \texttt{CV\_32SC2} or \texttt{CV\_32FC2} matrix or by
1251 \texttt{vector<Point>}, \texttt{vector<Point2f>}, \texttt{vector<Point3i>} or \texttt{vector<Point3f>}}
\cvarg{line}{The output line parameters. In the case of a 2D fitting,
it is a vector of 4 floats \texttt{(vx, vy, x0, y0)}, where \texttt{(vx, vy)} is a normalized vector collinear to the
line and \texttt{(x0, y0)} is some point on the line. In the case of a
3D fitting, it is a vector of 6 floats \texttt{(vx, vy, vz, x0, y0, z0)},
where \texttt{(vx, vy, vz)} is a normalized vector collinear to the line
and \texttt{(x0, y0, z0)} is some point on the line}
1259 \cvarg{distType}{The distance used by the M-estimator (see the discussion)}
1260 \cvarg{param}{Numerical parameter (\texttt{C}) for some types of distances, if 0 then some optimal value is chosen}
1261 \cvarg{reps, aeps}{Sufficient accuracy for the radius (distance between the coordinate origin and the line) and angle, respectively; 0.01 would be a good default value for both.}
1264 The functions \texttt{fitLine} fit a line to a 2D or 3D point set by minimizing $\sum_i \rho(r_i)$ where $r_i$ is the distance between the $i^{th}$ point and the line and $\rho(r)$ is a distance function, one of:
1267 \item[distType=CV\_DIST\_L2]
1268 \[ \rho(r) = r^2/2 \quad \text{(the simplest and the fastest least-squares method)} \]
\item[distType=CV\_DIST\_L1]
\[ \rho(r) = r \]
1273 \item[distType=CV\_DIST\_L12]
1274 \[ \rho(r) = 2 \cdot (\sqrt{1 + \frac{r^2}{2}} - 1) \]
1276 \item[distType=CV\_DIST\_FAIR]
1277 \[ \rho\left(r\right) = C^2 \cdot \left( \frac{r}{C} - \log{\left(1 + \frac{r}{C}\right)}\right) \quad \text{where} \quad C=1.3998 \]
1279 \item[distType=CV\_DIST\_WELSCH]
1280 \[ \rho\left(r\right) = \frac{C^2}{2} \cdot \left( 1 - \exp{\left(-\left(\frac{r}{C}\right)^2\right)}\right) \quad \text{where} \quad C=2.9846 \]
\item[distType=CV\_DIST\_HUBER]
\[ \rho(r) = \fork{r^2/2}{if $r < C$}{C \cdot (r-C/2)}{otherwise} \quad \text{where} \quad C=1.345 \]
The algorithm is based on the M-estimator (\url{http://en.wikipedia.org/wiki/M-estimator}) technique, which iteratively fits the line using the weighted least-squares algorithm; after each iteration the weights $w_i$ are adjusted to be inversely proportional to $\rho(r_i)$.
1292 \cvCppFunc{isContourConvex}
1293 Tests contour convexity.
1295 \cvdefCpp{bool isContourConvex( const Mat\& contour );}
1297 \cvarg{contour}{The tested contour, a matrix of type \texttt{CV\_32SC2} or \texttt{CV\_32FC2}, or \texttt{vector<Point>} or \texttt{vector<Point2f>}}
1300 The function tests whether the input contour is convex or not. The contour must be simple, i.e. without self-intersections, otherwise the function output is undefined.
1303 \cvCppFunc{minAreaRect}
1304 Finds the minimum area rotated rectangle enclosing a 2D point set.
1306 \cvdefCpp{RotatedRect minAreaRect( const Mat\& points );}
1308 \cvarg{points}{The input 2D point set, represented by \texttt{CV\_32SC2} or \texttt{CV\_32FC2} matrix or by \texttt{vector<Point>} or \texttt{vector<Point2f>}}
The function calculates and returns the minimum-area bounding rectangle (possibly rotated) for the specified point set. See the OpenCV sample \texttt{minarea.c}.
1313 \cvCppFunc{minEnclosingCircle}
1314 Finds the minimum area circle enclosing a 2D point set.
1316 \cvdefCpp{void minEnclosingCircle( const Mat\& points, Point2f\& center, float\& radius ); }
1318 \cvarg{points}{The input 2D point set, represented by \texttt{CV\_32SC2} or \texttt{CV\_32FC2} matrix or by \texttt{vector<Point>} or \texttt{vector<Point2f>}}
1319 \cvarg{center}{The output center of the circle}
1320 \cvarg{radius}{The output radius of the circle}
The function finds the minimal enclosing circle of a 2D point set using an iterative algorithm. See the OpenCV sample \texttt{minarea.c}.
1325 \cvCppFunc{matchShapes}
1326 Compares two shapes.
1328 \cvdefCpp{double matchShapes( const Mat\& object1,\par
1329 const Mat\& object2,\par
1330 int method, double parameter=0 );}
1332 \cvarg{object1}{The first contour or grayscale image}
1333 \cvarg{object2}{The second contour or grayscale image}
\cvarg{method}{Comparison method:
\texttt{CV\_CONTOURS\_MATCH\_I1},\\
\texttt{CV\_CONTOURS\_MATCH\_I2}\\
or \texttt{CV\_CONTOURS\_MATCH\_I3} (see the discussion below)}
1339 \cvarg{parameter}{Method-specific parameter (is not used now)}
The function compares two shapes. All three implemented methods use Hu invariants (see \cvCppCross{HuMoments}) as follows ($A$ denotes \texttt{object1}, $B$ denotes \texttt{object2}):
\item[method=CV\_CONTOURS\_MATCH\_I1]
\[ I_1(A,B) = \sum_{i=1...7} \left| \frac{1}{m^A_i} - \frac{1}{m^B_i} \right| \]

\item[method=CV\_CONTOURS\_MATCH\_I2]
\[ I_2(A,B) = \sum_{i=1...7} \left| m^A_i - m^B_i \right| \]

\item[method=CV\_CONTOURS\_MATCH\_I3]
\[ I_3(A,B) = \sum_{i=1...7} \frac{ \left| m^A_i - m^B_i \right| }{ \left| m^A_i \right| } \]
m^A_i = \mathrm{sign}(h^A_i) \cdot \log{|h^A_i|} \\
m^B_i = \mathrm{sign}(h^B_i) \cdot \log{|h^B_i|}
1364 and $h^A_i, h^B_i$ are the Hu moments of $A$ and $B$ respectively.
1367 \cvCppFunc{pointPolygonTest}
1368 Performs point-in-contour test.
1370 \cvdefCpp{double pointPolygonTest( const Mat\& contour,\par
1371 Point2f pt, bool measureDist );}
1373 \cvarg{contour}{The input contour}
1374 \cvarg{pt}{The point tested against the contour}
1375 \cvarg{measureDist}{If true, the function estimates the signed distance from the point to the nearest contour edge; otherwise, the function only checks if the point is inside or not.}
The function determines whether the
point is inside a contour, outside, or lies on an edge (or coincides
with a vertex). It returns a positive (inside), negative (outside) or zero (on an edge) value,
correspondingly. When \texttt{measureDist=false}, the return value
is +1, -1 or 0, respectively. Otherwise, the return value
is the signed distance between the point and the nearest contour edge.
1386 Here is the sample output of the function, where each image pixel is tested against the contour.
1388 \includegraphics[width=0.5\textwidth]{pics/pointpolygon.png}