The function is similar to \cvCPyCross{CornerEigenValsAndVecs} but it calculates and stores only the minimal eigenvalue of the derivative covariance matrix for every pixel, i.e. $\min(\lambda_1, \lambda_2)$ in terms of the previous function.
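As an illustration of the per-pixel quantity, the sketch below (plain Python, not the OpenCV implementation; the windowed derivative sums \texttt{sxx}, \texttt{sxy}, \texttt{syy} are assumed given) evaluates $\min(\lambda_1, \lambda_2)$ for a symmetric $2\times2$ matrix in closed form:

```python
# Illustrative sketch (not the OpenCV implementation): for one pixel,
# given Sobel-derivative sums over a window, the derivative covariance
# matrix is M = [[sxx, sxy], [sxy, syy]]; we return its smaller
# eigenvalue, min(lambda1, lambda2).
import math

def min_eigenvalue(sxx, sxy, syy):
    # Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]] are
    # (a + c)/2 +/- sqrt(((a - c)/2)^2 + b^2)
    half_trace = (sxx + syy) / 2.0
    radius = math.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)
    return half_trace - radius

print(min_eigenvalue(10.0, 0.0, 8.0))  # -> 8.0 (corner-like: both large)
print(min_eigenvalue(10.0, 0.0, 0.1))  # close to 0.1 (edge-like)
```

A strong corner yields two large eigenvalues, so thresholding the minimal one rejects edges, where one eigenvalue is small.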
+\ifPy
+\cvclass{CvSURFPoint}
+A SURF keypoint, represented as a tuple \texttt{((x, y), laplacian, size, dir, hessian)}.
+
+\begin{description}
+\cvarg{x}{x-coordinate of the feature within the image}
+\cvarg{y}{y-coordinate of the feature within the image}
+\cvarg{laplacian}{-1, 0 or +1; the sign of the Laplacian at the point. It can be used to speed up feature comparison, since features with Laplacians of different signs cannot match}
+\cvarg{size}{size of the feature}
+\cvarg{dir}{orientation of the feature: 0..360 degrees}
+\cvarg{hessian}{value of the Hessian (can be used to approximately estimate the feature strength; see also params.hessianThreshold)}
+\end{description}
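+A hypothetical sketch (made-up data, not library code) of the speed-up mentioned for \texttt{laplacian}: the descriptors of two keypoints need to be compared only when their Laplacian signs agree.

```python
# Keypoints follow the tuple layout documented above:
# ((x, y), laplacian, size, dir, hessian). All data here is invented
# for illustration.

def match_distance(kp1, desc1, kp2, desc2):
    ((x1, y1), lap1, size1, dir1, hess1) = kp1
    ((x2, y2), lap2, size2, dir2, hess2) = kp2
    if lap1 != lap2:
        return None  # different Laplacian signs: cannot match, skip SSD
    # sum of squared differences over the 64- or 128-element descriptors
    return sum((a - b) ** 2 for a, b in zip(desc1, desc2))

kp_a = ((30, 27), -1, 31, 69.7, 36979.8)
kp_b = ((296, 197), 1, 33, 111.0, 31514.3)
kp_c = ((31, 28), -1, 31, 70.1, 36120.5)
d = [0.1] * 64
print(match_distance(kp_a, d, kp_b, d))  # None: signs differ
print(match_distance(kp_a, d, kp_c, d))  # 0.0: identical descriptors
```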
+\fi
+
\cvCPyFunc{ExtractSURF}
Extracts Speeded Up Robust Features from an image.
\begin{description}
\cvarg{image}{The input 8-bit grayscale image}
\cvarg{mask}{The optional input 8-bit mask. The features are only found in the areas that contain more than 50\% of non-zero mask pixels}
+\ifC
\cvarg{keypoints}{The output parameter; double pointer to the sequence of keypoints. The sequence of CvSURFPoint structures is as follows:}
\begin{lstlisting}
typedef struct CvSURFPoint
{
    CvPoint2D32f pt;  // position of the feature within the image
    int laplacian;    // -1, 0 or +1. sign of the laplacian at the point.
                      // can be used to speed up feature comparison
                      // (features with laplacians of different
                      // signs can not match)
    int size;         // size of the feature
    float dir;        // orientation of the feature: 0..360 degrees
    float hessian;    // value of the hessian (can be used to
                      // approximately estimate the feature strength)
}
CvSURFPoint;
\end{lstlisting}
\cvarg{descriptors}{The optional output parameter; double pointer to the sequence of descriptors. Depending on the params.extended value, each element of the sequence will be either a 64-element or a 128-element floating-point (\texttt{CV\_32F}) vector. If the parameter is NULL, the descriptors are not computed}
+\else
+\cvarg{keypoints}{sequence of keypoints.}
+\cvarg{descriptors}{sequence of descriptors. Each SURF descriptor is a list of floats, of length 64 or 128.}
+\fi
\cvarg{storage}{Memory storage where keypoints and descriptors will be stored}
+\ifC
\cvarg{params}{Various algorithm parameters packed in the CvSURFParams structure:}
\begin{lstlisting}
typedef struct CvSURFParams
{
    int extended;            // 0 means basic descriptors (64 elements each),
                             // 1 means extended descriptors (128 elements each)
    double hessianThreshold; // only features with keypoint.hessian
                             // larger than that are extracted
    int nOctaves;            // the number of octaves to be used for extraction.
                             // With each next octave the feature size
                             // is doubled (3 by default)
    int nOctaveLayers;       // the number of layers within each octave
                             // (4 by default)
}
CvSURFParams;

CvSURFParams cvSURFParams(double hessianThreshold, int extended=0);
// returns default parameters
\end{lstlisting}
+\else
+\cvarg{params}{Various algorithm parameters in a tuple \texttt{(extended, hessianThreshold, nOctaves, nOctaveLayers)}:
+\begin{description}
+\cvarg{extended}{0 means basic descriptors (64 elements each), 1 means extended descriptors (128 elements each)}
+\cvarg{hessianThreshold}{only features with a hessian larger than this threshold are extracted. A good default value is in the range 300-500 (it can depend on the average local contrast and sharpness of the image). The user can further filter out some features based on their hessian values and other characteristics}
+\cvarg{nOctaves}{the number of octaves to be used for extraction. With each subsequent octave the feature size is doubled (3 by default)}
+\cvarg{nOctaveLayers}{The number of layers within each octave (4 by default)}
+\end{description}}
+\fi
\end{description}
The function cvExtractSURF finds robust features in the image, as
-described in
-Bay06
-. For each feature it returns its location, size,
+described in \cite{Bay06}. For each feature it returns its location, size,
orientation and optionally the descriptor, basic or extended. The function
-can be used for object tracking and localization, image stitching etc. See the
+can be used for object tracking and localization, image stitching etc.
+
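+As a hedged illustration of such use, the sketch below (plain Python with made-up descriptor data; not from the OpenCV samples) matches two descriptor sets by nearest neighbour with a distance-ratio test:

```python
# Illustrative sketch of descriptor matching for localization/stitching:
# accept a nearest-neighbour match only if it is clearly better than the
# second-best candidate (distance-ratio test). Descriptor data is
# invented for illustration.

def ssd(a, b):
    # sum of squared differences between two descriptors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match(descs1, descs2, ratio=0.6):
    pairs = []
    for i, d1 in enumerate(descs1):
        dists = sorted((ssd(d1, d2), j) for j, d2 in enumerate(descs2))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            pairs.append((i, dists[0][1]))  # unambiguous best match
    return pairs

descs1 = [[0.0, 1.0], [5.0, 5.0]]
descs2 = [[4.9, 5.1], [0.0, 0.9], [0.0, 1.1]]
print(match(descs1, descs2))  # -> [(1, 0)]; descs1[0] is ambiguous
```

The ratio test discards features whose two closest candidates are nearly equidistant, which removes most false matches at the cost of some true ones.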
+\ifC
+See the
\texttt{find\_obj.cpp} demo in OpenCV samples directory.
+\else
+To extract strong SURF features from an image:
+
+\begin{lstlisting}
+>>> import cv
+>>> im = cv.LoadImageM("building.jpg", cv.CV_LOAD_IMAGE_GRAYSCALE)
+>>> (keypoints, descriptors) = cv.ExtractSURF(im, None, cv.CreateMemStorage(), (0, 30000, 3, 1))
+>>> print len(keypoints), len(descriptors)
+6 6
+>>> for ((x, y), laplacian, size, dir, hessian) in keypoints:
+...     print "x=%d y=%d laplacian=%d size=%d dir=%f hessian=%f" % (x, y, laplacian, size, dir, hessian)
+x=30 y=27 laplacian=-1 size=31 dir=69.778503 hessian=36979.789062
+x=296 y=197 laplacian=1 size=33 dir=111.081039 hessian=31514.349609
+x=296 y=266 laplacian=1 size=32 dir=107.092300 hessian=31477.908203
+x=254 y=284 laplacian=1 size=31 dir=279.137360 hessian=34169.800781
+x=498 y=525 laplacian=-1 size=33 dir=278.006592 hessian=31002.759766
+x=777 y=281 laplacian=1 size=70 dir=167.940964 hessian=35538.363281
+\end{lstlisting}
+
+\fi
\cvCPyFunc{FindCornerSubPix}
Refines the corner locations.
\cvCPyFunc{GetStarKeypoints}
Retrieves keypoints using the StarDetector algorithm.
\cvdefC{
CvSeq* cvGetStarKeypoints( \par const CvArr* image,\par CvMemStorage* storage,\par CvStarDetectorParams params=cvStarDetectorParams() );
-}\cvdefPy{GetStarKeypoints(image,storage,params)-> keypoints}
+}
+\cvdefPy{GetStarKeypoints(image,storage,params)-> keypoints}
\begin{description}
\cvarg{image}{The input 8-bit grayscale image}
\cvarg{storage}{Memory storage where the keypoints will be stored}
+\ifC
\cvarg{params}{Various algorithm parameters packed in the CvStarDetectorParams structure:}
\begin{lstlisting}
typedef struct CvStarDetectorParams
{
    int maxSize;                // maximal size of the features detected.
                                // The following values are supported:
                                // 4, 6, 8, 11, 12, 16, 22, 23, 32, 45,
                                // 46, 64, 90, 128
    int responseThreshold;      // threshold for the approximated laplacian,
                                // used to eliminate weak features
    int lineThresholdProjected; // another threshold for laplacian to
                                // eliminate edges
    int lineThresholdBinarized; // another threshold for the feature
                                // scale to eliminate edges
    int suppressNonmaxSize;     // linear size of a pixel neighborhood
                                // for non-maxima suppression
}
CvStarDetectorParams;
\end{lstlisting}
+\else
+\cvarg{params}{Various algorithm parameters in a tuple \texttt{(maxSize, responseThreshold, lineThresholdProjected, lineThresholdBinarized, suppressNonmaxSize)}:
+\begin{description}
+\cvarg{maxSize}{maximal size of the features detected. The following values of the parameter are supported: 4, 6, 8, 11, 12, 16, 22, 23, 32, 45, 46, 64, 90, 128}
+\cvarg{responseThreshold}{threshold for the approximated laplacian, used to eliminate weak features}
+\cvarg{lineThresholdProjected}{another threshold for laplacian to eliminate edges}
+\cvarg{lineThresholdBinarized}{another threshold for the feature scale to eliminate edges}
+\cvarg{suppressNonmaxSize}{linear size of a pixel neighborhood for non-maxima suppression}
+\end{description}
+}
+\fi
\end{description}
The function GetStarKeypoints extracts keypoints that are local
scale-space extrema. Instead of a square, hexagon or octagon, the detector
uses an 8-end star shape, hence the name,
consisting of overlapping upright and tilted squares.
+\ifC
Each computed feature is represented by the following structure:
\begin{lstlisting}
typedef struct CvStarKeypoint
{
    CvPoint pt;     // coordinates of the feature
    int size;       // feature size, see CvStarDetectorParams::maxSize
    float response; // the approximated laplacian value at that point
}
CvStarKeypoint;

inline CvStarKeypoint cvStarKeypoint(CvPoint pt, int size, float response);
\end{lstlisting}
+\else
+Each keypoint is represented by a tuple \texttt{((x, y), size, response)}:
+\begin{description}
+\cvarg{x, y}{Screen coordinates of the keypoint}
+\cvarg{size}{feature size, up to \texttt{maxSize}}
+\cvarg{response}{approximated laplacian value for the keypoint}
+\end{description}
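+A hedged sketch (plain Python, not the library implementation) of what \texttt{suppressNonmaxSize} controls: a response survives only if nothing in its $(2r+1)\times(2r+1)$ neighbourhood exceeds it.

```python
# Illustrative non-maxima suppression over a small response map: a value
# is kept as a keypoint only if no neighbour within radius r exceeds it.

def nonmax_suppress(response, r):
    h, w = len(response), len(response[0])
    keypoints = []
    for y in range(h):
        for x in range(w):
            v = response[y][x]
            is_max = all(
                response[ny][nx] <= v
                for ny in range(max(0, y - r), min(h, y + r + 1))
                for nx in range(max(0, x - r), min(w, x + r + 1))
                if (ny, nx) != (y, x)
            )
            if is_max:
                keypoints.append((x, y, v))
    return keypoints

resp = [[0, 1, 0, 0],
        [1, 5, 1, 0],
        [0, 1, 0, 3],
        [0, 0, 3, 4]]
print(nonmax_suppress(resp, 1))  # -> [(1, 1, 5), (3, 3, 4)]
```

A larger \texttt{suppressNonmaxSize} therefore thins out clusters of nearby detections, keeping only the locally strongest response.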
+\fi
\ifC
Below is a small usage sample:
\cvarg{tempImage}{Another temporary image, the same size and format as \texttt{eigImage}}
\ifC
\cvarg{corners}{Output parameter; detected corners}
-\fi
\cvarg{cornerCount}{Output parameter; number of detected corners}
+\else
+\cvarg{cornerCount}{number of corners to detect}
+\fi
\cvarg{qualityLevel}{Multiplier for the max/min eigenvalue; specifies the minimal accepted quality of image corners}
\cvarg{minDistance}{Limit specifying the minimum possible distance between the returned corners; Euclidean distance is used}
\cvarg{mask}{Region of interest. The function selects points either in the specified region or in the whole image if the mask is NULL}
\cvCPyFunc{HoughLines2}
Finds lines in a binary image using a Hough transform.
\cvdefC{
-CvSeq* cvHoughLines2( \par CvArr* image,\par void* line\_storage,\par int method,\par double rho,\par double theta,\par int threshold,\par double param1=0,\par double param2=0 );
+CvSeq* cvHoughLines2( \par CvArr* image,\par void* storage,\par int method,\par double rho,\par double theta,\par int threshold,\par double param1=0,\par double param2=0 );
}
-\cvdefPy{HoughLines2(image,line\_storage,method,rho,theta,threshold,param1=0,param2=0)-> lines}
+\cvdefPy{HoughLines2(image,storage,method,rho,theta,threshold,param1=0,param2=0)-> lines}
\begin{description}
\cvarg{image}{The 8-bit, single-channel, binary source image. In the case of a probabilistic method, the image is modified by the function}
-\cvarg{line\_storage}{The storage for the lines that are detected. It can
+\cvarg{storage}{The storage for the lines that are detected. It can
be a memory storage (in this case a sequence of lines is created in
the storage and returned by the function) or single row/single column
matrix (CvMat*) of a particular type (see below) to which the lines'
parameters are written. The matrix header is modified by the function
so its \texttt{cols} or \texttt{rows} will contain the number of lines
-detected. If \texttt{line\_storage} is a matrix and the actual number
+detected. If \texttt{storage} is a matrix and the actual number
of lines exceeds the matrix size, the maximum possible number of lines
is returned (in the case of the standard Hough transform the lines are sorted
by the accumulator value)}
The corners can be found as local maxima of the function below:
+\ifC
\begin{lstlisting}
// assume that the image is floating-point
IplImage* corners = cvCloneImage(image);
IplImage* dilated_corners = cvCloneImage(image);
IplImage* corner_mask = cvCreateImage( cvGetSize(image), 8, 1 );
cvPreCornerDetect( image, corners, 3 );
cvDilate( corners, dilated_corners, 0, 1 );
cvSub( corners, dilated_corners, corners );
cvCmpS( corners, 0, corner_mask, CV_CMP_GE );
cvReleaseImage( &corners );
cvReleaseImage( &dilated_corners );
\end{lstlisting}
+\else
+\lstinputlisting{python_fragments/precornerdetect.py}
+\fi
+
\ifC
\cvCPyFunc{SampleLine}
Reads the raster line to the buffer.
return -1;
cvtColor(img, gray, CV_BGR2GRAY);
// smooth it, otherwise a lot of false circles may be detected
- GaussianBlur( gray, gray, 9, 9, 2, 2 );
+ GaussianBlur( gray, gray, Size(9, 9), 2, 2 );
vector<Vec3f> circles;
- houghCircles(gray, circles, CV_HOUGH_GRADIENT,
+ HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                 2, gray.rows/4, 200, 100 );
for( size_t i = 0; i < circles.size(); i++ )
{
\end{lstlisting}
-\cvCppFunc{KeyPoint}
+\cvclass{KeyPoint}
Data structure for salient point detectors
\begin{lstlisting}
class KeyPoint
{
public:
    Point2f pt;     // coordinates of the keypoint
    float size;     // diameter of the meaningful keypoint neighborhood
    float angle;    // computed orientation of the keypoint
                    // (-1 if not applicable)
    float response; // the response by which the strongest keypoints
                    // have been selected
    int octave;     // octave (pyramid layer) from which the keypoint
                    // has been extracted
    int class_id;   // object id, if the keypoints need to be clustered
                    // by the object they belong to
};
\end{lstlisting}
-\cvCppFunc{MSER}
+\cvclass{MSER}
Maximally-Stable Extremal Region Extractor
\begin{lstlisting}
class MSER : public CvMSERParams
{
public:
    // default constructor
    MSER();
    // constructor that initializes all the algorithm parameters
    MSER( int _delta, int _min_area, int _max_area,
          float _max_variation, float _min_diversity,
          int _max_evolution, double _area_threshold,
          double _min_margin, int _edge_blur_size );
    // runs the extractor on the specified image; returns the MSERs,
    // each encoded as a contour (vector<Point>, see findContours)
    // the optional mask marks the area where MSERs are searched for
- void operator()(Mat& image, vector<vector<Point> >& msers, const Mat& mask) const;
+ void operator()( const Mat& image, vector<vector<Point> >& msers, const Mat& mask ) const;
};
\end{lstlisting}
The class encapsulates all the parameters of the MSER extraction algorithm (see \url{http://en.wikipedia.org/wiki/Maximally_stable_extremal_regions}).
-\cvCppFunc{SURF}
+\cvclass{SURF}
Class for extracting Speeded Up Robust Features from an image.
\begin{lstlisting}
class SURF : public CvSURFParams
{
public:
    // default constructor
    SURF();
    // constructor that initializes all the algorithm parameters
    SURF(double _hessianThreshold, int _nOctaves=4,
         int _nOctaveLayers=2, bool _extended=false);
    // returns the number of elements in each descriptor (64 or 128)
    int descriptorSize() const;
    // detects keypoints using the fast multi-scale Hessian detector
    void operator()(const Mat& img, const Mat& mask,
                    vector<KeyPoint>& keypoints) const;
    // detects keypoints and computes the SURF descriptors for them
    void operator()(const Mat& img, const Mat& mask,
                    vector<KeyPoint>& keypoints,
                    vector<float>& descriptors,
                    bool useProvidedKeypoints=false) const;
};
\end{lstlisting}
See the \texttt{find\_obj.cpp} demo in OpenCV samples directory.
-\cvCppFunc{StarDetector}
+\cvclass{StarDetector}
Implements Star keypoint detector
\begin{lstlisting}