FastCV Tutorial

To set up FastCV in the environment, the necessary files were included under /jni/* in the git repository. The files required for compilation are fastcv.h, libfastcv.a, stdint_.h, and fastcv.inl. Android.mk should automatically move them to the out directory under source, except for libfastcv.a, which must be copied to out/target/product/msm8960/obj/STATIC_LIBRARIES/libfastcv_intermediates/. Run mm to make sure that the application can find all the necessary dependencies.
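For reference, a minimal Android.mk sketch for linking against the prebuilt static library might look like the following; the module and source file names here are placeholders, not the project's actual names.

LOCAL_PATH := $(call my-dir)

# the prebuilt FastCV static library shipped under /jni
include $(CLEAR_VARS)
LOCAL_MODULE := libfastcv
LOCAL_SRC_FILES := libfastcv.a
include $(PREBUILT_STATIC_LIBRARY)

# hypothetical JNI module that calls into FastCV
include $(CLEAR_VARS)
LOCAL_MODULE := fastcvdemo
LOCAL_SRC_FILES := fastcvdemo.cpp
LOCAL_STATIC_LIBRARIES := libfastcv
include $(BUILD_SHARED_LIBRARY)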

In your init functions, be sure to call the function below to take advantage of the hardware acceleration; otherwise the FastCV algorithms will significantly slow down your video stream.

fcvSetOperationMode( (fcvOperationMode) FASTCV_OP_PERFORMANCE );
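For context, here is a minimal sketch of how this might sit in a JNI init and shutdown path. The return-value check and the fcvCleanUp() pairing are based on the fastcv.h header; verify them against your FastCV version.

// on init: request the hardware-accelerated implementations
if ( fcvSetOperationMode( (fcvOperationMode) FASTCV_OP_PERFORMANCE ) != 0 ) {
    // the requested mode was not accepted; expect the slower default path
}

// on shutdown: let FastCV release any resources it acquired
fcvCleanUp();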

For any memory that you allocate for FastCV operations, you will need to align it to 128 bits (16 bytes). To do this, make sure that you allocate and de-allocate your memory using the following methods.

void* buffer = fcvMemAlloc( height*width*4, 16);
fcvMemFree( buffer);
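If you want to convince yourself of the alignment, a quick sanity check (purely illustrative) right after the allocation is to test the low four bits of the returned pointer:

#include <assert.h>
#include <stdint.h>

// 16-byte alignment is the same as 128-bit alignment
assert( ((uintptr_t)buffer & 0xF) == 0 );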

One of the most fundamental functions of FastCV is colour space manipulation. If you want to display the image on the screen as a bitmap, you will need to convert the input format from the camera to interleaved RGBA8888. A good reference for colour space information is http://www.fourcc.org/yuv.php. For the Creative Live Cam, the input colour space was YUV 4:2:2 with YUY2 (YUYV) ordering, and the following code was used to convert it to the required RGBA.

// The following code extracts the Y plane and the interleaved CbCr (UV) plane from the YUY2 stream
void* y = fcvMemAlloc( height*width, 16);
void* uv = fcvMemAlloc( height*width, 16);

for (unsigned int i = 0; i < width*height*2; i++) {
    if (i%2 == 0)
        ((char*)y)[i/2] = ((char*)buffer)[i];
    else
        ((char*)uv)[i/2] = ((char*)buffer)[i]; // interleaved u and v
}

// convert the Y plane plus the pseudo-planar CbCr plane into interleaved RGBA8888
uint8_t* rgba = (uint8_t*)fcvMemAlloc( height*width*4, 16);
fcvColorYCbCr422PseudoPlanarToRGBA8888u8( (uint8_t*)y, (uint8_t*)uv, width, height, 0, 0, rgba, 0);
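To actually get the RGBA buffer on screen, one option is to copy it into an Android Bitmap from JNI. Below is a minimal sketch; env and bitmap are assumed to be the usual parameters of your native method, and the Bitmap is assumed to be ARGB_8888 at the same width and height.

#include <android/bitmap.h>
#include <string.h>

void* bitmapPixels;
if ( AndroidBitmap_lockPixels( env, bitmap, &bitmapPixels ) == ANDROID_BITMAP_RESULT_SUCCESS ) {
    // ARGB_8888 bitmaps are laid out as RGBA bytes in memory, matching the FastCV output
    memcpy( bitmapPixels, rgba, width*height*4 );
    AndroidBitmap_unlockPixels( env, bitmap );
}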

The following snippets of code demonstrate the image processing capabilities of FastCV. The first thing that we will look at is pre-processing. The raw image might contain a lot of noise, which gives poor results, so typically a Gaussian filter is applied to smooth the image out, along with a down sample if the resolution is higher than needed. This is another reason why I decided to read in frames at 320x240: we really don’t need all of that detail, and it just makes the processing algorithms run slower. In my example code, I have further down sampled by two, resulting in a 160x120 image, and then ran a 5×5 pixel Gaussian filter. This means each pixel is smoothed over the two neighboring pixels on every side.

int scaledWidth = width/2;
int scaledHeight = height/2;

uint8_t* scaledBuf = (uint8_t*)fcvMemAlloc( scaledWidth*scaledHeight, 16);
uint8_t* filteredBuf = (uint8_t*)fcvMemAlloc( scaledWidth*scaledHeight, 16);

// scale down the image and then apply 5x5 gaussian filter
fcvScaleDownBy2u8( (uint8_t*)y, width, height, scaledBuf );
fcvFilterGaussian5x5u8( scaledBuf, scaledWidth, scaledHeight, filteredBuf, 1 );

The next step is to run the image processing algorithms. The first feature we will take a look at is FastCV’s corner detection. This is provided as part of the sample code when you download the FastCV library, but we will go through it anyway. This algorithm takes in a grey scale image and is useful for detecting the corners of an enclosed room or identifying a box. It is pretty straightforward; just make sure that you set up the memory allocations correctly. The function returns the detected corners as interleaved pairs of uint32_t x and y coordinates. To take a look at the results I got, look under the Video tab.

uint32_t numCorners, maxCorners = 1000;
uint32_t* corners = (uint32_t*)fcvMemAlloc( maxCorners*4*2, 16);

// args: grey scale image, width, height, stride (0 = use width), FAST barrier threshold, border, xy output, capacity, out count
fcvCornerFast9u8 ( (uint8_t*)y, width, height, 0, 25, 0, corners, maxCorners, &numCorners);

// The following will display the detected corners in red on the screen.
// corners holds interleaved x,y pairs, so there are numCorners*2 entries.
for( unsigned int j = 0; j < numCorners*2; j+=2) {
    uint32_t pIdx = corners[j] * bpp + corners[j+1] * width * bpp;   // bpp = 4 bytes per RGBA pixel
    if (pIdx < width*height*4) {
        rgba[pIdx] = 255;
        rgba[pIdx + 1] = 0;
        rgba[pIdx + 2] = 0;
        rgba[pIdx + 3] = 255;
    }
}

fcvMemFree( corners);

The next thing that we will look at is Hough line detection. This will take a grey scale image and then look for any lines in the image, which is useful for line-following robots. However, in any case where the line curves, this algorithm will fail to follow it. Once you have everything set up as shown below, the function will return the start and end positions of each line in the fcvLine datatype. Because you only get the start and end positions, Bresenham’s line algorithm is used to draw each line from start to end so it can be displayed on the screen.

const uint32_t maxLines = 15;
uint32_t numLines;
fcvLine detLines[maxLines];

const int maxPnts = 500;
int numPnts;
Point points[maxPnts];   // Point is a simple struct with int x, y members

// args: grey scale image, width, height, stride, threshold, capacity, out count, out lines
fcvHoughLineu8 ( (uint8_t*)y, width, height, width, 0.25, maxLines, &numLines, detLines);

// The following will display the detected lines in red on the screen
for (unsigned int i = 0; i < numLines; i++) {
    numPnts = bresenham( points, detLines[i].start.x, detLines[i].start.y, detLines[i].end.x, detLines[i].end.y, maxPnts );
    for( int j = 0; j < numPnts; j++) {
        int pIdx = points[j].x * bpp + points[j].y * width * bpp;
        rgba[pIdx] = 255;
        rgba[pIdx + 1] = 0;
        rgba[pIdx + 2] = 0;
        rgba[pIdx + 3] = 255;
    }
}

Bresenham’s algorithm is as follows.

static int bresenham( Point* points, double x1, double y1, double x2, double y2, int max) {
    Point pnt1( round( x1), round( y1));
    Point pnt2( round( x2), round( y2));

    // if the line is steep, swap x and y so we can iterate over x
    bool steep = (abs(pnt2.y - pnt1.y) > abs(pnt2.x - pnt1.x));
    if (steep) {
        swap( pnt1.x, pnt1.y);
        swap( pnt2.x, pnt2.y);
    }

    // make sure that the line only goes left -> right
    if ( pnt1.x > pnt2.x) {
        swap( pnt1.x, pnt2.x);
        swap( pnt1.y, pnt2.y);
    }

    int dx = abs(pnt2.x - pnt1.x);
    int dy = abs(pnt2.y - pnt1.y);

    int error = dx >> 1;
    int ystep = (pnt1.y < pnt2.y) ? 1 : -1;        // 1 if going up, -1 if down

    int y = pnt1.y;
    int curPnt = 0;
    for (int x = pnt1.x; x <= pnt2.x; x++) {   // include the end point
        if ( steep) points[curPnt] = Point(y, x);   // undo the x/y swap
        else        points[curPnt] = Point(x, y);
        curPnt++;
        if (curPnt == max)
            return curPnt;

        error -= dy;
        if (error < 0) {
            y += ystep;
            error += dx;
        }
    }
    return curPnt;
}

One of the more interesting algorithms, I find, is the Canny edge detector. This algorithm takes in a grey scale image and is useful for tracing lines on the ground or for object identification. It is more versatile than Hough line detection, but it will not give a solid line. Hence, if your feedback loop requires a solid line to calculate the angle errors, take the output of the Canny edge detector and feed it into your Hough line detection (a short sketch of this chaining follows the snippet below).

uint8_t* cannyBuf = (uint8_t*)fcvMemAlloc(scaledWidth * scaledHeight, 16);
fcvFilterCanny3x3u8 ( filteredBuf, scaledWidth, scaledHeight, cannyBuf, 12, 14);

// instead of displaying on screen, I decided to dump this one to a file (dump_fd is an already-open file descriptor)
write(dump_fd, cannyBuf, scaledWidth * scaledHeight);

fcvMemFree (cannyBuf);
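As mentioned above, if you need solid lines out of the edge map, the Canny output can be fed straight into the Hough line detector. A minimal sketch, reusing the buffers and thresholds from the snippets above:

// run Canny on the pre-processed image, then Hough on the resulting edge map
uint8_t* edgeBuf = (uint8_t*)fcvMemAlloc( scaledWidth*scaledHeight, 16);
fcvFilterCanny3x3u8( filteredBuf, scaledWidth, scaledHeight, edgeBuf, 12, 14);

const uint32_t maxEdgeLines = 15;
uint32_t numEdgeLines;
fcvLine edgeLines[maxEdgeLines];
fcvHoughLineu8( edgeBuf, scaledWidth, scaledHeight, scaledWidth, 0.25, maxEdgeLines, &numEdgeLines, edgeLines);

fcvMemFree( edgeBuf);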

The output of the Canny edge detector is shown in the figure below.

[Figure: Canny Results]

It is based on this box that was filmed:

[Figure: Box with Line]