Article From:https://www.cnblogs.com/python-frog/p/9218165.html

## One. Knowledge Preview

1. How do we traverse every pixel of an image?

2. How does OpenCV store the image matrix?

3. How do we measure the performance of our algorithms?

4. What are look-up tables and why do we use them?

## Two. What are look-up tables and why do we need to use them?

Assume a three-channel RGB image. Each channel of a pixel can take 256 different values, so a single pixel may take 256\*256\*256 = 16,777,216 (about 16 million) possible colors, which is quite expensive for practical computation. In practice, only a small number of colors is needed to achieve the same visual effect. One common approach is to reduce the color space; with the following method we can shrink each channel's range by a factor of 10.

However, if we evaluate the reduction formula for every pixel, the cost is still large. Therefore, we introduce a new tool: the look-up table.

```cpp
// Define the look-up table
uchar table[256];
int divideWidth = 10;
for (int i = 0; i < 256; ++i)
{
    table[i] = (uchar)(divideWidth * (i / divideWidth));
}
```

divideWidth can be understood as the reduction factor: with a value of 10, each channel drops from 256 possible values to about 25, so a single pixel has only 25\*25\*25 = 15,625 possible colors, a huge reduction compared with the previous ~16 million. Then, by using a channel's value as an index into the look-up table, we read off the final color value directly and avoid the arithmetic entirely.

## Three. How do we measure the performance of our algorithms?

In OpenCV we often need to measure how long an interface or algorithm takes. OpenCV provides two functions for this: cv::getTickCount() and cv::getTickFrequency(). The former returns the number of CPU ticks elapsed since system start, and the latter returns the number of ticks per second, so elapsed time can be measured with the following code:

```cpp
double t = (double)getTickCount();

// do something ...

t = ((double)getTickCount() - t) / getTickFrequency();
cout << "Times passed in seconds: " << t << endl;
```

## Four. How does OpenCV store the image matrix?

Let's look back at the earlier question of how images are stored in memory. Suppose our image is an n\*m grayscale image; it is stored in memory as one block of gray values, laid out row after row.

If the image is a multichannel RGB image, each pixel instead occupies several consecutive bytes in memory, one per channel.

Note that OpenCV stores the channels in BGR order rather than the original RGB. In addition, when memory is sufficient the matrix is stored continuously, one row immediately after another, which speeds up image scanning; whether a Mat is stored continuously can be checked with the cv::Mat::isContinuous() function.

## Five. How do we traverse each pixel of the image?

When it comes to performance, nothing beats C-style [] array access, so the look-up-table color reduction can be implemented efficiently as follows:

```cpp
Mat& ScanImageAndReduceC(Mat& I, const uchar* const table)
{
    // accept only char type matrices
    CV_Assert(I.depth() == CV_8U);
    int channels = I.channels();
    int nRows = I.rows;
    int nCols = I.cols * channels;
    // A continuous matrix can be treated as one long row.
    if (I.isContinuous())
    {
        nCols *= nRows;
        nRows = 1;
    }

    int i, j;
    uchar* p;
    for (i = 0; i < nRows; ++i)
    {
        p = I.ptr<uchar>(i);
        for (j = 0; j < nCols; ++j)
        {
            p[j] = table[p[j]];
        }
    }
    return I;
}
```

In addition, we can traverse the image with the iterator method provided by OpenCV:

```cpp
Mat& ScanImageAndReduceIterator(Mat& I, const uchar* const table)
{
    CV_Assert(I.depth() == CV_8U);
    const int channels = I.channels();
    switch (channels)
    {
    case 1:
    {
        MatIterator_<uchar> it, end;
        for (it = I.begin<uchar>(), end = I.end<uchar>(); it != end; ++it)
        {
            *it = table[*it];
        }
        break;
    }
    case 3:
    {
        MatIterator_<Vec3b> it, end;
        for (it = I.begin<Vec3b>(), end = I.end<Vec3b>(); it != end; ++it)
        {
            (*it)[0] = table[(*it)[0]];
            (*it)[1] = table[(*it)[1]];
            (*it)[2] = table[(*it)[2]];
        }
        break;
    }
    }
    return I;
}
```

The at method can also be used, computing the pixel address on the fly. Defining a Mat_<Vec3b> header _I for the image is a lazy convenience: it lets us use the () operator directly instead of calling the at function every time.

```cpp
Mat& ScanImageAndReduceRandomAccess(Mat& I, const uchar* const table)
{
    CV_Assert(I.depth() == CV_8U);
    const int channels = I.channels();
    switch (channels)
    {
    case 1:
    {
        for (int i = 0; i < I.rows; ++i)
            for (int j = 0; j < I.cols; ++j)
            {
                I.at<uchar>(i, j) = table[I.at<uchar>(i, j)];
            }
        break;
    }
    case 3:
    {
        Mat_<Vec3b> _I = I;
        for (int i = 0; i < I.rows; ++i)
            for (int j = 0; j < I.cols; ++j)
            {
                // equivalently: I.at<Vec3b>(i, j)[0] = table[I.at<Vec3b>(i, j)[0]]; ...
                _I(i, j)[0] = table[_I(i, j)[0]];
                _I(i, j)[1] = table[_I(i, j)[1]];
                _I(i, j)[2] = table[_I(i, j)[2]];
            }
        I = _I;
        break;
    }
    }
    return I;
}
```

The OpenCV library also provides a fast built-in function, LUT, that applies a look-up table for us:

```cpp
Mat lookUpTable(1, 256, CV_8U);
uchar* p = lookUpTable.ptr();
for (int i = 0; i < 256; ++i)
    p[i] = table[i];
LUT(I, lookUpTable, J);
```

Finally, here is the complete program source. It grabs images from the camera and applies the look-up-table color-space reduction to the first 100 frames with each of the methods above.

```cpp
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

Mat& ScanImageAndReduceC(Mat& I, const uchar* const table)
{
    // accept only char type matrices
    CV_Assert(I.depth() == CV_8U);
    int channels = I.channels();
    int nRows = I.rows;
    int nCols = I.cols * channels;
    if (I.isContinuous())
    {
        nCols *= nRows;
        nRows = 1;
    }

    int i, j;
    uchar* p;
    for (i = 0; i < nRows; ++i)
    {
        p = I.ptr<uchar>(i);
        for (j = 0; j < nCols; ++j)
        {
            p[j] = table[p[j]];
        }
    }
    return I;
}

Mat& ScanImageAndReduceIterator(Mat& I, const uchar* const table)
{
    CV_Assert(I.depth() == CV_8U);
    const int channels = I.channels();
    switch (channels)
    {
    case 1:
    {
        MatIterator_<uchar> it, end;
        for (it = I.begin<uchar>(), end = I.end<uchar>(); it != end; ++it)
        {
            *it = table[*it];
        }
        break;
    }
    case 3:
    {
        MatIterator_<Vec3b> it, end;
        for (it = I.begin<Vec3b>(), end = I.end<Vec3b>(); it != end; ++it)
        {
            (*it)[0] = table[(*it)[0]];
            (*it)[1] = table[(*it)[1]];
            (*it)[2] = table[(*it)[2]];
        }
        break;
    }
    }
    return I;
}

Mat& ScanImageAndReduceRandomAccess(Mat& I, const uchar* const table)
{
    CV_Assert(I.depth() == CV_8U);
    const int channels = I.channels();
    switch (channels)
    {
    case 1:
    {
        for (int i = 0; i < I.rows; ++i)
            for (int j = 0; j < I.cols; ++j)
            {
                I.at<uchar>(i, j) = table[I.at<uchar>(i, j)];
            }
        break;
    }
    case 3:
    {
        Mat_<Vec3b> _I = I;
        for (int i = 0; i < I.rows; ++i)
            for (int j = 0; j < I.cols; ++j)
            {
                _I(i, j)[0] = table[_I(i, j)[0]];
                _I(i, j)[1] = table[_I(i, j)[1]];
                _I(i, j)[2] = table[_I(i, j)[2]];
            }
        I = _I;
        break;
    }
    }
    return I;
}

Mat& ScanImageAndReduceLut(Mat& I, Mat& J, const uchar* const table)
{
    Mat lookUpTable(1, 256, CV_8U);
    uchar* p = lookUpTable.ptr();
    for (int i = 0; i < 256; ++i)
        p[i] = table[i];
    LUT(I, lookUpTable, J);
    return J;
}

int main()
{
    Mat frame_input, frame_src, frame_reduce_c, frame_reduce_iterator,
        frame_reduce_random_access, frame_reduce_lut;
    VideoCapture capture(0);
    if (capture.isOpened())
    {
        printf("Open the camera and succeed\n");
        capture >> frame_input;
        printf("image resolution: %d * %d, channel number %d\n",
               frame_input.rows, frame_input.cols, frame_input.channels());
    }

    // define the look-up table
    uchar table[256];
    int divideWidth = 30;
    for (int i = 0; i < 256; ++i)
    {
        table[i] = (uchar)(divideWidth * (i / divideWidth));
    }

    float time_cnts_c = 0, time_cnts_iterator = 0,
          time_cnts_random_access = 0, time_cnts_lut = 0;
    double tick = 0, number = 0;
    while (number < 100)
    {
        ++number;
        printf("read the %f frame image\n", number);
        capture >> frame_input;
        if (frame_input.empty())
        {
            printf("--(!) No captured frame -- Break!");
        }
        else
        {
            frame_src = frame_input.clone();
            frame_reduce_c = frame_input.clone();
            frame_reduce_iterator = frame_input.clone();
            frame_reduce_random_access = frame_input.clone();

            tick = getTickCount();
            ScanImageAndReduceC(frame_reduce_c, table);
            time_cnts_c += ((double)getTickCount() - tick) * 1000 / getTickFrequency();

            tick = getTickCount();
            ScanImageAndReduceIterator(frame_reduce_iterator, table);
            time_cnts_iterator += ((double)getTickCount() - tick) * 1000 / getTickFrequency();

            tick = getTickCount();
            ScanImageAndReduceRandomAccess(frame_reduce_random_access, table);
            time_cnts_random_access += ((double)getTickCount() - tick) * 1000 / getTickFrequency();

            tick = getTickCount();
            ScanImageAndReduceLut(frame_src, frame_reduce_lut, table);
            time_cnts_lut += ((double)getTickCount() - tick) * 1000 / getTickFrequency();

            imshow("original image", frame_src);
            imshow("ScanImageAndReduceC", frame_reduce_c);
            imshow("ScanImageAndReduceIterator", frame_reduce_iterator);
            imshow("ScanImageAndReduceRandomAccess", frame_reduce_random_access);
            imshow("ScanImageAndReduceLut", frame_reduce_lut);
        }
        waitKey(10);
    }

    printf("time_cnts_c: %f\n", time_cnts_c / 100);
    printf("time_cnts_iterator: %f\n", time_cnts_iterator / 100);
    printf("time_cnts_random_access: %f\n", time_cnts_random_access / 100);
    printf("time_cnts_lut: %f\n", time_cnts_lut / 100);
    waitKey(1000000);
    return 0;
}
```

## Six. Experimental results

The timing reference given in the OpenCV tutorial is as follows:

https://docs.opencv.org/master/db/da5/tutorial_how_to_scan_images.html

| Method | Time |
| --- | --- |
| Efficient Way | 79.4717 milliseconds |
| Iterator | 83.7201 milliseconds |
| On-The-Fly RA | 93.7878 milliseconds |
| LUT function | 32.5759 milliseconds |

The actual test results in our environment (480\*640, 3 channels) are as follows:

| Method | Time |
| --- | --- |
| Efficient Way | 4.605026 milliseconds |
| Iterator | 92.846123 milliseconds |
| On-The-Fly RA | 240.321487 milliseconds |
| LUT function | 3.741437 milliseconds |

The experimental results show that OpenCV's built-in LUT function is the most efficient, thanks to the multithreading built into OpenCV. The second fastest is the efficient C-style [] array access.