SREye - Fisheye Image Dewarping Module for NVR and Viewer
DESCRIPTION
Describes the functions of the SREye dewarping module and its usage for fisheye dewarping in an NVR or viewer. SREye provides flexible scene adjustment and is easy to use.
TRANSCRIPT
Note on the fisheye image dewarping and SREye.DLL
(Young Serk Shim, SRVISION, [email protected])
1 Fisheye image dewarping
1.1 What is fisheye image dewarping?
A fisheye camera generates an ultra-wide-angle image which is usually severely warped, the
so-called fisheye image. The advantage of a fisheye camera is that it provides an
ultra-wide-angle view without blind spots, which is not possible with a conventional
surveillance camera of relatively narrow field of view. Despite some drawbacks, the
fisheye camera system, partly or as a whole, will be a strong candidate for most future
surveillance solutions. The application areas include blind-spot-free, wide-area surveillance
and safety-enhancing surround-view systems for vehicles such as cars, ships, tanks, and
various heavy equipment.
Generally, for fisheye image dewarping, we try to generate a number of
perspective or panoramic views of an appropriate angle of field of view, PAN/TILT angles,
and an appropriate orientation of the view plane (equivalent to a
movement of the view point). For this, the input fisheye image is spatially transformed
by a backward mapping designed by the so-called scene parameters: the
angle of field of view, the PAN/TILT angles, and the orientation of the view plane. For
panoramic views, the cylindrical projection is usually used.
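As a rough illustration of this backward mapping, the sketch below computes, for one pixel of a perspective output view, the corresponding source coordinates on the fisheye image. It assumes an ideal equidistant 180° lens and an identity view orientation (no PAN/TILT); the function name and the lens model are our assumptions, not part of the SREye.dll API.

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Backward mapping for one output pixel of a perspective view, assuming
// an ideal equidistant fisheye (radial distance grows linearly with the
// off-axis angle) whose 'radius' parameter is the radial distance at
// 90 degrees off-axis. All names here are illustrative.
std::pair<double, double> dewarpPixel(int u, int v,          // output pixel
                                      int outW, int outH,    // output size
                                      double hfovDeg,        // horizontal FOV
                                      double cx, double cy,  // optical origin
                                      double radius)         // r at 90 deg
{
    const double PI = 3.14159265358979323846;
    // Focal length of the virtual pinhole camera, from the horizontal FOV.
    double f = (outW / 2.0) / std::tan(hfovDeg * PI / 360.0);
    // Ray through the output pixel; the view axis is +z (no PAN/TILT here).
    double x = u - outW / 2.0;
    double y = v - outH / 2.0;
    double theta = std::atan2(std::sqrt(x * x + y * y), f); // off-axis angle
    double phi   = std::atan2(y, x);                        // image azimuth
    double r = radius * theta / (PI / 2.0); // equidistant radial distance
    return { cx + r * std::cos(phi), cy + r * std::sin(phi) };
}
```

The dewarped output is then filled by sampling the source image (with interpolation, cf. CH_INT) at the coordinates this mapping returns for every output pixel.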
For the dewarping transformation, the fisheye camera parameters such as
the coordinates of the optical origin, the lens distortion function, and the pixel aspect
ratio, the so-called camera intrinsic parameters, should be characterized by some means.
For multi-fisheye-camera applications, additional estimation of the extrinsic parameters
of the individual cameras will be necessary for the construction of virtual 3D views from
distributed fisheye cameras.
In this note, the function and usage of SREye.dll will be described. SREye.dll is
designed to provide dewarped fisheye images once the camera parameters and the
scene parameters are defined. It provides a flexible output image size, in a
channelized or composite format. SREye has been developed to support
Server/Client NVRs and fisheye viewers handling multiple (1~16) fisheye cameras, and
it can also be applied to smartphones for fisheye image viewing. If this module is
embedded in the IP camera FPGA, a camera-embedded fisheye dewarping solution is
also possible.
First, the scene parameters which determine the view will be briefly explained. Next,
the general flow of fisheye image dewarping will be given, and the functional flow will
be summarized. After that, a detailed description and explanation of our software
module will be given, including the necessary data declarations and the procedural
steps for fisheye camera image dewarping.
The advantages of SREye.dll can be summarized as:
- Supports various input/output image formats (YUV422, YUV420, RGB)
- Supports composite or channelized outputs for several user-defined perspective
and/or panoramic (cylindrical projection) channels.
- Supports flexible scene parameters so that the user can easily generate the views
he wants to see using a user-developed GUI.
- For a specific perspective view, the view can easily be changed by PAN/TILT,
ELEVATION/SLANT, ROTATION, and ZOOM operations.
The following figures show examples of dewarped fisheye images. The top view
is obtained from a camera installed at the top and looking down at the
ground. The front view is obtained from a camera looking forward like a normal
camera but with an ultra-wide angle of view. A snapshot captured from the fisheye NVR
is also given. Our SREye module is licensed for and operating on an NVR system which
can record and monitor up to 64 cameras, including up to 9 fisheye cameras.
[Top View]
1.2 Scene parameters
The dewarped image generated from the input fisheye image is governed by scene
parameters which are given by the user. The scene parameters can be defined
differently if one wants to control the view in other ways. The scene parameters we are
using are listed below with brief comments, for the perspective view and the panoramic
view, respectively. For the explanation of altitude and azimuth, the figure above will
be helpful. If the vector P denotes the viewing vector, then it corresponds to the view
angle. The view plane is normal to the vector P.
1.2.1 Scene parameters for perspective view
1.2.1.1 Output image size: width and height in pixel. (ex. 640x480)
1.2.1.2 Horizontal field of view: Initially set as 90°, and changed by the ZOOM
control (0°~180°)
1.2.1.3 Center of view (COV or POV, the point we are looking at): This point will be
displayed as the center of the output image of the perspective view. This
quantity will be expressed as the coordinates on the pixel-based input
source image plane.
1.2.1.4 ELEVATION/SLANT angles: The view angle means the orientation of the view
plane and is expressed as the altitude and azimuth angles (φ, θ) of the normal
vector in the spherical coordinate system, as in the figure above. When the
view angles are determined by the PAN/TILT operation, the azimuth and
altitude angles will be the same as the PAN and TILT angles, respectively. By
the ELEVATION and/or SLANT operation, the view angles are increased or
decreased relative to the PAN/TILT angles.
1.2.1.5 ROTATION angle: The rotation angle is the amount of rotation around the
center of the image, clockwise or counter-clockwise.
1.2.1.6 ZOOM: ZOOM is the increment or decrement of the horizontal field of view.
1.2.1.7 TRANSLATION: Translation is a shift of the center of view on the view plane.
Usually this function is not essential for view control.
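As a minimal sketch of 1.2.1.4, the view-plane normal vector P can be built from the altitude/azimuth pair (φ, θ) under one possible spherical-coordinate convention; SREye's internal axis convention may differ, so treat the function below as illustrative only.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// View-plane normal from altitude phi and azimuth theta (degrees), using
// the standard spherical-coordinate convention with z pointing "up".
// This is a sketch of the parameterization, not SREye's internal code.
Vec3 viewNormal(double phiDeg, double thetaDeg)
{
    const double D2R = 3.14159265358979323846 / 180.0;
    double phi = phiDeg * D2R, theta = thetaDeg * D2R;
    return { std::cos(phi) * std::cos(theta),
             std::cos(phi) * std::sin(theta),
             std::sin(phi) };
}
```

Under this convention an altitude of 90° gives the straight-up (or, for a ceiling-mounted camera, straight-down) view, and an altitude of 0° a horizontal view.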
1.2.2 Scene parameters for panoramic view (cylindrical projection)
1.2.2.1 Output image size: (width, height) in pixel. (ex. 640x240)
1.2.2.2 Horizontal field of view: Fixed as 180°.
1.2.2.3 Vertical field of view: Initially set as 61° for 8:3 output image aspect ratio.
1.2.2.4 Center of view (the point we are looking at): This point will be displayed as
a center of the output image of the panoramic view. This quantity will be
expressed as the coordinates on the pixel-based input source image plane.
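The 61° initial vertical field of view in 1.2.2.3 follows from the cylindrical projection geometry: horizontally the panorama spans an arc (the 180° horizontal FOV in radians), while vertically it spans a tangent, so an output aspect ratio of w:h implies 2·tan(vFOV/2) = hFOV_rad·(h/w). A small check (our helper, not part of SREye.dll):

```cpp
#include <cassert>
#include <cmath>

// Vertical FOV implied by a cylindrical panorama's aspect ratio:
// horizontal pixels span an arc length (hFOV in radians), vertical
// pixels span 2*tan(vFOV/2) on the cylinder, so equal pixel pitch gives
//   2*tan(vFOV/2) = hFOV_rad * (height/width).
double verticalFovDeg(double hfovDeg, double width, double height)
{
    const double PI = 3.14159265358979323846;
    double hfovRad = hfovDeg * PI / 180.0;
    return 2.0 * std::atan(hfovRad * height / width / 2.0) * 180.0 / PI;
}
```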
1.3 Block diagram
Image dewarping consists of a camera calibration stage, a scene parameter setup
stage, and a multi-channel dewarping stage. SREye.dll performs the multi-channel
dewarping.
1.3.1 Calibration of fisheye camera intrinsic parameters
As already mentioned, for satisfactory fisheye image dewarping, the camera
intrinsic parameters should be known and exploited in the computation of the dewarping
spatial transform. They are:
1.3.1.1 PixelAspectRatio
1.3.1.2 Optical origin of image in pixel based sensor coordinates
1.3.1.3 Radial distance corresponding to elevation angle 0°.
1.3.1.4 Radial and tangential distortion function
To provide the necessary camera parameters, an appropriate calibration tool
for the specific application should be provided and/or the manufacturer's data should
be used. Sometimes we take some parameters from the manufacturer's data and
extract the remaining parameters using a calibration tool.
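To illustrate how these intrinsic parameters enter the computation, the sketch below maps one sensor pixel back to an elevation angle: shift by the optical origin, square up the pixels with the aspect ratio, then invert the radial distortion function. An ideal equidistant distortion function is assumed here for simplicity; the field names mirror the FISHEYE_IMAGE structure of section 2.2, and the helper itself is ours.

```cpp
#include <cassert>
#include <cmath>

// Elevation angle (degrees) of the scene point seen at sensor pixel
// (px, py), assuming an ideal equidistant lens: the radial distance is
// proportional to the off-axis angle, and 'radius' is the radial distance
// at elevation 0 (i.e. 90 degrees off-axis). Illustrative only.
double elevationDeg(double px, double py,
                    double centerX, double centerY, // optical origin
                    double pixelAspectRatio,        // PixelAspectRatio
                    double radius)                  // Radius at elevation 0
{
    double dx = px - centerX;
    double dy = (py - centerY) * pixelAspectRatio; // square up the pixels
    double r  = std::sqrt(dx * dx + dy * dy);
    return 90.0 * (1.0 - r / radius); // equidistant: linear in r
}
```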
1.3.2 Set up scene parameters
1.3.2.1 Output image format: The output image consists of several sub-images
which are called channel images. Each channel image may be a
perspective or a panoramic view whose scene parameters are set up
channel by channel before dewarping.
1.3.2.2 Output image size[ch]: (width, height)
1.3.2.3 Horizontal field of view[ch]
1.3.2.4 Center of view[ch]
1.3.2.5 View angle increment[ch]: For perspective views, we can change the view
angle, which is initially determined by the center of view only, by changing
the altitude and azimuth angles via the ELEVATION and/or SLANT operations.
1.3.2.6 Zoom[ch]: change of horizontal field of view
1.3.2.7 Rotation[ch]: rotation of viewing plane around the normal vector of the
plane
1.3.3 Authentication: for module protection by checking secret key value (internal use)
1.3.4 Image dewarping:
1.3.4.1 Input image frame buffer pointer
1.3.4.2 Get dewarped output image frame by reading from the channelized or
composite output image.
Usually, the setup of the scene parameters is done through a graphical user interface,
separately designed to adjust the scene parameters interactively.
2 How to use SREye.dll
2.1 Functions
2.1.1 The DLL consists of four functions which the user can use for image dewarping.
2.1.1.1 Function InitSREye
2.1.1.1.1 Authentication by checking secret
For the authentication of the SREye module, a secret check is performed.
For the NVR server, the NVR client, and the camera viewer, the key file, the
key from the NVR server, and the camera key are used, respectively.
2.1.1.1.2 Initialization
2.1.1.1.2.1 Initialize scene parameters for dewarping
2.1.1.1.2.2 Allocate Output image frame buffer
2.1.1.1.2.3 Compute spatial dewarping transform for each channel
2.1.1.2 Function SREye
2.1.1.2.1 Update the spatial dewarping transform for each channel if the camera
posture for that channel has changed
2.1.1.2.2 Generate the dewarped output image frame and store it in the output
image frame buffer
2.1.1.3 Function CloseSREye:
This function frees the allocations previously made.
2.1.1.4 Function DetachSREye:
This function releases the fisheye camera from the dewarping service of
SREye.dll.
2.2 Data types and declaration (SREye.h, header file provided by SRVISION)
#define SREYE ("SREye")
#define NUM_CHANNEL (4)
#define NUM_PAN_CHANNEL (2)
#define CH_0 (0x0001)
#define CH_1 (0x0002)
#define CH_2 (0x0004)
#define CH_3 (0x0008)
//#define CH_4 (0x0010)
//#define CH_5 (0x0020)
//#define CH_6 (0x0040)
//#define CH_7 (0x0080)
#define CH_A (0x0100)
#define CH_B (0x0200)
#define CH_INT (0x0800) // interpolation or not
#define CH_OVER (0x1000) // internal use
enum SREYE_MODE {
SERVER,
CLIENT
};//Mode selection for server/client module
enum FISHEYE_TYPE {
EQUIDISTANT = 0,
STEREOGRAPHIC
};//Lens type selection, this will become obsolete.
typedef struct _FISHEYE_IMAGE {
int Radius; // radial distance for φ = 0
int CenterX; // in pixel-based sensor coordinates
int CenterY; //in pixel-based sensor coordinates
} FISHEYE_IMAGE ; //The structure for storing the calibration data of fisheye camera
typedef struct _PICT_SIZE {
int width;
int height;
int line_size;
} PICT_SIZE;//for storing input, output image size related data
typedef struct _INIMAGE {
char* inP;
} INIMAGE; //pointer for input image data
typedef struct _OUTIMAGE {
char* outP; // composite picture pointer
char* outP[NUM_CHANNEL]; // per-channel perspective image pointers
char* PANoutP[NUM_PAN_CHANNEL]; // panoramic channel image pointers
char* outOVER; // for internal use
} OUTIMAGE; // pointer array for the dewarped picture and/or channel images
typedef struct _POSTURE {
POINT POV;
int Rotation;
int Zoom;
int Elevation;
int Slant;
} POSTURE;//Parameters for perspective views
typedef struct _POINTOFVIEW {
POSTURE CAM_POSTURE[NUM_CHANNEL];
POINT PAN_POV[NUM_PAN_CHANNEL];
} POINTOFVIEW; // for storing parameters for multi-channel views
typedef struct _SREYE_PARAMETER {
SREYE_MODE SREye_Mode;
PICT_SIZE Src_Size; // (1280X960) etc...
FISHEYE_IMAGE Fish_Image;
POINTOFVIEW *Point_of_View;
void *Lens_Parameter;// internal use
UINT channel; // any OR of CH_0..CH_3, CH_A, CH_B, CH_INT; CH_OVER for internal use
PICT_SIZE Out_Size; // 640x480, 320x240
PICT_SIZE CH_Out_Size; // 320x240 ...
PICT_SIZE Pan_Out_Size; // 640 X 240
INIMAGE InImage; // YUV Buffer pointer
OUTIMAGE OutImage; // unwarped picture pointer
char* IpAddress; // 192.168.0.xxx
char* ClientKey; // Client Key
} SREYE_PARAMETER;
__declspec(dllimport)
bool SREye(SREYE_PARAMETER *arg);// Main Converting Routine
__declspec(dllimport)
int InitSREye(SREYE_PARAMETER* arg);// Initialize for SREye
__declspec(dllimport)
void CloseSREye(void);// Close SREye
__declspec(dllimport)
void DetachSREye(void);
__declspec(dllimport) // for server:client authentication
void GetSREyeClientKey(char **arg);
__declspec(dllimport) // for server:management
void ReleaseSREyeClient(char **arg);
2.3 Programming steps (Explanation of notes for SREye.DLL; will be updated. Subject to
minor modifications for each version)
2.3.1 Add the following in the header file.
#include "SREye.h"
SREYE_PARAMETER SREyeInitP;
2.3.2 Open camera
2.3.3 Get image size of input source image: g_nWidth, g_nHeight
2.3.4 New Qbuffer (input image frame buffer for fisheye image, YUV or RGB) and
initialize
Qbuffer = new char[g_nWidth*g_nHeight*BYTES_PER_PIXEL];
// (ex. BYTES_PER_PIXEL: RGB = 3 or 4; YUV = 2)
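For reference, the Qbuffer size can be computed per pixel format as below. The 2 bytes/pixel (YUV) and 3-or-4 bytes/pixel (RGB) figures follow the comment above; the 3/2 bytes/pixel figure for planar YUV420 and the helper itself are our additions, not part of SREye.h.

```cpp
#include <cassert>
#include <cstddef>

enum PixelFormat { YUV420, YUV422, RGB24, RGB32 };

// Bytes needed for one input frame of the given size and format.
size_t qbufferBytes(int width, int height, PixelFormat fmt)
{
    size_t n = static_cast<size_t>(width) * height;
    switch (fmt) {
        case YUV420: return n * 3 / 2; // planar Y + quarter-size U and V
        case YUV422: return n * 2;     // packed, 2 bytes per pixel
        case RGB24:  return n * 3;
        default:     return n * 4;     // RGB32 / padded RGB
    }
}
```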
2.3.5 Set up parameters (camera parameters and/or scene parameters for each channel,
perspective or panoramic) and initialize SREye.
2.3.5.1 Declarations
FISHEYE_TYPE LensType= EQUIDISTANT; // or STEREOGRAPHIC
char *ClientKey;
POINTOFVIEW PointOfView; // sensor plane coordinate value
2.3.5.2 Setup parameters
2.3.5.2.1 Setup for operation
SREyeInitP.SREye_Mode = SERVER; // for server
// or, for a client:
SREyeInitP.SREye_Mode = CLIENT;
GetSREyeClientKey(&ClientKey); // for client
SREyeInitP.ClientKey = ClientKey; // for client
2.3.5.2.2 Setup camera parameters
SREyeInitP.Lens_Parameter = (void *) (&LensType);
SREyeInitP.Fish_Image.CenterX = g_nWidth/2; // example
SREyeInitP.Fish_Image.CenterY = g_nHeight/2; // example
SREyeInitP.Fish_Image.Radius = g_nHeight/2; // example
SREyeInitP.Src_Size.width = g_nWidth;
SREyeInitP.Src_Size.height = g_nHeight;
2.3.5.2.3 Setup scene parameters for each channel
PointOfView.CAM_POSTURE[0].POV = CPoint(x0,y0);
PointOfView.CAM_POSTURE[1].POV = CPoint(x1,y1);
PointOfView.CAM_POSTURE[2].POV = CPoint(x2,y2);
PointOfView.CAM_POSTURE[3].POV = CPoint(x3,y3);
PointOfView.PAN_POV[0] = CPoint(g_nWidth/2, g_nHeight/2); // example
PointOfView.PAN_POV[1] = CPoint(g_nWidth/2, g_nHeight*3/4); // example
PointOfView.CAM_POSTURE[0].Rotation = 0; // (-1440 ~ +1440) = (-360° ~ +360°)
PointOfView.CAM_POSTURE[1].Rotation = 20;
PointOfView.CAM_POSTURE[2].Rotation = 36;
PointOfView.CAM_POSTURE[3].Rotation = 18;
PointOfView.CAM_POSTURE[0].Zoom = 0; // (-180 ~ +180) = (FOV: 0° ~ 180°)
PointOfView.CAM_POSTURE[1].Zoom = 0;
PointOfView.CAM_POSTURE[2].Zoom = 0;
PointOfView.CAM_POSTURE[3].Zoom = 0;
PointOfView.CAM_POSTURE[0].Slant = 0; // (-720 ~ +720) = (-180° ~ +180°)
PointOfView.CAM_POSTURE[1].Slant = 0;
PointOfView.CAM_POSTURE[2].Slant = 0;
PointOfView.CAM_POSTURE[3].Slant = 0;
PointOfView.CAM_POSTURE[0].Elevation = 0; // (-360 ~ +360) = (-90° ~ +90°)
PointOfView.CAM_POSTURE[1].Elevation = 0;
PointOfView.CAM_POSTURE[2].Elevation = 0;
PointOfView.CAM_POSTURE[3].Elevation = 0;
SREyeInitP.Point_of_View = &PointOfView;
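The range comments above suggest the Rotation/Slant/Elevation fields are stored in quarter-degree integer units (±1440 ↔ ±360°, ±720 ↔ ±180°, ±360 ↔ ±90°). A small conversion helper under that reading (our interpretation; verify against your SREye.h version — note Zoom uses a different scale, -180~+180 covering FOV 0°~180°):

```cpp
#include <cassert>

// Degrees -> posture units, reading the ranges in the comments above as
// 4 integer units per degree (±1440 = ±360°, ±720 = ±180°, ±360 = ±90°).
// This interpretation is ours; check it against your SREye.h version.
int postureUnits(double degrees)
{
    return static_cast<int>(degrees * 4.0);
}
```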
2.3.5.2.4 Setup parameters for input, output of dewarping
// Channels on a CAMERA
SREyeInitP.channel = CH_0 | CH_1 | CH_2 | CH_3 | CH_INT; // or
SREyeInitP.channel = CH_0 | CH_3 | CH_A; // or
SREyeInitP.channel = CH_0 | CH_3 | CH_A | CH_INT; // or
SREyeInitP.channel = CH_A | CH_B | CH_INT; // etc.
//Input Image
SREyeInitP.InImage.inP = Qbuffer;
SREyeInitP.Out_Size.width = 640;
SREyeInitP.Out_Size.height = 480;
SREyeInitP.Out_Size.line_size = ???; // user defined
//Image size for perspective channels
SREyeInitP.CH_Out_Size.width = 320;
SREyeInitP.CH_Out_Size.height = 240;
SREyeInitP.CH_Out_Size.line_size = ???; // user defined
//Image size for panoramic channels
SREyeInitP.Pan_Out_Size.width = 640;
SREyeInitP.Pan_Out_Size.height = 240;
SREyeInitP.Pan_Out_Size.line_size = ???; // user defined
int ret = InitSREye(&SREyeInitP);
if (ret != 0) {
    // error during initialization
}
During this InitSREye call, authentication and spatial dewarping transform computation
will be carried out.
2.3.6 Get dewarped output image
2.3.6.1 Frame buffer pointer for dewarped image (examples)
2.3.6.1.1 YUV420 format
// YUV image pointers for perspective channels
outY[ch] = SREyeInitP.OutImage.outP[ch];
outU[ch] = outY[ch] + CH_Out_Size.height * Out_Size.line_size;
outV[ch] = outY[ch] + CH_Out_Size.height * Out_Size.line_size * 5/4;
// YUV image pointers for panorama channels
PANoutY[ch] = SREyeInitP.OutImage.PANoutP[ch];
PANoutU[ch] = PANoutY[ch] + Pan_Out_Size.height * Pan_Out_Size.line_size;
PANoutV[ch] = PANoutY[ch] + Pan_Out_Size.height * Pan_Out_Size.line_size * 5/4;
The pointer for a non-existing channel will be set to NULL.
// YUV image pointers for composite picture
outYUV = SREyeInitP.OutImage.outP;
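The plane layout implied by the YUV420 pointer arithmetic above can be checked with a small helper (the names and the example sizes are ours): the U plane starts right after the full-resolution Y plane, and the V plane after the quarter-size U plane, i.e. at height × stride × 5/4.

```cpp
#include <cassert>

struct Yuv420Offsets { long y, u, v; }; // byte offsets from the channel base

// Offsets of the Y, U, and V planes inside one YUV420 channel buffer of
// the given height and line size (stride), matching the arithmetic in
// 2.3.6.1.1 above.
Yuv420Offsets yuv420Offsets(int height, int lineSize)
{
    long plane = static_cast<long>(height) * lineSize; // Y plane size
    return { 0L, plane, plane * 5 / 4 };
}
```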
2.3.6.1.2 YUV422 format
// YUV image pointers for perspective channels
outY[ch] = SREyeInitP.OutImage.outP[ch];
outU[ch] = outY[ch] + CH_Out_Size.height * Out_Size.line_size;
outV[ch] = outY[ch] + CH_Out_Size.height * Out_Size.line_size * 3/2;
// YUV image pointers for panorama channels
PANoutY[ch] = SREyeInitP.OutImage.PANoutP[ch];
PANoutU[ch] = PANoutY[ch] + Pan_Out_Size.height * Pan_Out_Size.line_size;
PANoutV[ch] = PANoutY[ch] + Pan_Out_Size.height * Pan_Out_Size.line_size * 3/2;
The pointer for a non-existing channel will be set to NULL.
// YUV image pointers for composite picture
outYUV = SREyeInitP.OutImage.outP;
2.3.6.1.3 RGB format
// RGB image pointers for four perspective channels
outR[ch] = SREyeInitP.OutImage.outP[ch];
outG[ch] = outR[ch] + CH_Out_Size.height * Out_Size.line_size;
outB[ch] = outG[ch] + CH_Out_Size.height * Out_Size.line_size;
// RGB image pointers for two panorama channels
PANoutR[ch] = SREyeInitP.OutImage.PANoutP[ch];
PANoutG[ch] = PANoutR[ch] + Pan_Out_Size.height * Pan_Out_Size.line_size;
PANoutB[ch] = PANoutG[ch] + Pan_Out_Size.height * Pan_Out_Size.line_size;
The pointer for a non-existing channel will be set to NULL.
// RGB image pointers for composite picture
outRGB = SREyeInitP.OutImage.outP;
2.3.6.2 Get picture (1 frame)
LOOP {
if(1 frame is ready) // If the data is available,
{
SREye(&SREyeInitP);
~~~
}
}
2.3.7 Program termination (Close camera)
CloseSREye();
DetachSREye(); // before ending thread
* For defining the POV, we use two options. One is to use pixel-based source image coordinates, and the
other is to use dewarped image coordinates for each channel. When using the latter option, we make
the sign negative. This option is for the convenience of designing a graphical user interface for
setting the scene parameters.
* The key file with the name “SREYE” should be placed in the folder where SREye.dll is placed.