Research Article: Underwater No-Reference Image Quality Assessment for Display Module of ROV
Di Wu, Fei Yuan, and En Cheng
Key Laboratory of Underwater Acoustic Communication and Marine Information Technology, Xiamen University, Xiamen 361001, China
Correspondence should be addressed to En Cheng: chengen@xmu.edu.cn
Received 24 April 2020; Revised 22 May 2020; Accepted 3 August 2020; Published 28 August 2020
Academic Editor: Chao Huang
Copyright © 2020 Di Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The optical images collected by remotely operated vehicles (ROV) contain a lot of information about the underwater environment (such as distributions of underwater creatures and minerals), which plays an important role in ocean exploration. However, due to the absorption and scattering characteristics of the water medium, some of the images suffer from serious color distortion. These distorted color images usually need to be enhanced so that we can analyze them further. However, at present, no image enhancement algorithm performs well in every scene. Therefore, in order to monitor image quality in the display module of ROV, a no-reference image quality predictor (NIPQ) is proposed in this paper. A unique property that differentiates the proposed NIPQ metric from existing works is the consideration of the viewing behavior of the human visual system and the imaging characteristics of the underwater image in different water types. The experimental results based on the underwater optical image quality database (UOQ) show that the proposed metric can provide an accurate prediction of the quality of the enhanced image.
1 Introduction
In recent years, there have been a growing number of ocean-related activities, such as aquaculture, hydrological exploration, and underwater archaeology. The optical images collected by the observational remotely operated vehicle (ROV) provide very convenient conditions for these activities, and high-quality underwater images play an essential role in them. However, the absorption and scattering effects of water limit the visibility of underwater objects, so the images captured by an optical sensor of ROV often suffer from diminished color (color distortion), which affects our understanding of underwater conditions; poor-quality underwater images therefore need to be enhanced. It is worth noting that not all underwater images need to be enhanced, as shown in Figure 1, because bodies of water exhibit extreme differences in their optical properties. Some lakes are as clear as distilled water, and some change colors several times a year among white, blue, green, and brown [1]. In the ocean, coastal harbors are often murky, while offshore waters are blue and clear. Simply put, whether an
image needs to be enhanced depends on whether the visibility of the underwater objects in the image is good. However, the existing display module of an observational ROV either displays the captured image directly in the terminal or integrates an enhancement algorithm in the system to display the captured image after enhancement. Neither approach determines whether the image needs to be enhanced or not. Also, there is currently no underwater image enhancement algorithm suitable for every scene, so we need reliable underwater image quality metrics to help us preassess whether the captured image needs to be enhanced and to monitor the quality of the enhanced image.
The most accurate methods of image quality estimation are subjective image quality assessment (IQA). However, subjective IQA is expensive, time-consuming, and impractical for real-time implementation and system integration. In order to automatically estimate image quality and save workforce and resources, a reliable underwater objective image quality metric needs to be designed. In underwater image processing scenarios, an ideal reference image is usually not available, so no-reference (NR) IQA is the best choice for evaluating underwater image quality.

[Hindawi, Scientific Programming, Volume 2020, Article ID 8856640, 15 pages. https://doi.org/10.1155/2020/8856640]
Some generic image quality measures, such as histogram analysis [2], the Variance [3], Image Entropy [4], and the color image quality measure (CIQI) [5], have been developed. The popular BRISQUE [6] and LPSI [7] have also been proposed; they summarize the statistical regularities of natural images and calculate the degree of deviation of distorted images. However, these objective measures are not designed specifically for underwater images: they fail to consider the strong absorption and scattering effects of the water, so they are not applicable to underwater images. There are also NR IQA methods based on deep learning, such as DIQA [8], DeepIQA [9], and RankIQA [10], which perform well on in-air images. The deep learning method has a strong learning ability and can automatically extract image features. However, it requires a large amount of data (usually more than 5000 images) for training, and the acquisition of subjective scores (as the ground truth during training) is expensive and time-consuming. Currently, there is no relevant dataset available for training in the underwater image field, so it is temporarily impossible to design NR IQA based on deep learning for underwater images.
It has been pointed out [11] that the overall quality of an image can be effectively obtained by combining image attribute measures. At present, the most commonly used underwater image quality measures, UCIQE [12], UIQM [11], and CCF [13], are designed based on this principle. The UCIQE [12] proposed by Yang Miao is a linear combination of the standard deviation of chroma, the contrast of luminance, and the average of saturation. Karen Panetta's UIQM [11] is a linear combination of colorfulness, sharpness, and contrast. The CCF proposed by Yan Wang et al. [13] starts from an imaging analysis of underwater absorption and scattering characteristics, calculates a fog density index, and evaluates underwater image quality by combining a color index, a contrast index, and the fog density index. They all determine the weighting coefficients by multivariate linear regression over a training image set. However, regardless of the performance on the training set, the generalization ability of these methods is largely limited by the training image samples. At the same time, the attention mechanism of the human visual system (HVS) [14] has not received enough attention in underwater image evaluation. In underwater scenes, the image quality of the target object has higher research value and practical significance than that of the ocean background, which does not belong to the region of interest (ROI). Moreover, the three commonly used underwater metrics measure image quality from the perspective of image statistics, and their robustness is not high; this results in an overemphasis on color richness. This paper holds that, in addition to color richness in the statistical sense, the color fidelity of objects is also very important from the perspective of pixels. It is worth noting that color fidelity here refers to whether the image color is reasonable, not to the difference between the object color in the image and the real object color. Most natural underwater images are blue-green due to color-selective attenuation, so the color is monotonous and the color richness is not ideal (as shown in Figures 2(a)–2(c)). After enhancement-algorithm processing, the underwater image can generally eliminate the visible color attenuation, and the color richness is greatly improved, as shown in Figures 2(d)–2(f); but the color fidelity of the enhanced image is questionable: the color of the fish in Figures 2(d) and 2(e) is not reasonable, and the artificial facilities in Figure 2(f) are obviously different from what we know. That is, Figures 2(a)–2(c) have high color fidelity (because they are real natural images; although the color of objects in them differs from that of real objects, the pixel color is reasonable) but very low color richness, while Figures 2(d)–2(f) have low color fidelity and high color richness. In short, the underwater absorption and scattering characteristics cause image color distortion (where the type of distortion is the large difference between the object color
Figure 1: (a) Images that do not need to be enhanced. (b) Images that need to be enhanced.
in the image and the color of the real object); the image tends to be blue-green, and the color richness of the image is reduced. Overemphasis on color richness can also result in color distortion (in this case, the distortion refers to unreasonable colors in the image), which affects the viewing effect of the image and its subsequent use.
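The linear-combination principle behind UCIQE/UIQM/CCF can be made concrete with a minimal UCIQE-style sketch. The coefficients c1, c2, c3 are the commonly cited UCIQE weights, but the chroma, luminance-contrast, and saturation computations below are rough sRGB approximations for illustration, not the exact pipeline of [12]:

```python
import numpy as np

def uciqe_style(img_rgb, c1=0.4680, c2=0.2745, c3=0.2576):
    """UCIQE-style linear combination: std of chroma + luminance
    contrast + mean saturation (rough approximation, for illustration)."""
    img = img_rgb.astype(np.float64) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    lum = 0.299 * r + 0.587 * g + 0.114 * b            # luminance proxy
    # chroma proxy: per-pixel distance from the gray axis
    chroma = np.sqrt((r - lum) ** 2 + (g - lum) ** 2 + (b - lum) ** 2)
    sigma_c = chroma.std()                             # std of chroma
    # luminance contrast: top 1% quantile minus bottom 1% quantile
    con_l = np.quantile(lum, 0.99) - np.quantile(lum, 0.01)
    # HSV-style saturation
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-12), 0.0)
    mu_s = sat.mean()
    return c1 * sigma_c + c2 * con_l + c3 * mu_s
```

Trained weights of this kind are exactly what limits generalization to image samples outside the regression set, as discussed above.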
In view of the shortcomings of existing metrics, this paper proposes a no-reference image quality predictor, NIPQ. NIPQ is designed with a multistage framework. The first stage focuses on the attention mechanism of the human visual system (HVS), which can also be interpreted as ROI. In the IQA field, ROIs are not fixed: some are task-driven, some are data-driven, and some, such as fish, corals, divers, or even artifacts of unknown shape, may be of interest to us. For broader applicability, we interpret the foreground area (nonocean background) as ROI. This paper extracts ROI based on background/foreground separation and focuses on the image quality of the ROI. The second stage considers the impact of color richness on image quality. As the (horizontal) distance between the camera and the object increases, the color of the object in the underwater image keeps approaching blue and green [1]. At the same time, as the optical sensor gets deeper, the object moves farther from the sunlight source, so the color of the object becomes darker and the contrast lower. It follows that if the ROI of a natural underwater image has good color richness, its image quality will be significantly better than that of a low-color-richness image. The third stage considers the fidelity of the color. As mentioned earlier, if NR IQA overemphasizes color richness, it will cause the enhanced underwater image to become oversaturated, which is also a form of color distortion. Inspired by the underwater image formation model, we distinguish the water types (yellow water, green water, and blue water) by the ocean background area of the image and estimate the reasonable range of pixel intensities of the ROI in the enhanced underwater image from the perspective of pixels. In this stage, the difference between the reasonable range of pixel intensity and the actual ROI pixel intensity of the enhanced image is used to represent the rationality of the enhanced image, that is, color fidelity. Finally, in the fourth stage, color richness and color fidelity are systematically integrated for quality prediction.
In order to measure the performance of NIPQ, an underwater optical image quality database (UOQ) is established. The database contains typical underwater images and their mean opinion scores (MOS). Based on a comprehensive analysis of all experimental results, the contributions of the NIPQ proposed in this paper are summarized as follows:
(a) It is a kind of NR IQA inspired by underwater imaging characteristics. By considering the color attenuation of images in different water bodies, the color fidelity and color richness metrics are proposed.
Figure 2: (a), (b), and (c) are original images; (d), (e), and (f) are images after an enhancement algorithm. Although the color attenuation is eliminated visually, the color is still distorted, and it is oversaturated.
(b) By adopting an ROI extraction method suitable for the underwater IQA field, ROI and IQA are effectively combined, owing to the block strategy in the ROI extraction method.
(c) It is superior to many commonly used IQA metrics, can effectively evaluate the performance of image enhancement algorithms, and can be used for quality supervision.
(d) We propose an NR IQA-based underwater smart image display module, which embodies the role of our IQA in application.
We arrange the remainder of this paper as follows. Section 2 describes the NR IQA-based underwater smart image display module, which is the application background of NR IQA. Section 3 describes our NIPQ metric in detail. Section 4 describes the establishment of our database UOQ for evaluating IQA performance, which consists of underwater optical images and their enhanced images. In Section 5, performance comparisons of the NIPQ metric with selected existing NR IQA methods are performed. We conclude this paper in Section 6.
2 NR IQA-Based Underwater Smart Image Display Module
Most of the underwater images captured by optical sensors have practical applications. For underwater images with severe color distortion, image enhancement is often needed before the terminal display. However, not all underwater images need to be enhanced; we believe that whether an image needs to be enhanced depends on the visibility of the underwater objects. Besides, because no image enhancement algorithm can achieve good results in all scenes, NR IQA can be used as a guide for image enhancement, so that the system can automatically select a more appropriate image enhancement algorithm in real time. From the application point of view, the framework of the NR IQA-based display module is shown in Figure 3. The traditional image display module only provides a single image enhancement scheme or displays the image directly, which cannot flexibly cope with different water environments. The display module proposed in this paper builds various image enhancement algorithms into the Image Enhancement Algorithm Database, and the system can choose a more appropriate enhancement algorithm according to the results of NR IQA. Firstly, the input underwater image is preassessed, and a natural image with little color distortion is displayed directly. For a natural image with severe color distortion, the default enhancement algorithm in the Image Enhancement Algorithm Database is used to enhance the image. The Selector then automatically determines, according to the results of NR IQA, whether to enable an alternative image enhancement scheme and which alternative scheme to choose.
According to the above analysis of the NR IQA-based underwater smart image display module and the characteristics of underwater images discussed in Section 1, this paper uses the color richness of ROI and the color fidelity of ROI to estimate image quality. In the proposed display module, the color richness of ROI is used as the metric of pre-NR IQA, and the NIPQ, which combines ROI color richness and color fidelity, is used as the metric of NR IQA.
The ROI extraction method is based on background/foreground separation; the block strategy in the extraction method helps ROI and IQA combine better. The ROI extraction method is introduced in detail in Section 3.1. The color richness represents the distribution of image color in a statistical sense, described by the spatial characteristics of the image in CIE XYZ space, and is detailed in Section 3.2. The color fidelity is based on the underwater image formation model in the pixel sense and describes whether the pixel intensity is within a reasonable range; it is introduced in detail in Section 3.3.
3 Proposed NIPQ Metric
3.1 ROI Extraction Based on Background/Foreground

Considering that the final receiver of the display module of ROV is often human, it is particularly important that IQA reflects the perception of human eyes well. The mechanism of human visual attention is an important feature of HVS: it enables the brain to quickly understand the overall information of the image and obtain the regions that need attention, and then the brain begins to focus on the target information and suppress other background information. Therefore, the human eye is usually sensitive to damage in the area of concern. At the same time, compared with the ocean background, a high-quality ROI has better practical significance and value. Therefore, it is necessary to introduce ROI into image quality assessment.
Researchers usually obtain ROI by saliency detection or object detection [15, 16]. Different from images in the air, the contrast of most underwater images is low, and traditional in-air saliency detection methods are not applicable underwater. At present, there is no robust saliency detection algorithm in the underwater image field. Some researchers combine image enhancement with saliency detection [17]; others combine fish localization and saliency detection [18]. In IQA, the purpose of the metric is to evaluate the enhancement algorithm, and the ROI of an underwater image is not always one or several fixed categories of targets with predictable shapes; therefore, the above methods are not applicable. Considering that underwater background features are easier to recognize than the target, this paper extracts ROI based on background/foreground, and the process is shown in Figure 4. In order to better combine ROI and IQA in the subsequent steps, the preprocessed underwater image is divided into m × n image blocks (we call this the block strategy).
Then we map the boundary connectivity BndCon(p_i) (definition in [19]) of block region p_i by (1) and obtain the background region probability w_{bg_i}:

w_{bg_i} = 1 - \exp\left( -\frac{\mathrm{BndCon}^2(p_i)}{2\sigma_{\mathrm{BndCon}}^2} \right),  (1)

where \sigma_{\mathrm{BndCon}}^2 = 1. The background blocks, the target blocks, and the uncertain blocks are initially divided by using w_{bg_i},
threshold_bg and threshold_roi. Then the color feature and spatial position feature of each block are used to correct the uncertain blocks, helping to judge the background probability of an uncertain block, which is expressed mathematically as shown in the following equations:

W_{BG}(p) =
\begin{cases}
1, & w_{bg_i} > threshold_{bg}, \\
1 - \sum_{i=1}^{N} d_{app}(p, p_i)\, w_{spa}(p, p_i)\, w_{bg_i}, & threshold_{roi} < w_{bg_i} < threshold_{bg}, \\
0, & w_{bg_i} < threshold_{roi},
\end{cases}  (2)

w_{spa}(p, p_i) = \exp\left( -\frac{d_{spa}^2(p, p_i)}{2\sigma_{spa}^2} \right),  (3)
Figure 4: ROI extraction process. (1) Divide the image into blocks and estimate the background probability of each block. (2) Preliminarily divide the image into background blocks, target blocks, and uncertain blocks. (3) Estimate the background probability of uncertain blocks by color and spatial position features. (4) Obtain the final ROI by binarization.
Figure 3: The framework of the NR IQA-based underwater smart display module. The input image is preassessed (pre-NR IQA); images above the enhancement threshold are displayed directly, while images below it are enhanced using the image enhancement algorithm database, whose selector is guided by NR IQA, before display.
where d_app(p, q) is the color similarity between blocks p and q, calculated as the Euclidean distance between the average colors of blocks p and q; d_spa(p, q) is the Euclidean distance between blocks p and q; and w_spa(p, q) is obtained after mapping according to (3), with \sigma_{spa}^2 = 0.25. Finally, we use the method of maximum variance between classes (Otsu's method) to get the final ROI.
3.2 Color Richness

With the aggravation of color attenuation, the colors of a natural underwater image become fewer and fewer, and the visibility of the object becomes worse. Therefore, the color richness of the ROI is a simple and fast metric for measuring whether the color distortion of a natural underwater image is serious, which makes it suitable for the evaluation of image quality.
In this paper, the richness of color is measured by the spatial characteristics of color in the CIE XYZ color space. The color richness should not only include color diversity but also consider the lightness distribution, so the XYZ color space is a good choice: CIE XYZ can represent all colors, and the Y parameter is a measurement of color lightness. According to the XYZ color space distributions of the two images shown in Figure 5, a wide distribution of image color in each of the three dimensions X, Y, and Z separately does not mean that the color richness is good; that is because the three components X, Y, and Z have a certain correlation. So the spatial characteristics of color can better represent the distribution of color. According to (4), the image color divergence in XYZ color space is defined to determine the color richness of the image:
C_d = \sum_{mn} \mathrm{dis}(P_{mn,\min}, P_{mn,\max}) \times \max_{i,j} \big( \mathrm{dis}(P_{mn}(i,j), [P_{mn,\min}, P_{mn,\max}]) \big) \times \frac{1}{2},  (4)

where dis represents the shortest distance between two points or between a point and a line segment; mn ranges over the X-Y, Y-Z, and X-Z sections; and P_{mn,min} and P_{mn,max} represent the points closest to and farthest from the origin, respectively.
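Under our reading of equation (4), each summand is a triangle-style area in one projection plane: the base is the segment between the points nearest to and farthest from the origin, and the height is the largest point-to-segment distance. A sketch of that reading:

```python
import numpy as np

def point_to_segment(pts, a, b):
    """Shortest distance from each 2-D point in pts to segment a-b."""
    ab = b - a
    denom = np.dot(ab, ab)
    if denom == 0:
        return np.linalg.norm(pts - a, axis=1)
    t = np.clip((pts - a) @ ab / denom, 0.0, 1.0)
    proj = a + t[:, None] * ab          # nearest point on the segment
    return np.linalg.norm(pts - proj, axis=1)

def divergence_2d(pts):
    """One summand of Eq. (4): base length x max height x 1/2."""
    r = np.linalg.norm(pts, axis=1)
    p_min, p_max = pts[np.argmin(r)], pts[np.argmax(r)]
    base = np.linalg.norm(p_max - p_min)
    height = point_to_segment(pts, p_min, p_max).max()
    return 0.5 * base * height

def color_divergence(xyz):
    """Eq. (4): sum the 2-D divergence over X-Y, Y-Z, X-Z sections."""
    xyz = xyz.reshape(-1, 3)
    return sum(divergence_2d(xyz[:, idx]) for idx in [(0, 1), (1, 2), (0, 2)])
```

Colors collapsed onto a single line through XYZ space (a monotone blue-green cast, for instance) score near zero, while colors spread across the sections score high, matching the intent of the metric.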
3.3 Color Fidelity

As mentioned in Section 1, the enhanced image may be oversaturated/pseudobright (as shown in Figure 2). If too much attention is paid to color richness, the color of the ROI in the image will deviate from the color of real objects. Therefore, we should consider not only the color richness of the enhanced underwater image but also the color fidelity of the ROI, that is, whether the pixel intensities are within a reasonable range.

It is necessary to understand the formation and degradation of underwater images if we want to estimate a reasonable range of pixel intensities. The formation of the underwater image is dominated by the following factors [1, 20, 21]:
I_c = J_c e^{-\beta_c^D \times z} + B_c^\infty \left( 1 - e^{-\beta_c^B \times z} \right),  (5)
where C ∈ {R, G, B} is the color channel; I_c is the underwater image captured by the camera; J_c e^{-\beta_c^D \times z} is the direct signal, recorded as D_c; B_c^\infty (1 - e^{-\beta_c^B \times z}) is the backscattered signal, recorded as B_c; z is the distance between the camera and the photographed object; B_c^\infty is the veiling light; and J_c is the unattenuated scene, that is, the RGB intensity of the surface captured by the sensor with spectral response S_c(λ) at distance z_0 (generally z_0 is regarded as 0):
J_c = \frac{1}{k_c} \int_{\lambda_1}^{\lambda_2} S_c(\lambda)\, \rho(\lambda)\, E(d, \lambda)\, d\lambda,  (6)
where k_c is the camera's scaling constant. \beta_c^D and \beta_c^B have a certain dependence on the spectrum through the distance z, reflectivity ρ(λ), ambient light E(d, λ), camera spectral response S_c(λ), scattering coefficient b(λ), and beam attenuation coefficient β(λ), as shown in (7) and (8), where z_0 and (z_0 + z) are the starting and ending points along the line of sight:
\beta_c^D = \frac{\ln\left[ \int_{\lambda_1}^{\lambda_2} S_c(\lambda)\rho(\lambda)E(d,\lambda)\, e^{-\beta(\lambda) z_0}\, d\lambda \Big/ \int_{\lambda_1}^{\lambda_2} S_c(\lambda)\rho(\lambda)E(d,\lambda)\, e^{-\beta(\lambda)(z_0+z)}\, d\lambda \right]}{z},  (7)
\beta_c^B = -\frac{\ln\left[ \int_{\lambda_1}^{\lambda_2} S_c(\lambda)\, B^\infty(\lambda)\, e^{-\beta(\lambda) z}\, d\lambda \Big/ \int_{\lambda_1}^{\lambda_2} S_c(\lambda)\, B^\infty(\lambda)\, d\lambda \right]}{z}.  (8)
So we can calculate the unattenuated scene J_c as

J_c = D_c\, e^{\beta_c^D \times z} = \left[ I_c - B_c^\infty \left( 1 - e^{-\beta_c^B \times z} \right) \right] \times e^{\beta_c^D \times z}.  (9)
We need to estimate the reasonable range of values [J'_{c,min}, J'_{c,max}] of the RGB intensity of each pixel in the foreground (that is, the ROI). The color fidelity metric defined by (10) is calculated from the out-of-range part of the
enhanced underwater image Ĵ_c. The process is shown in Figure 6.

Firstly, the veiling light B_c^\infty of the underwater image I_c is estimated. Backscatter increases exponentially with z and eventually saturates [1]; in other words, at infinity, I_c = B_c^\infty. Referring to [22], we assume an area without objects is visible in the image, in which the pixels' color is determined by the veiling light alone. Such areas are smooth and have no texture; this assumption often holds in the application scenarios of our IQA. First, the edge map of the image is generated; then a threshold value is set, and the largest connected pixel area is found. The veiling light is the average color of the pixels in these areas.
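The veiling-light step can be sketched as follows: build a simple gradient map, keep low-gradient (smooth, texture-free) pixels, take the largest connected component, and average its color. The gradient operator, the threshold, and the BFS flood fill below are our own simplifications of the procedure in [22]:

```python
import numpy as np
from collections import deque

def estimate_veiling_light(img, grad_thresh=0.02):
    """Estimate B_inf as the average color of the largest connected
    low-gradient (smooth, texture-free) region of the image."""
    gray = img.mean(axis=-1)
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1]))
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    smooth = (gx + gy) < grad_thresh        # edge-free pixels
    h, w = smooth.shape
    seen = np.zeros_like(smooth, dtype=bool)
    best = []
    for sy in range(h):
        for sx in range(w):
            if smooth[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                    # BFS flood fill of one component
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           smooth[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):   # keep the largest region
                    best = comp
    ys, xs = zip(*best)
    return img[list(ys), list(xs)].mean(axis=0)
```

On a frame whose upper half is featureless water, this returns the water color; textured regions (objects, seafloor) are rejected by the gradient test.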
Next, we estimate the type of water body in the underwater image I_c by the veiling light B_c^\infty. The reason for estimating the type of water is that the common notion that water attenuates red colors faster than blue/green only holds for oceanic water types [1]. We simplified the Jerlov water types [23] into blue water (Jerlov I–III), green water (Jerlov 1C–3C), and yellow water (Jerlov 5C–9C) and simulated the average RGB value of a perfect white surface's appearance under these three water types (data from [23], using a D65 light source, a Canon 5D Mark II camera, and ρ(λ) = 1). We calculate the Euclidean distance between the veiling light B_c^\infty and each average RGB value and estimate the water body type from these distances.
Then we calculate the reasonable intensity range [J'_{c,min}, J'_{c,max}] of each pixel after the enhancement of the underwater image. \beta_c^D varies most strongly with range z [1], so the most important factor in calculating the range, besides the water body type, is to estimate the distance z. Due to the limitations of real conditions, the distance z of the
Figure 5: Color distribution in CIE XYZ space.
object in the image cannot be obtained, so it is necessary to roughly estimate the possible range of the distance z. For the foreground, the distance z from the camera is approximately the same; for the background, the distance z from the camera tends to infinity. We assume that the distance z from the camera is the same at each part of the foreground and that white objects may be present. Therefore, a distance z that keeps the J_c of the foreground pixels in the three RGB channels not greater than 255 and not less than 0 is considered a possible distance. In order to simplify the calculation, the attenuation coefficient \beta_c^D of white in the three color channels C ∈ {R, G, B} is adopted for all colors (using ρ(λ) from the Macbeth ColorChecker).
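The distance search described above can be sketched as a scan over candidate distances, keeping those for which the recovered J_c (equation (9)) stays within [0, 255] for every foreground pixel and channel; the grid and coefficients here are illustrative:

```python
import numpy as np

def feasible_distances(I_fg, B_inf, beta_D, beta_B, z_grid):
    """Scan candidate distances z and keep those for which the scene
    recovered by Eq. (9) stays inside [0, 255] for all foreground
    pixels and channels (white-surface betas assumed for all colors)."""
    ok = []
    for z in z_grid:
        J = (I_fg - B_inf * (1.0 - np.exp(-beta_B * z))) * np.exp(beta_D * z)
        if (J >= 0).all() and (J <= 255).all():
            ok.append(z)
    return ok
```

Large candidate distances are rejected because undoing the attenuation at such a z would push some recovered intensities above 255; the surviving distances bound the range [J'_{c,min}, J'_{c,max}].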
Finally, the color fidelity defined by (10) is calculated:

C_f = \left[ 1 - \frac{Sum_{oor}/255}{Num_{oor} \times 3} \right]^2,  (10)

where Num_oor represents the number of pixels in the ROI block and Sum_oor represents the total pixel-intensity deviation outside the reasonable range.
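Equation (10) can be implemented directly, given a per-pixel, per-channel reasonable range; out-of-range intensities contribute their deviation to Sum_oor (reading Sum_oor as the summed deviation is our interpretation of the text):

```python
import numpy as np

def color_fidelity(J_hat, J_min, J_max):
    """Eq. (10): penalize enhanced-image intensities outside the
    reasonable per-pixel range [J_min, J_max] (arrays of shape HxWx3)."""
    below = np.maximum(J_min - J_hat, 0.0)     # deviation under the range
    above = np.maximum(J_hat - J_max, 0.0)     # deviation over the range
    sum_oor = (below + above).sum()            # total out-of-range deviation
    num_pix = J_hat.shape[0] * J_hat.shape[1]  # pixels in the ROI block
    return (1.0 - (sum_oor / 255.0) / (num_pix * 3)) ** 2
```

An enhanced ROI entirely inside the range scores 1; the score falls toward 0 as oversaturation pushes pixels outside it.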
We make a qualitative analysis of the influence of this simplification on J'_{c,min} and J'_{c,max}. As shown in Figure 7(b), (7) is used to calculate the broadband (RGB) attenuation coefficient \beta_c^D (using ρ(λ) of the color blocks in the Macbeth ColorChecker, depth d = 1 m, distance z = 1 m) of seven common colors (red, orange, yellow, green, cyan, blue, and purple; Figure 7(a)) under all Jerlov water types. It can be seen that the \beta_c^D difference among colors is not large in the same scene. Figure 7(c) shows the influence of different camera types on \beta_c^D in the three types of water bodies; the influence of camera parameters on the attenuation coefficient \beta_c^D is not significant. The experimental results in [1] also support this view.
3.4 NIPQ Metric

Sections 3.1, 3.2, and 3.3 above introduced, respectively, the ROI extraction method, color richness in the statistical sense, and color fidelity in the pixel sense. In this paper, the color richness of the ROI and the color fidelity of the ROI are combined by a multiplication model to obtain our NIPQ. The common multiparameter underwater image evaluation models UIQM [11], UCIQE [12], and CCF [13] use linear weighting to measure the comprehensive quality of the image. We consider that if one submetric points to a very low value (indicating low quality), the subjective feeling of the whole image will be very poor regardless of the other metrics. Therefore, this paper uses the multiplication model to generate the overall underwater image quality assessment as follows:

CR = \overline{C_d(\mathrm{ROI})} \times C_f(\mathrm{ROI}),  (11)

where the overline represents normalization and CR represents the color quality of the ROI block, CR ∈ (0, 1); the larger the value, the higher the image quality.
The overall process of NIPQ is shown in Figure 8 and is divided into four steps. Firstly, the ROI of the original image (not enhanced) is extracted based on background/foreground. Then the color richness of the ROI of the enhanced underwater image is estimated. Next, the ocean background information is extracted from the original image, from which the water body type and the reasonable range of pixel intensity distribution are estimated; according to the estimated range, the ROI color fidelity of the enhanced underwater image is computed. Finally, the two metrics of color richness and color fidelity are integrated to obtain the comprehensive NIPQ metric for the whole underwater image.
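A sketch of the multiplication model of equation (11), assuming C_d is normalized to [0, 1] by clamped min-max scaling (the paper states only that C_d is normalized; the calibration bounds here are hypothetical):

```python
def minmax_normalize(x, lo, hi):
    """Clamp-and-scale C_d into [0, 1] using calibration bounds lo/hi
    (illustrative; the paper does not specify its normalization)."""
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def nipq(cd_roi, cf_roi, cd_lo=0.0, cd_hi=1.0):
    """Eq. (11): multiplicative fusion of normalized color richness and
    color fidelity; a near-zero submetric dominates the product."""
    return minmax_normalize(cd_roi, cd_lo, cd_hi) * cf_roi
```

The multiplicative form encodes the design choice discussed above: unlike a linear combination, one very poor submetric cannot be compensated by the other.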
4 UOQ Database
In order to better evaluate the performance of the NIPQ metric, we built an underwater optical image quality database, UOQ.

Image Selection. In order to fully consider various underwater scenes, we selected 36 typical underwater optical images with a size of 512 × 512. These images include blue water, green water, yellow water, dark light, single object, multiobject, simple and complex texture, serious
Figure 6: The estimation process of color fidelity: (1) veiling-light estimation; (2) determine the type of water body (blue, green, or yellow); (3) estimate the reasonable range of pixel intensity of the ROI after underwater image enhancement processing; (4) calculate the deviation beyond the reasonable range.
color distortion, and slight color distortion. Considering that there is no general ROI-related dataset in the field of underwater images, we label their foreground regions (ROI) pixel by pixel to support the reliability of the ROI used in this paper. We use five popular image enhancement algorithms (white balance [24], Fu's algorithm [25], multifusion [26], histogram equalization [27], and Retinex [28]) to process these 36 natural images; 180 enhanced images were obtained. Some images and their enhanced versions processed by the white balance algorithm [24] are shown in Figure 9.
Evaluation Methods and Evaluation Protocols. In this database, the single-stimulus evaluation method is used: volunteers watch only one image to be evaluated at a time, and each image appears only once in a round of evaluation. After each image is displayed, volunteers give a subjective quality score to the corresponding image. Underwater optical images usually have practical applications, so volunteers are not to be influenced by aesthetic factors in the process of subjective quality assessment; the evaluation protocols are shown in Table 1.
Choosing Volunteers. In order to avoid evaluation bias caused by prior knowledge, none of the volunteers had experience in image quality assessment. Considering the strong application background of underwater images, all selected volunteers are graduate students with relevant work experience in underwater acoustic communication, underwater detection, and so on.
All the obtained subjective scores are used to calculate the mean opinion scores (MOS). Denote by S_{ij} the subjective score given to image j by the i-th volunteer and by N_j the number of subjective scores obtained by image j; MOS is calculated as follows:

\mathrm{MOS}_j = \frac{1}{N_j} \sum_i S_{ij}.  (12)
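Equation (12) is a plain per-image average; a minimal sketch:

```python
def mos(scores_by_image):
    """Eq. (12): mean opinion score per image, from a mapping of
    image id -> list of subjective scores S_ij."""
    return {j: sum(s) / len(s) for j, s in scores_by_image.items()}
```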
We draw a histogram of the MOS of all images in the database, as shown in Figure 10. It can be seen that our images cover a wide range of quality, which is conducive to the design of IQA. There are many images with scores in the middle segment, because volunteers tend to avoid giving extreme scores. It can also be
Figure 8: Overall process of NIPQ. The ROI is extracted from input image 1 (the original image); the relevant image information of the ROI is extracted from input image 2 (the output of the image enhancement algorithm); and the color richness and color fidelity of the ROI are combined into the quality score Q.
Figure 7: Qualitative analysis of the influence of simplification on \beta_c^D. (a) Seven common colors (red, orange, yellow, green, cyan, blue, and purple). (b) \beta_c^D of the different colors in different water bodies (Jerlov I, IA, IB, II, III, 1C, 3C, 5C, 7C, and 9C). (c) \beta_c^D of the different colors under different cameras. We simplify the types of water into three; it can be seen from (b) that the calculation error caused by the simplification is larger for yellow water than for blue and green water. Yellow water bodies are not common in underwater images, so the simplification of water body types is applicable in most occasions.
Scientific Programming 9
seen that lower-quality images are slightly more numerous than higher-quality underwater images. This is because most underwater images have a blue-green cast and poor contrast, and sometimes the quality of the enhanced image is still not ideal. In practical applications, more robust enhancement algorithms will be built into the underwater image enhancement algorithm database of the display module mentioned in Section 2.
5. Experiment
In combination with the UOQ database, we mainly evaluate the performance of IQA through five criteria. The prediction monotonicity of IQA is measured by the Spearman rank-order correlation coefficient (SROCC) and Kendall's rank-order correlation coefficient (KROCC). The prediction accuracy of IQA is measured by the Pearson linear correlation
Table 1: Evaluation protocols.

Score  Comprehensive feelings
5      The subjective feeling is excellent: foreground information is recognizable and no color distortion is felt.
4      The subjective feeling is good: the foreground information is visible and recognizable; there is a small amount of perceptual distortion, but it does not affect the extraction of important information.
3      The subjective feeling is average: part of the information in the foreground is damaged, and a small amount of important information is lost due to distortion.
2      The subjective perception is poor: only the general outline of the foreground content can be distinguished; the distortion leads to the loss of some important information.
1      The subjective feeling is very poor: it is difficult to recognize the foreground content, and it is almost impossible to extract any effective information from the image.
Figure 10: (a) Frequency histogram of the MOS and (b) the MOS of all images.
Figure 9: Underwater images processed by the white balance algorithm [24]. (a)–(f) are the original images; their MOS are 2.40, 1.70, 3.00, 1.30, 2.55, and 4.05, respectively. (g)–(l) are the enhanced images; their MOS are 1.05, 1.15, 2.05, 2.80, 4.55, and 1.15, respectively.
coefficient (PLCC). The root mean square error (RMSE) is used to measure the prediction consistency of IQA, and the mean absolute error (MAE) is also used to evaluate the performance of IQA. High values (close to 1) of SROCC, PLCC, and KROCC and low values (close to 0) of RMSE and MAE indicate that an IQA metric correlates better with the subjective scores.
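A minimal sketch of these five criteria using SciPy; the sample scores below are invented, and the regression step the paper applies before drawing the scatter plots is omitted here for brevity:

```python
import numpy as np
from scipy import stats

def iqa_criteria(objective, mos):
    """Compute SROCC, KROCC, PLCC, RMSE, and MAE between an IQA
    metric's objective scores and the subjective MOS."""
    q = np.asarray(objective, dtype=float)
    s = np.asarray(mos, dtype=float)
    return {
        "SROCC": stats.spearmanr(q, s)[0],   # prediction monotonicity
        "KROCC": stats.kendalltau(q, s)[0],  # prediction monotonicity
        "PLCC": stats.pearsonr(q, s)[0],     # prediction accuracy
        "RMSE": float(np.sqrt(np.mean((q - s) ** 2))),  # prediction consistency
        "MAE": float(np.mean(np.abs(q - s))),
    }

# Invented example: a metric that ranks five images in the same order as MOS.
r = iqa_criteria([0.1, 0.4, 0.5, 0.7, 0.9], [1.0, 2.5, 3.0, 4.0, 4.8])
print(round(r["SROCC"], 3))  # 1.0
```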
The selected IQA metrics for performance comparison include the following:

(1) the popular underwater no-reference metrics UIQM [11], UCIQE [12], and CCF [13];

(2) the popular no-reference metrics for images in the air, BRISQUE [6] and LPSI [7];

(3) common color metrics for underwater images, UICM [11] and the variance of chromaticity (Var_Chr) [29].
For BRISQUE, a low score means high quality; for the other metrics, the higher the score, the better the quality.
5.1. Effect Analysis of Introducing ROI into IQA. In order to observe the influence of introducing ROI on the quality evaluation of underwater images, we combine ROI with the popular underwater no-reference IQA metrics. The block strategy mentioned in Section 3.1 is necessary because it helps us combine ROI with IQA better. According to the block fusion strategy represented by (13), we combine the image blocks with IQA and obtain a comprehensive quality score. We can then observe the change in correlation between the objective metrics and MOS before and after combining with ROI:
\mathrm{ROIQ} = \frac{\sum_{m \times n} W_R(i)\, Q(i)}{\sum_{m \times n} W_R(i)}.   (13)
W_R(i) represents the weight of the i-th image block, and Q(i) represents the objective quality score of the block under the given metric; W_R(i) is either 0 or 1. That is to say, the difference between an IQA metric before and after combination with ROI is that the original metric calculates the quality of the whole image, while the metric combined with ROI calculates only the image quality of the ROI. The results are shown in the first six lines of Table 2. They show that the correlation between a metric combined with ROI and MOS is higher than that of the original metric, which indicates that the combination of ROI and IQA is helpful for IQA.
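The block fusion in (13) is a masked average over the m × n blocks; a minimal NumPy sketch with a toy 2 × 3 block grid and a binary ROI mask (all values invented):

```python
import numpy as np

def roi_weighted_score(block_scores, roi_mask):
    """ROIQ = sum_i W_R(i) * Q(i) / sum_i W_R(i) over the m x n blocks,
    where W_R(i) is 1 for ROI blocks and 0 otherwise, as in (13)."""
    w = np.asarray(roi_mask, dtype=float)
    q = np.asarray(block_scores, dtype=float)
    return float((w * q).sum() / w.sum())

q = np.array([[0.75, 0.2, 0.1],   # per-block objective scores Q(i)
              [0.25, 0.1, 0.1]])
w = np.array([[1, 0, 0],          # ROI mask W_R(i): two foreground blocks
              [1, 0, 0]])
print(roi_weighted_score(q, w))  # 0.5
```

Only the two ROI blocks contribute, so the background blocks cannot mask poor foreground quality.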
5.2. Performance Analysis of the Proposed NIPQ. We calculated the correlation between the various metrics and MOS in the database, and the results are shown in Table 2. It can be seen that the correlation between the NIPQ metric and the subjective scores is significantly higher than that of the other metrics.
In order to compare the various NR IQA metrics intuitively, scatter diagrams between MOS and the estimated objective scores are drawn for the six selected NR IQA metrics and the NIPQ proposed in this paper, as shown in Figure 11. On this basis, the experimental data were regressed by the least squares method, and the regression lines are also drawn. The better the fit of the scatter points, the better the correlation between the metric and MOS. The regression lines show that the correlation between NIPQ and MOS is obviously better than that of the other metrics, which validates the results in Table 2. It can be seen that LPSI and BRISQUE, metrics designed for images in the air, are not applicable to underwater images. As a whole, UIQM, UCIQE, and CCF are specially designed for underwater images, and their performance is better than that of the metrics for images in the air. The performance of UICM, a submetric indicating chromaticity in UIQM, is slightly worse than that of UIQM. Compared with the scatter plots of the other NR IQA metrics, our NIPQ shows the best correlation with MOS. Although there are still some aberrant data points, generally speaking, the proposed NIPQ is robust to the variety of typical, representative underwater images contained in the database. Further analysis shows that some of these aberrant points are caused by the fact that the submetric C_f of the original image (without enhancement) is directly taken as 1 in our experiment.
As shown in Figures 12 and 13, there are two natural underwater images and their enhanced images in the UOQ database. Table 3 shows the corresponding MOS and objective scores of these images, and Figure 14 shows the color distribution of their ROI. From these images, the ROI of the original image in Figure 12 is dark, and that in Figure 13 is blue. The image enhanced by the histogram algorithm is reddish; the color distribution of its ROI is wider, but the ROI color is obviously oversaturated/pseudo-vivid. There is no significant difference between the image processed by the Retinex algorithm and the original image. The color of the image processed by Fu's algorithm is not vibrant. For Figure 12, the overall difference between the white balance and multifusion algorithms is small; the local graph (Figure 15) shows that the brightness distribution of the image processed by the multifusion algorithm is uneven and slightly oversaturated, while the image enhanced by the white balance algorithm has a better visual effect. For Figure 13, the image processed by the white balance algorithm is too dark and has
Table 2: Correlation between MOS and the quality scores of the objective evaluation metrics before and after integration with ROI.

Metric      PLCC     SROCC    KROCC    MAE     RMSE
UIQM        -0.173   -0.199   -0.132   0.751   0.903
ROI_UIQM    0.277    0.280    0.196    0.739   0.897
UCIQE       0.294    0.207    0.145    0.707   0.868
ROI_UCIQE   0.374    0.274    0.192    0.683   0.840
CCF         0.069    0.075    0.050    0.791   0.946
ROI_CCF     0.393    0.358    0.254    0.722   0.872
Var_Chr     0.158    0.180    0.125    0.674   0.841
UICM        -0.283   -0.338   -0.225   0.714   0.854
BRISQUE     -0.309   -0.265   -0.185   0.747   0.902
LPSI        0.323    0.245    0.169    0.734   0.898
C_d         0.481    0.465    0.335    0.635   0.789
C_f         0.478    0.432    0.303    0.658   0.806
Proposed    0.641    0.623    0.452    0.576   0.713
Figure 11: Scatter diagrams between MOS and the estimated objective scores: (a) LPSI, (b) BRISQUE, (c) UICM, (d) UIQM, (e) UCIQE, (f) CCF, (g) C_f, (h) C_d, and (i) proposed.
Figure 12: (a) Ori, (b) multifusion, (c) Fu, (d) white balance, (e) histogram equalization, and (f) Retinex.
Figure 13: (a) Ori, (b) multifusion, (c) Fu, (d) white balance, (e) histogram equalization, and (f) Retinex.
Table 3: The corresponding MOS and objective scores of Figures 12 and 13.

Figure 12
Metric       Ori      Multifusion  Fu       White balance  Histogram equalization  Retinex
MOS          2.550    4.500        4.100    4.550          3.050                   2.500
CCF          13.265   22.292       23.887   16.437         30.794                  13.379
ROI_CCF      20.821   33.389       31.274   29.108         26.409                  21.604
UCIQE        0.554    0.664        0.591    0.652          0.684                   0.569
ROI_UCIQE    0.560    0.647        0.573    0.627          0.580                   0.575
UIQM         3.983    4.543        4.850    3.969          4.780                   4.085
ROI_UIQM     5.585    5.589        5.495    5.672          5.055                   5.620
BRISQUE      16.303   26.934       31.708   17.824         36.762                  16.744
LPSI         0.926    0.901        0.910    0.923          0.912                   0.926
C_d          0.243    0.715        0.578    0.698          0.846                   0.324
C_f          1.000    0.802        0.637    0.827          0.464                   0.994
Proposed     0.243    0.574        0.368    0.577          0.392                   0.322

Figure 13
Metric       Ori      Multifusion  Fu       White balance  Histogram equalization  Retinex
MOS          3.200    3.800        1.550    2.150          2.700                   3.250
CCF          31.443   31.465       37.069   18.688         36.928                  29.029
ROI_CCF      22.582   35.995       32.468   13.265         38.366                  23.097
UCIQE        0.519    0.628        0.623    0.476          0.693                   0.541
ROI_UCIQE    0.541    0.620        0.588    0.447          0.676                   0.564
UIQM         1.504    3.337        4.325    3.840          4.100                   2.182
ROI_UIQM     6.658    5.235        5.249    5.349          4.789                   5.160
BRISQUE      4.330    14.749       17.319   4.153          20.596                  4.441
LPSI         0.923    0.887        0.911    0.904          0.906                   0.926
C_d          0.475    0.730        0.317    0.029          0.640                   0.401
C_f          1.000    0.847        0.632    0.581          0.601                   0.972
Proposed     0.475    0.619        0.200    0.017          0.384                   0.390
Figure 14: ROI color distribution (in CIE XYZ space) of the images in Figures 12 and 13.
Figure 15: Local graphs of Figures 12(b) and 12(d).
a single color, while the image processed by the multifusion algorithm has a better visual effect.
Table 3 shows that the selected IQA metrics do not perform well in the quality assessment of images in the UOQ database. They generally give higher objective scores to images enhanced by the histogram equalization algorithm because the color distribution of those images is wider. This is a disadvantage of statistics-based quality evaluation: color fidelity is not taken into account. It can also be seen that if the performance of the original metric is not ideal, combining it with ROI will not necessarily improve the situation, because this is a limitation of the original metric itself.
6. Conclusion
Because of the characteristics of the water medium, color has become one of the important concerns in underwater image quality assessment. Color contains important information: severe color-selective attenuation or pseudo-vividness can make it difficult to identify foreground content and to extract key, effective information from images. In this paper, a new underwater image evaluation metric, NIPQ, is proposed based on underwater environment characteristics and the HVS. The NIPQ is designed in a four-stage framework. The first stage focuses on the attention mechanism of the HVS. The second stage considers the influence of color richness in a statistical sense. The third stage is inspired by underwater image formation models and considers color fidelity from a pixel perspective. Finally, in the fourth stage, color richness and color fidelity are systematically integrated for real-time quality monitoring. At the same time, the relevant underwater image database UOQ with MOS is built to measure IQA performance. Experimental results show that, compared with other commonly used underwater metrics, the proposed NIPQ has a better correlation with MOS, which indicates better performance.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61571377, 61771412, and 61871336) and the Fundamental Research Funds for the Central Universities (20720180068).
References

[1] D. Akkaynak and T. Treibitz, "A revised underwater image formation model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6723–6732, Salt Lake City, UT, USA, March 2018.
[2] S. Bazeille, I. Quidu, L. Jaulin, and J.-P. Malkasse, "Automatic underwater image pre-processing," CMM'06, Brest, France, 2006.
[3] I. Avcibas, B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," Journal of Electronic Imaging, vol. 11, no. 2, pp. 206–223, 2002.
[4] D.-Y. Tsai, Y. Lee, and E. Matsuyama, "Information entropy measure for evaluation of image quality," Journal of Digital Imaging, vol. 21, no. 3, pp. 338–347, 2008.
[5] Y. Y. Fu, "Color Image Quality Measures and Retrieval," New Jersey Institute of Technology, Newark, NJ, USA, 2006.
[6] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012.
[7] Q. Wu, Z. Wang, and H. Li, "A highly efficient method for blind image quality assessment," in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 339–343, IEEE, Quebec City, Canada, September 2015.
[8] J. Kim, A.-D. Nguyen, and S. Lee, "Deep CNN-based blind image quality predictor," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 11–24, 2018.
[9] S. Bosse, D. Maniry, K.-R. Müller, T. Wiegand, and W. Samek, "Deep neural networks for no-reference and full-reference image quality assessment," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206–219, 2017.
[10] X. Liu, J. Van De Weijer, and A. D. Bagdanov, "RankIQA: learning from rankings for no-reference image quality assessment," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1040–1049, Venice, Italy, October 2017.
[11] K. Panetta, C. Gao, and S. Agaian, "Human-visual-system-inspired underwater image quality measures," IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541–551, 2016.
[12] M. Yang and A. Sowmya, "An underwater color image quality evaluation metric," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 6062–6071, 2015.
[13] Y. Wang, N. Li, Z. Li et al., "An imaging-inspired no-reference underwater color image quality assessment metric," Computers & Electrical Engineering, vol. 70, pp. 904–913, 2018.
[14] S. Kastner and L. G. Ungerleider, "Mechanisms of visual attention in the human cortex," Annual Review of Neuroscience, vol. 23, no. 1, pp. 315–341, 2000.
[15] L. Zhang, J. Chen, and B. Qiu, "Region-of-interest coding based on saliency detection and directional wavelet for remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 1, pp. 23–27, 2016.
[16] C. Zhu, K. Huang, and G. Li, "An innovative saliency guided ROI selection model for panoramic images compression," in Proceedings of the 2018 Data Compression Conference, p. 436, IEEE, Snowbird, UT, USA, March 2018.
[17] Z. Cui, J. Wu, H. Yu, Y. Zhou, and L. Liang, "Underwater image saliency detection based on improved histogram equalization," in Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, pp. 157–165, Springer, Singapore, 2019.
[18] L. Xiu, H. Jing, S. Min, and Z. Yang, "Saliency segmentation and foreground extraction of underwater image based on localization," in Proceedings of OCEANS 2016, Shanghai, China, 2016.
[19] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2814–2821, Columbus, OH, USA, June 2014.
[20] D. Akkaynak and T. Treibitz, "Sea-thru: a method for removing water from underwater images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1682–1691, Long Beach, CA, USA, April 2019.
[21] D. Akkaynak, T. Treibitz, T. Shlesinger, Y. Loya, R. Tamir, and D. Iluz, "What is the space of attenuation coefficients in underwater computer vision?" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4931–4940, Honolulu, HI, USA, 2017.
[22] D. Berman, T. Treibitz, and S. Avidan, "Diving into haze-lines: color restoration of underwater images," in Proceedings of the British Machine Vision Conference (BMVC), vol. 1, London, UK, September 2017.
[23] M. G. Solonenko and C. D. Mobley, "Inherent optical properties of Jerlov water types," Applied Optics, vol. 54, no. 17, pp. 5392–5401, 2015.
[24] E. Y. Lam, "Combining gray world and retinex theory for automatic white balance in digital photography," in Proceedings of the Ninth International Symposium on Consumer Electronics, pp. 134–139, IEEE, Melbourne, Australia, July 2005.
[25] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, "A retinex-based enhancing approach for single underwater image," in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 4572–4576, IEEE, Paris, France, October 2014.
[26] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, "Enhancing underwater images and videos by fusion," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 81–88, IEEE, Providence, RI, USA, June 2012.
[27] B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68–75, 1999.
[28] E. H. Land, "The retinex theory of color vision," Scientific American, vol. 237, no. 6, pp. 108–128, 1977.
[29] D. Hasler and S. E. Suesstrunk, "Measuring colorfulness in natural images," in Human Vision and Electronic Imaging VIII, vol. 5007, pp. 87–95, International Society for Optics and Photonics, Bellingham, WA, USA, 2003.
save workforce and resources, a reliable underwater objective image quality metric needs to be designed. In underwater image processing scenarios, an ideal reference image is usually not available, so no-reference (NR) IQA is the best choice for evaluating underwater image quality.
Some generic image quality measures, such as histogram analysis [2], the variance [3], image entropy [4], and the color image quality measure (CIQI) [5], have been developed. The popular BRISQUE [6] and LPSI [7] have also been proposed; they summarize the statistical rules of natural images and calculate the degree of deviation of distorted images from them. However, these objective measures are not designed specifically for underwater images: they fail to consider the strong absorption and scattering effects of the water, so they are not applicable to underwater images. There are also NR IQA methods based on deep learning, such as DIQA [8], Deep IQA [9], and RankIQA [10], which perform well on images in the air. The deep learning approach has a strong learning ability and can automatically extract image features. However, it requires a large amount of data (usually more than 5000 images) for training, and the acquisition of subjective scores (as the ground truth during training) is expensive and time-consuming. Currently, there is no relevant dataset available for training in the underwater image field, so it is temporarily impossible to design deep-learning-based NR IQA for underwater images.
Some work [11] pointed out that the overall quality of an image can be effectively obtained by combinations of image attribute measures. At present, the most commonly used underwater image quality measures, UCIQE [12], UIQM [11], and CCF [13], are designed based on this principle. The UCIQE [12] proposed by Yang Miao is a linear combination of the standard deviation of chroma, the contrast of luminance, and the average of saturation. Karen Panetta's UIQM [11] is a linear combination of colorfulness, sharpness, and contrast. The CCF proposed by Yan Wang et al. [13] starts from an imaging analysis of underwater absorption and scattering characteristics, calculates a fog density index, and evaluates underwater image quality by combining the color index, contrast index, and fog density index. They all determine the weighting coefficients by multivariate linear regression from a training image set. However, regardless of the performance on the training set, the generalization ability of these methods is largely limited by the training image samples. At the same time, the attention mechanism of the human visual system (HVS) [14] has not received enough attention in underwater image evaluation. In underwater scenes, the image quality of the target object has higher research value and practical significance than that of the ocean background, which does not belong to the region of interest (ROI). Moreover, the three commonly used underwater metrics measure image quality from the perspective of image statistics, and their robustness is not high; this results in an overemphasis on color richness. This paper holds that, in addition to color fidelity in the statistical sense, the color fidelity of objects is also very important from the perspective of pixels. It is worth noting that color fidelity here refers to whether the image color is reasonable, not the difference between the object color in the image and the real object color. Most natural underwater images are blue-green due to color-selective attenuation; their color is monotonous, and their color richness is not ideal (as shown in Figures 2(a)–2(c)). After enhancement algorithm processing, the underwater image can generally eliminate the color attenuation visually, and the color richness is greatly improved, as shown in Figures 2(d)–2(f); but the color fidelity of the enhanced image is questionable: the color of the fish in Figures 2(d) and 2(e) is not reasonable, and the artificial facilities in Figure 2(f) are obviously different from what we know. That is, Figures 2(a)–2(c) have high color fidelity (because they are real natural images; although the color of objects in them differs from that of real objects, the pixel colors are reasonable) but
the color richness is very low; Figures 2(d)–2(f) have low color fidelity but high color richness. That is, the underwater absorption and scattering characteristics cause image color distortion (where the type of distortion is the large difference between the object color
Figure 1: (a) Images that do not need to be enhanced. (b) Images that need to be enhanced.
in the image and the color of the real object); the image tends to be blue-green, and the color richness of the image is reduced. Overemphasis on color richness can also result in color distortion (in this case, the kind of distortion refers to unreasonable color in the image), which affects the viewing effect of the image and its subsequent use.
In view of the shortcomings of existing metrics, this paper proposes a no-reference image quality predictor, NIPQ. NIPQ is designed with a four-stage framework. The first stage focuses on the attention mechanism of the human visual system (HVS), which can also be interpreted as ROI extraction. In the IQA field, our ROIs are not fixed: some are task-driven, some are data-driven, and some, such as fish, corals, divers, or even artifacts of unknown shape, may be of interest to us. To cover more applications, we interpret the foreground area (non-ocean background) as the ROI. This paper extracts the ROI based on background/foreground separation and focuses on the image quality of the ROI. The second stage considers the impact of color richness on image quality. As the horizontal distance between the camera and the object increases, the color of the object in the underwater image keeps approaching blue and green [1]. At the same time, as the optical sensor goes deeper, the object is farther away from the sunlight source, so the object's color is darker and the contrast is lower. It can be understood that if the ROI of a natural underwater image has good color richness, its image quality will be significantly better than that of a low-color-richness image. The third stage considers the fidelity of the color. As mentioned earlier, if NR IQA overemphasizes color richness, it causes the enhanced underwater image to become oversaturated, which is also a form of color distortion. Inspired by the underwater image formation model, we distinguish the water type (yellow water, green water, or blue water) in the image by its ocean background area and estimate the reasonable range of pixel intensities of the ROI in the enhanced underwater image from the perspective of pixels. In this stage, the difference between the reasonable range of pixel intensities and the actual ROI pixel intensities of the enhanced image is used to represent the rationality of the enhanced image, that is, its color fidelity. Finally, in the fourth stage, color richness and color fidelity are systematically integrated for quality prediction.
In order to measure the performance of NIPQ, an underwater optical image quality database (UOQ) is established. The database contains typical underwater images and their mean opinion scores (MOS). Based on a comprehensive analysis of all experimental results, the contributions of the NIPQ proposed in this paper are summarized as follows:

(a) It is a kind of NR IQA inspired by underwater imaging characteristics. By considering the color attenuation of images in different water bodies, the color fidelity and color richness metrics are proposed.
Figure 2: (a), (b), and (c) are original images; (d), (e), and (f) are images after enhancement algorithms. Although the color attenuation is eliminated visually, the color is still distorted and oversaturated.
(b) By adopting a ROI extraction method suitable for the underwater IQA field, ROI and IQA are effectively combined, thanks to the block strategy in the ROI extraction method.

(c) It is superior to many commonly used IQA metrics, can effectively evaluate the performance of image enhancement algorithms, and can be used for quality supervision.

(d) We propose an NR IQA-based underwater smart image display module, which embodies the role of our IQA in applications.
We arrange the remainder of this paper as follows. Section 2 describes the NR IQA-based underwater smart image display module, which is the application background of the NR IQA. Section 3 describes our NIPQ metric in detail. Section 4 describes the establishment of our database, UOQ, for evaluating IQA performance, which consists of underwater optical images and their enhanced images. In Section 5, performance comparisons of the NIPQ metric with selected existing NR IQA methods are performed. We conclude this paper in Section 6.
2. NR IQA-Based Underwater Smart Image Display Module
Most of the underwater images captured by optical sensors have practical applications. For underwater images with severe color distortion, image enhancement is often needed before the terminal display. However, not all underwater images need to be enhanced: we believe that whether an image needs to be enhanced depends on the visibility of the underwater objects. Besides, because no image enhancement algorithm can achieve good results in all scenes, NR IQA can be used as a guide for image enhancement, so that the system can automatically select a more appropriate image enhancement algorithm in real time. From the application point of view, the framework of the NR IQA-based display module is shown in Figure 3. A traditional image display module only provides a single image enhancement scheme or displays the image directly, which cannot flexibly cope with different water environments. The display module proposed in this paper builds various image enhancement algorithms into an image enhancement algorithm database, and the system can choose a more appropriate enhancement algorithm according to the results of NR IQA. Firstly, the input underwater image is pre-assessed, and natural images with little color distortion are directly displayed. For natural images with severe color distortion, the default enhancement algorithm in the image enhancement algorithm database is used to enhance the image. The selector then automatically determines whether to enable an alternative image enhancement scheme, and which alternative to choose, according to the results of NR IQA.
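The control flow of this module can be sketched as follows. The function names, the algorithm names, and the 0.5 threshold are illustrative assumptions (in the paper, the pre-NR IQA is the ROI color richness and the NR IQA is the NIPQ), and the selector here simply scores every candidate rather than starting from the default algorithm:

```python
def display_pipeline(image, enhancers, pre_iqa, nr_iqa, enhance_threshold=0.5):
    """Sketch of the NR IQA-based smart display module.

    enhancers: the image enhancement algorithm database, a mapping
    name -> enhancement function (white balance, multifusion, ...).
    pre_iqa / nr_iqa: quality predictors returning higher-is-better scores.
    """
    # Pre-assessment: images with little color distortion are shown directly.
    if pre_iqa(image) >= enhance_threshold:
        return image, "none"
    # Otherwise enhance with every candidate algorithm and let the
    # selector keep the one with the best NR IQA score.
    candidates = {name: enhance(image) for name, enhance in enhancers.items()}
    best = max(candidates, key=lambda name: nr_iqa(candidates[name]))
    return candidates[best], best

# Toy stand-ins: an "image" is a scalar quality value and both IQA
# predictors are the identity, just to exercise the control flow.
enhancers = {"white_balance": lambda x: x + 0.3, "histogram_eq": lambda x: x + 0.1}
out, chosen = display_pipeline(0.2, enhancers, pre_iqa=lambda x: x, nr_iqa=lambda x: x)
print(chosen)  # white_balance
```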
According to the above analysis of the NR IQA-based underwater smart image display module, and considering the characteristics of underwater images discussed in Section 1, this paper uses the color richness of the ROI and the color fidelity of the ROI to estimate image quality. In the proposed display module, the color richness of the ROI is used as the metric of the pre-NR IQA stage, and the NIPQ, which combines ROI color richness and color fidelity, is used as the metric of the NR IQA stage.
The ROI extraction method is based on background/foreground separation, and the block strategy in the extraction method helps combine ROI and IQA better; it is introduced in detail in Section 3.1. The color richness represents the distribution of image color in a statistical sense, is described by the spatial characteristics of the image in CIE XYZ space, and is detailed in Section 3.2. The color fidelity is based on the underwater image formation model in the pixel sense and is used to describe whether the pixel intensities are within a reasonable range; it is introduced in detail in Section 3.3.
3. Proposed NIPQ Metric
3.1. ROI Extraction Based on Background/Foreground. Considering that the final receiver of the display module of an ROV is often a human, it is particularly important that the IQA reflects the perception of the human eye well. The mechanism of human visual attention is an important feature of the HVS: it enables the brain to quickly understand the overall information of an image and find the regions that need attention; the brain then focuses on the target information and suppresses other background information. Therefore, the human eye is usually sensitive to damage in the region of concern. At the same time, compared with the ocean background, a high-quality ROI has better practical significance and value. Therefore, it is necessary to introduce ROI into image quality assessment.
Researchers usually obtain the ROI by saliency detection or object detection [15, 16]. Different from images in the air, the contrast of most underwater images is low, and traditional saliency detection methods designed for the air are not applicable underwater. At present, there is no robust saliency detection algorithm in the underwater image field. Some researchers combine image enhancement with saliency detection [17]; others combine fish localization and saliency detection [18]. In IQA, the purpose of the metric is to evaluate the enhancement algorithm, and the ROI of an underwater image is not always one or several fixed categories of targets with predictable shapes; therefore, the above methods are not applicable. Considering that, compared with the target, the underwater background features are easier to recognize, this paper extracts the ROI based on background/foreground, and the process is shown in Figure 4. In order to better combine ROI and IQA in the subsequent steps, the preprocessed underwater image is divided into m × n image blocks (we call this the block strategy).
Then we map the boundary connectivity BndCon(p_i) (defined in [19]) of block region p_i by (1) and obtain the background region probability w_bg^i:

w_bg^i = 1 − exp(−BndCon²(p_i) / (2σ²_BndCon)),    (1)

where σ²_BndCon = 1. The background blocks, the target blocks, and the uncertain blocks are initially divided by using w_bg^i with
threshold_bg and threshold_roi. Then the color feature and spatial position feature of each block are used to correct the uncertain blocks and help judge their background probability, which is expressed mathematically in the following equations:

W_BG(p) =
    1,                                                     if w_bg^i > threshold_bg,
    1 − Σ_{i=1}^{N} d_app(p, p_i) w_spa(p, p_i) w_bg^i,    if threshold_roi < w_bg^i < threshold_bg,    (2)
    0,                                                     if w_bg^i < threshold_roi,

w_spa(p, p_i) = exp(−d²_spa(p, p_i) / (2σ²_spa)).    (3)
Figure 4: ROI extraction process. (1) Divide the image into blocks and estimate the background probability of each block; (2) preliminarily divide the image into background blocks, target blocks, and uncertain blocks; (3) estimate the background probability of uncertain blocks by color and spatial position features; (4) obtain the final ROI by binarization.
Figure 3: The framework of the NR IQA-based underwater smart display module (image input, pre-NR IQA on image content, enhancement-threshold decision, image enhancement algorithm selector and database, NR IQA, and image display).
where d_app(p, q) is the color similarity between blocks p and q, calculated as the Euclidean distance between the average colors of blocks p and q; d_spa(p, q) is the Euclidean distance between blocks p and q; and w_spa(p, q) is obtained by the mapping in (3), with σ²_spa = 0.25. Finally, we use the method of maximum variance between classes (Otsu's method) to obtain the final ROI.
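As an illustration of (1) and the initial three-way split in (2), here is a minimal Python sketch. The boundary-connectivity values are taken as given (computing BndCon itself follows [19] and is not reproduced), and the thresholds threshold_bg = 0.8 and threshold_roi = 0.2 are placeholder assumptions, since their values are not stated here.

```python
import numpy as np

def background_prob(bndcon, sigma2=1.0):
    """Eq. (1): map boundary connectivity to background probability."""
    return 1.0 - np.exp(-np.asarray(bndcon, float) ** 2 / (2.0 * sigma2))

def classify_blocks(w_bg, th_bg=0.8, th_roi=0.2):
    """Initial three-way split of blocks (first and last cases of Eq. (2)).
    th_bg / th_roi are placeholder thresholds, not values from the paper."""
    labels = np.full(np.shape(w_bg), "uncertain", dtype=object)
    labels[w_bg > th_bg] = "background"
    labels[w_bg < th_roi] = "target"
    return labels

w = background_prob([0.1, 1.0, 3.0])
print(list(classify_blocks(w)))  # -> ['target', 'uncertain', 'background']
```

A block with weak boundary connectivity maps to a near-zero background probability and is kept as a target candidate; the middle case of (2) then refines only the uncertain blocks.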
3.2 Color Richness

As the phenomenon of color attenuation worsens, the colors of a natural underwater image become fewer and fewer and the visibility of objects becomes worse. Therefore, the color richness of the ROI is a simple and fast metric for measuring whether the color distortion of a natural underwater image is serious, which makes it suitable for the evaluation of image quality.
In this paper, the richness of color is measured by the spatial characteristics of color in the CIE XYZ color space. Color richness should include not only color diversity but also the lightness distribution, so the XYZ color space is a good choice: CIE XYZ can represent all colors, and the Y parameter measures lightness. According to the XYZ color space distributions of the two images shown in Figure 5, a wide distribution of image color along each of the three dimensions X, Y, and Z does not by itself mean that the color richness is good, because the three components X, Y, and Z are correlated to a certain extent. So the spatial characteristics of color can better represent the distribution of color. According to (4), the image color divergence in XYZ color space is defined to determine the color richness of the image:
C_d = Σ_{mn} (1/2) × dis(P_mn^min, P_mn^max) × max_{(i,j)} dis(P_mn(i, j), [P_mn^min, P_mn^max]),    (4)
where dis represents the shortest distance between two points or between a point and a line; mn ranges over the X-Y, Y-Z, and X-Z sections; and P_mn^min and P_mn^max represent the points closest to and farthest from the origin, respectively.
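A sketch of (4) in Python, under the assumption that dis(P, [P_min, P_max]) is the point-to-line distance, so each section contributes half of base length times maximum height (a triangle-like spread); the toy point cloud is illustrative only.

```python
import numpy as np

def point_line_dist(pts, a, b):
    """Shortest distance from 2-D points `pts` to the line through a and b."""
    d = b - a
    n = np.linalg.norm(d)
    if n == 0:
        return np.linalg.norm(pts - a, axis=1)
    v = pts - a
    return np.abs(v[:, 0] * d[1] - v[:, 1] * d[0]) / n  # |cross product| / base

def color_divergence(xyz):
    """Eq. (4): sum over the X-Y, Y-Z, X-Z sections of (1/2) * dis(P_min, P_max)
    * max distance of the section's points to the P_min-P_max line."""
    cd = 0.0
    for i, j in [(0, 1), (1, 2), (0, 2)]:
        pts = xyz[:, [i, j]]
        r = np.linalg.norm(pts, axis=1)               # distance from origin
        p_min, p_max = pts[r.argmin()], pts[r.argmax()]
        base = np.linalg.norm(p_max - p_min)
        height = point_line_dist(pts, p_min, p_max).max()
        cd += 0.5 * base * height
    return cd

cloud = np.array([[0, 0, 0], [1, 1, 1], [1, 0, 0]], dtype=float)
print(color_divergence(cloud))  # ≈ 1.0 for this toy cloud
```

A point cloud hugging the diagonal (high X-Y-Z correlation) yields small heights and hence a small C_d, even if its coordinate-wise ranges are wide, which matches the motivation above.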
3.3 Color Fidelity

As mentioned in Section 1, the enhanced image may be oversaturated/pseudobright (as shown in Figure 2). If too much attention is paid to color richness, the color of the ROI in the image will deviate from the color of the real objects. Therefore, we should consider not only the color richness of the enhanced underwater image but also the color fidelity of the ROI, that is, whether the intensity of its pixels is within a reasonable range.
It is necessary to understand the formation and degradation of underwater images if we want to estimate a reasonable range of pixel intensities. The formation of the underwater image is dominated by the following factors [1, 20, 21]:

I_c = J_c e^{−β_c^D × z} + B_c^∞ (1 − e^{−β_c^B × z}),    (5)

where c ∈ {R, G, B} is the color channel, I_c is the underwater image captured by the camera, and J_c e^{−β_c^D × z} is the direct signal, recorded as D_c. B_c^∞ (1 − e^{−β_c^B × z}) is the backscattered signal, recorded as B_c. z is the distance between the camera and the photographed object, and B_c^∞ is the veiling light. J_c is the unattenuated scene, that is, the RGB intensity of the surface captured by a sensor with spectral response S_c(λ) at distance z_0 (generally z_0 is regarded as 0):
J_c = (1/k_c) ∫_{λ1}^{λ2} S_c(λ) ρ(λ) E(d, λ) dλ,    (6)
where k_c is the camera's scaling constant. β_c^D and β_c^B depend on the spectrum through the distance z, reflectivity ρ(λ), ambient light E(d, λ), camera spectral response S_c(λ), scattering coefficient b(λ), and beam attenuation coefficient β(λ), as shown in (7) and (8), where z_0 and (z_0 + z) are the starting and ending points along the line of sight:
β_c^D = ln[ ∫_{λ1}^{λ2} S_c(λ) ρ(λ) E(d, λ) e^{−β(λ) z_0} dλ / ∫_{λ1}^{λ2} S_c(λ) ρ(λ) E(d, λ) e^{−β(λ)(z_0+z)} dλ ] / z,    (7)

β_c^B = −ln[ ∫_{λ1}^{λ2} S_c(λ) B^∞(λ) e^{−β(λ) z} dλ / ∫_{λ1}^{λ2} S_c(λ) B^∞(λ) dλ ] / z.    (8)
So we can calculate the unattenuated scene J_c as

J_c = D_c e^{β_c^D × z} = [I_c − B_c^∞ (1 − e^{−β_c^B × z})] × e^{β_c^D × z}.    (9)
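Equations (5) and (9) can be checked numerically: degrading a toy scene with (5) and then applying (9) should recover it exactly. The coefficients and colors below are arbitrary toy values, not estimates for any real water body.

```python
import numpy as np

def degrade(J, B_inf, beta_D, beta_B, z):
    """Forward model, Eq. (5): I_c = J_c e^{-beta_D z} + B_inf (1 - e^{-beta_B z})."""
    return J * np.exp(-beta_D * z) + B_inf * (1.0 - np.exp(-beta_B * z))

def restore(I, B_inf, beta_D, beta_B, z):
    """Inverse model, Eq. (9): subtract backscatter, then undo attenuation."""
    D = I - B_inf * (1.0 - np.exp(-beta_B * z))   # direct signal D_c
    return D * np.exp(beta_D * z)                 # J_c = D_c e^{beta_D z}

J      = np.array([0.80, 0.50, 0.20])             # toy unattenuated RGB
B_inf  = np.array([0.20, 0.40, 0.50])
beta_D = np.array([0.60, 0.30, 0.25])
beta_B = np.array([0.70, 0.35, 0.30])
I = degrade(J, B_inf, beta_D, beta_B, z=2.0)
print(np.allclose(restore(I, B_inf, beta_D, beta_B, z=2.0), J))  # True
```

The round trip works only when B_c^∞, β_c^D, β_c^B, and z are known; the rest of this section is devoted to estimating plausible values for them.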
We need to estimate the reasonable range [J'_c,min, J'_c,max] of the RGB intensity of each pixel in the foreground (that is, the ROI). The color fidelity metric defined by (10) is calculated from the out-of-range part of the enhanced underwater image Ĵ_c. The process is shown in Figure 6.
Firstly, the veiling light B_c^∞ of the underwater image I_c is estimated. Backscatter increases exponentially with z and eventually saturates [1]; in other words, at infinity, I_c = B_c^∞. Referring to [22], we assume that an area without objects is visible in the image, in which the pixels' color is determined by the veiling light alone. Such areas are smooth and have no texture, and this assumption often holds in the application scenarios of our IQA. First, the edge map of the image is generated; then a threshold is set and the largest connected pixel area is found. The veiling light is the average color of the pixels in this area.
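A NumPy-only sketch of this veiling-light estimation (edge map via gradient magnitude, smooth-region mask, largest connected component, average color). The gradient threshold edge_thresh is an assumed placeholder; its value is not specified here.

```python
import numpy as np

def veiling_light(img, edge_thresh=0.05):
    """Estimate B_inf as the mean color of the largest smooth (edge-free)
    connected region. `edge_thresh` is an assumed gradient threshold."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    smooth = np.hypot(gx, gy) < edge_thresh          # non-edge pixels
    labels = np.zeros(smooth.shape, dtype=int)
    best_label, best_size, cur = 0, 0, 0
    for sy, sx in zip(*np.nonzero(smooth)):          # flood-fill components
        if labels[sy, sx]:
            continue
        cur += 1
        stack, size = [(sy, sx)], 0
        labels[sy, sx] = cur
        while stack:
            y, x = stack.pop()
            size += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < smooth.shape[0] and 0 <= nx < smooth.shape[1]
                        and smooth[ny, nx] and labels[ny, nx] == 0):
                    labels[ny, nx] = cur
                    stack.append((ny, nx))
        if size > best_size:
            best_label, best_size = cur, size
    return img[labels == best_label].mean(axis=0)

# toy scene: smooth "water" on top, a strong vertical ramp (texture) below
img = np.zeros((24, 24, 3))
img[:12] = np.array([0.1, 0.5, 0.6])
for k in range(12):
    img[12 + k] = 0.08 * k
print(veiling_light(img))
```

On the toy scene the textured bottom half is rejected and the estimate equals the flat top region's color.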
Next, we estimate the type of water body in the underwater image I_c from the veiling light B_c^∞. The reason for estimating the water type is that the common notion that water attenuates red faster than blue/green only holds for oceanic water types [1]. We simplified the Jerlov water types [23] into blue water (Jerlov I-III), green water (Jerlov 1C-3C), and yellow water (Jerlov 5C-9C) and simulated the average RGB value of a perfect white surface under these three water types (data from [23], using a D65 light source, a Canon 5D Mark II camera, and ρ(λ) = 1). We then calculate the Euclidean distance between the veiling light B_c^∞ and each average RGB value and estimate the water body type by the smallest Euclidean distance.
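The nearest-neighbor water-type decision can be sketched as follows. The reference RGB triples here are invented placeholders standing in for the simulated white-surface appearances derived from [23]; they are not the authors' data.

```python
import numpy as np

# Illustrative reference veiling-light colors for the three simplified water
# classes; placeholder values, not the simulated data from [23].
WATER_REFS = {
    "blue":   np.array([0.20, 0.45, 0.70]),
    "green":  np.array([0.25, 0.60, 0.45]),
    "yellow": np.array([0.55, 0.50, 0.25]),
}

def water_type(veiling_rgb):
    """Pick the class whose reference color is nearest in Euclidean distance."""
    return min(WATER_REFS, key=lambda k: np.linalg.norm(WATER_REFS[k] - veiling_rgb))

print(water_type(np.array([0.18, 0.42, 0.75])))  # -> blue
```

The chosen class then selects which set of attenuation coefficients is used in the next step.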
Then we calculate the reasonable intensity range [J'_c,min, J'_c,max] of each pixel after enhancement of the underwater image. β_c^D varies most strongly with range z [1], so besides the water body type, the most important quantity for calculating the range is the distance z. Due to the limitation of real conditions, the distance z of the
Figure 5: Color distribution in CIE XYZ space.
object in the image cannot be obtained, so it is necessary to roughly estimate the possible range of the distance z. For the foreground, the distance z from the camera is approximately the same; for the background, the distance z from the camera tends to infinity. We assume that the distance z from the camera is the same at each part of the foreground and that white objects may be present. Therefore, any distance z that keeps J_c of the foreground pixels in the three RGB channels not greater than 255 and not less than 0 is considered a possible distance. To simplify the calculation, the attenuation coefficient β_c^D of white in the three color channels c ∈ {R, G, B} is adopted for all colors (using ρ(λ) of the white patch in the Macbeth ColorChecker).
Finally, the color fidelity defined by (10) is calculated:

C_f = [1 − (Sum_oor / 255) / (Num_oor × 3)]²,    (10)
where Num_oor represents the number of pixels in the ROI block and Sum_oor represents the total intensity deviation of pixel values that fall outside the reasonable range.
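A sketch of (10), assuming that a pixel's deviation is how far each channel falls outside its bounds [J'_c,min, J'_c,max]; the single-pixel example and its bounds are toy values.

```python
import numpy as np

def color_fidelity(J_hat, J_min, J_max):
    """Eq. (10): C_f = [1 - (Sum_oor/255) / (Num_oor * 3)]^2.
    J_hat, J_min, J_max are (N, 3) arrays of ROI pixel values in 0..255."""
    dev = np.maximum(J_min - J_hat, 0.0) + np.maximum(J_hat - J_max, 0.0)
    sum_oor = dev.sum()                  # total out-of-range deviation
    num_oor = J_hat.shape[0]             # number of ROI pixels
    return (1.0 - (sum_oor / 255.0) / (num_oor * 3)) ** 2

# one toy pixel whose red channel overshoots its upper bound by 45
J_hat = np.array([[300.0, 100.0, 50.0]])
J_min = np.zeros((1, 3))
J_max = np.full((1, 3), 255.0)
print(color_fidelity(J_hat, J_min, J_max))
```

A fully in-range ROI gives C_f = 1, and the score decreases as the enhanced intensities stray further outside the estimated range.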
We make a qualitative analysis of the influence of these simplifications on J'_c,min and J'_c,max. As shown in Figure 7(b), (7) is used to calculate the broadband (RGB) attenuation coefficient β_c^D (using ρ(λ) of the corresponding patches in the Macbeth ColorChecker, depth d = 1 m, distance z = 1 m) of seven common colors (red, orange, yellow, green, cyan, blue, and purple; Figure 7(a)) under all Jerlov water types. It can be seen that the β_c^D difference between colors is not large in the same scene. Figure 7(c) shows the influence of different camera types on β_c^D in the three types of water bodies; the influence of camera parameters on the attenuation coefficient β_c^D is not significant. The experimental results in [1] also support this view.
3.4 NIPQ Metric

Sections 3.1, 3.2, and 3.3 above introduced, respectively, the ROI extraction method, color richness in the statistical sense, and color fidelity in the pixel sense. In this paper, the color richness of the ROI and the color fidelity of the ROI are combined by a multiplication model to obtain our NIPQ. The common multiparameter underwater image evaluation models UIQM [11], UCIQE [12], and CCF [13] use linear weighting to measure the comprehensive quality of the image. We consider that if one submetric takes a very low value (indicating low quality), the subjective impression of the whole image will be very poor regardless of the other metrics. Therefore, this paper uses the multiplication model to generate the overall underwater image quality assessment as follows:
CR = ‖C_d(ROI)‖ × C_f(ROI),    (11)

where ‖·‖ represents normalization and CR represents the color quality of the ROI block, CR ∈ (0, 1); the larger the value, the higher the image quality.
The overall process of NIPQ is shown in Figure 8 and is divided into four steps. Firstly, the ROI of the original (unenhanced) image is extracted based on background/foreground. Secondly, the color richness of the ROI of the enhanced underwater image is estimated. Thirdly, the ocean background information is extracted from the original image, from which the water body type and the reasonable range of pixel intensities are estimated; according to this range, the ROI color fidelity of the enhanced underwater image is computed. Finally, the two metrics of color richness and color fidelity are integrated to obtain the comprehensive NIPQ metric for the whole underwater image.
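The multiplicative fusion of (11) can be sketched as follows; the normalization of C_d is only stated, not specified, above, so the cd_max constant here is an assumption. The example shows why multiplication is preferred: a near-zero fidelity collapses the overall score even when richness is high.

```python
import numpy as np

def nipq(cd_roi, cf_roi, cd_max=1.0):
    """Eq. (11): CR = ||C_d(ROI)|| * C_f(ROI). `cd_max` is an assumed
    normalization constant; the paper only states that C_d is normalized."""
    cd_norm = np.clip(cd_roi / cd_max, 0.0, 1.0)
    return cd_norm * cf_roi

# high richness cannot mask poor fidelity under the multiplication model
print(nipq(0.9, 0.05))   # small overall score
print(nipq(0.9, 0.95))   # large overall score
```

A linear weighting such as 0.5 × C_d + 0.5 × C_f would instead rate the first case near 0.48, hiding the severe fidelity failure.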
4 UOQ Database
In order to better evaluate the performance of the NIPQ metric, we built an underwater optical image quality database, UOQ.

Image Selection. In order to fully consider various underwater scenes, we selected 36 typical underwater optical images with a size of 512 × 512. These images include blue water, green water, yellow water, dark lighting, single object, multiobject, simple and complex texture, serious
Figure 6: The estimation process of color fidelity: (1) veiling-light estimation; (2) determination of the water body type (blue, green, or yellow); (3) estimation of the reasonable range of ROI pixel intensities after underwater image enhancement; (4) calculation of the deviation beyond the reasonable range.
color distortion, and slight color distortion. Considering that there is no general ROI-related dataset in the underwater image field, we labeled their foreground regions (ROI) pixel by pixel to support the reliability of the ROI used in this paper. We then used five popular image enhancement algorithms (white balance [24], Fu's algorithm [25], multifusion [26], histogram equalization [27], and Retinex [28]) to process these 36 natural images, obtaining 180 enhanced images. Some images and their enhanced versions processed by the white balance algorithm [24] are shown in Figure 9.
Evaluation Methods and Evaluation Protocols. In this database, the single-stimulus evaluation method is used: volunteers watch only one image to be evaluated at a time, and each image appears only once in a round of evaluation. After each image is displayed, volunteers give a subjective quality score to the corresponding image. Underwater optical images usually have practical applications, so volunteers are asked not to let aesthetic factors affect their subjective quality assessment; the evaluation protocols are shown in Table 1.
Choosing Volunteers. In order to avoid evaluation bias caused by prior knowledge, none of the volunteers had experience with image quality assessment. Considering the strong application background of underwater images, all volunteers selected are graduate students with relevant work experience in underwater acoustic communication, underwater detection, and related fields.
All the obtained subjective scores are used to calculate the mean opinion score (MOS). Let S_ij denote the subjective score given to image j by the i-th volunteer and N_j the number of subjective scores obtained by image j. MOS is calculated as follows:

MOS_j = (1/N_j) Σ_i S_ij.    (12)
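Equation (12) is a plain per-image average of the ratings:

```python
import numpy as np

def mos(scores):
    """Eq. (12): MOS_j = (1/N_j) * sum_i S_ij for one image's ratings."""
    scores = np.asarray(scores, dtype=float)
    return scores.sum() / len(scores)

print(mos([4, 5, 4, 3]))  # -> 4.0
```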
We draw a histogram of the MOS of all images in the database, as shown in Figure 10. It can be seen that our images cover a wide range of quality, which is conducive to the design of IQA. Many images have scores in the middle segment because volunteers tend to avoid giving extreme scores. It can also be
Figure 8: Overall process of NIPQ: the input images are enhanced, the ROI and its image information are extracted, color fidelity and color richness are estimated, and the two are fused into the overall quality score Q.
Figure 7: Qualitative analysis of the influence of simplification on β_c^D. (a) Seven common colors (red, orange, yellow, green, cyan, blue, and purple). (b) β_c^D of the different colors in the different water bodies (Jerlov I, IA, IB, II, III, 1C, 3C, 5C, 7C, and 9C). (c) β_c^D of the different colors under different cameras. We simplify the types of water into three. It can be seen from (b) that the calculation error for yellow water caused by the simplification is larger than that for blue and green water bodies; yellow water is not common in underwater images, so the simplification of water body type is applicable to most occasions.
seen that lower-quality images are slightly more numerous than higher-quality underwater images. This is because most underwater images have a blue-green cast and poor contrast, and sometimes the quality of the enhanced image is still not ideal. In practical applications, more robust enhancement algorithms will be built into the underwater image enhancement algorithm database of the display module mentioned in Section 2.
5 Experiment
In combination with the UOQ database, we mainly evaluate the performance of IQA through five criteria. The prediction monotonicity of IQA is measured by the Spearman rank-order correlation coefficient (SROCC) and Kendall's rank-order correlation coefficient (KROCC). The prediction accuracy of IQA is measured by the Pearson linear correlation
Table 1: Evaluation protocols.

Score 5: The subjective feeling is excellent; foreground information is recognizable, and no color distortion is perceived.
Score 4: The subjective feeling is good; the foreground information is visible and recognizable; there is a small amount of perceptible distortion, but it does not affect the extraction of important information.
Score 3: The subjective feeling is average; part of the information in the foreground is damaged, and a small amount of important information is lost due to distortion.
Score 2: The subjective perception is poor, and only the general outline of the foreground content can be distinguished; the distortion leads to the loss of some important information.
Score 1: The subjective feeling is very poor; it is difficult to recognize the foreground content, and it is almost impossible to extract any effective information from the image.
Figure 10: (a) Frequency histogram of MOS; (b) MOS of all images.
Figure 9: Underwater images processed by the white balance algorithm [24]. (a)-(f) are the original images; their MOS are 2.40, 1.70, 3.00, 1.30, 2.55, and 4.05, respectively. (g)-(l) are the enhanced images; their MOS are 1.05, 1.15, 2.05, 2.80, 4.55, and 1.15, respectively.
coefficient (PLCC). The root mean square error (RMSE) is used to measure the prediction consistency of IQA, and the mean absolute error (MAE) is also used to evaluate performance. High values (close to 1) of SROCC, PLCC, and KROCC and low values (close to 0) of RMSE and MAE indicate that an IQA metric correlates well with subjective scores.
The selected IQA metrics for performance comparison include the following:

(1) The popular underwater no-reference metrics UIQM [11], UCIQE [12], and CCF [13]
(2) The popular in-air no-reference metrics BRISQUE [6] and LPSI [7]
(3) Common color metrics for underwater images: UICM [11] and variance of chromaticity (Var_Chr) [29]

For BRISQUE, a low score means high quality; for the other metrics, the higher the score, the better the quality.
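The five criteria can be sketched in plain NumPy. This version ignores rank ties in SROCC and computes PLCC directly on the raw scores (without the nonlinear mapping some IQA studies apply first), so it is an approximation of the standard procedure.

```python
import numpy as np

def pearson(x, y):
    """Pearson linear correlation coefficient."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

def ranks(x):
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(len(x))
    return r                              # no tie handling: a sketch only

def criteria(pred, mos):
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    n, c = len(pred), 0.0                 # Kendall: concordant - discordant pairs
    for i in range(n):
        for j in range(i + 1, n):
            c += np.sign(pred[i] - pred[j]) * np.sign(mos[i] - mos[j])
    return {
        "SROCC": pearson(ranks(pred), ranks(mos)),
        "KROCC": c / (n * (n - 1) / 2),
        "PLCC": pearson(pred, mos),
        "RMSE": np.sqrt(((pred - mos) ** 2).mean()),
        "MAE": np.abs(pred - mos).mean(),
    }

print(criteria([1, 2, 3, 4], [2, 4, 6, 8]))
```

For a perfectly monotonic, linear relation all three correlation coefficients are 1 while RMSE and MAE still report the absolute disagreement, which is why both groups of criteria are needed.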
5.1 Effect Analysis of Introducing ROI into IQA

In order to observe the influence of introducing ROI on the quality evaluation of underwater images, we need to combine ROI with the popular underwater no-reference IQA metrics. The block strategy mentioned in Section 3.1 is necessary because it helps us combine ROI with IQA. According to the block fusion strategy represented by (13), we combine the image blocks with an IQA metric and obtain a comprehensive quality score; we can then observe the change in correlation between the objective metrics and MOS before and after combining with ROI:
ROI_Q = Σ_{m×n} W_R(i) × Q(i) / Σ_{m×n} W_R(i),    (13)
where W_R(i) represents the weight of the i-th image block and Q(i) represents its objective quality score under the given metric. W_R(i) is either 0 or 1; that is, the difference between an IQA metric before and after combination with ROI is that the original metric calculates the quality of the whole image, while the metric combined with ROI calculates only the image quality of the ROI. The results are shown in the first six rows of Table 2: the correlation with MOS of each metric combined with ROI is higher than that of the original metric. This shows that the combination of ROI and IQA is helpful for IQA.
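Equation (13) with binary weights reduces to averaging the block scores inside the ROI; a minimal sketch:

```python
import numpy as np

def roi_fuse(block_scores, roi_mask):
    """Eq. (13): W_R(i) is 1 for ROI blocks, 0 otherwise, so the fused score
    is the mean block quality inside the ROI."""
    w = np.asarray(roi_mask, dtype=float)
    return (w * np.asarray(block_scores, dtype=float)).sum() / w.sum()

q = np.array([0.9, 0.2, 0.8, 0.1])   # toy per-block quality scores
m = np.array([1, 0, 1, 0])           # toy ROI mask over the m x n blocks
print(roi_fuse(q, m))
```

Here the two background blocks (scores 0.2 and 0.1) are excluded, so the fused score reflects only the foreground blocks.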
5.2 Performance Analysis of the Proposed NIPQ

We calculated the correlation between the various metrics and MOS in the database; the results are shown in Table 2. It can be seen that the correlation between the NIPQ metric and the subjective scores is significantly higher than that of the other metrics.
In order to compare the various NR IQA metrics intuitively, scatter diagrams between MOS and the estimated objective scores are drawn for the six selected NR IQA metrics and the NIPQ proposed in this paper, as shown in Figure 11. On this basis, the experimental data were regressed by the least-squares method, and the fitted line is also drawn. The better the scatter points fit the line, the better the correlation between the metric and MOS. The regression lines show that the correlation between NIPQ and MOS is clearly better than that of the other metrics, validating the results in Table 2. It can be seen that LPSI and BRISQUE, metrics designed for in-air images, are not applicable to underwater images. On the whole, UIQM, UCIQE, and CCF are specially designed for underwater images, and their performance is better than that of the in-air metrics. The performance of UICM, the submetric indicating chromaticity in UIQM, is slightly worse than that of UIQM. Compared with the scatter plots of the other NR IQA metrics, our NIPQ shows the best correlation with MOS. Although there are still some aberrant data points, generally speaking, the proposed NIPQ is robust to the variety of typical, representative underwater images contained in the database. Further analysis shows that some of these aberrant points are caused by the fact that the submetric C_f of the original image (without enhancement) is directly taken as 1 in our experiment.
As shown in Figures 12 and 13, there are two natural underwater images and their enhanced images in the UOQ database. Table 3 shows the corresponding MOS and objective scores of these images, and Figure 14 shows the color distributions of their ROIs. The ROI of the original image in Figure 12 is dark, and that in Figure 13 is blue. The image enhanced by the histogram algorithm is reddish; the color distribution of its ROI is wider, but the color of the ROI is obviously oversaturated/pseudobright. There is no significant difference between the image processed by the Retinex algorithm and the original image. The color of the image processed by Fu's algorithm is not vibrant. For Figure 12, the overall difference between the white balance and multifusion algorithms is small; the local graphs (Figure 15) show that the brightness distribution of the image processed by the multifusion algorithm is uneven and slightly oversaturated, while the image enhanced by the white balance algorithm has a better visual effect. For Figure 13, the image processed by the white balance algorithm is too dark and has
Table 2: Correlation between MOS and quality scores of objective evaluation metrics before and after integration with ROI.

Metric       PLCC    SROCC   KROCC   MAE    RMSE
UIQM        -0.173  -0.199  -0.132  0.751  0.903
ROI_UIQM     0.277   0.280   0.196  0.739  0.897
UCIQE        0.294   0.207   0.145  0.707  0.868
ROI_UCIQE    0.374   0.274   0.192  0.683  0.840
CCF          0.069   0.075   0.050  0.791  0.946
ROI_CCF      0.393   0.358   0.254  0.722  0.872
Var_Chr      0.158   0.180   0.125  0.674  0.841
UICM        -0.283  -0.338  -0.225  0.714  0.854
BRISQUE     -0.309  -0.265  -0.185  0.747  0.902
LPSI         0.323   0.245   0.169  0.734  0.898
C_d          0.481   0.465   0.335  0.635  0.789
C_f          0.478   0.432   0.303  0.658  0.806
Proposed     0.641   0.623   0.452  0.576  0.713
Figure 11: Scatter diagrams between MOS and estimated objective scores: (a) LPSI; (b) BRISQUE; (c) UICM; (d) UIQM; (e) UCIQE; (f) CCF; (g) C_f; (h) C_d; (i) proposed.
Figure 12: (a) Original; (b) multifusion; (c) Fu; (d) white balance; (e) histogram equalization; (f) Retinex.
Figure 13: (a) Original; (b) multifusion; (c) Fu; (d) white balance; (e) histogram equalization; (f) Retinex.
Table 3: The corresponding MOS and objective scores of Figures 12 and 13.

Figure 12:
Metric       Ori     Multifusion  Fu      White balance  Histogram eq.  Retinex
MOS          2.550   4.500        4.100   4.550          3.050          2.500
CCF          13.265  22.292       23.887  16.437         30.794         13.379
ROI_CCF      20.821  33.389       31.274  29.108         26.409         21.604
UCIQE        0.554   0.664        0.591   0.652          0.684          0.569
ROI_UCIQE    0.560   0.647        0.573   0.627          0.580          0.575
UIQM         3.983   4.543        4.850   3.969          4.780          4.085
ROI_UIQM     5.585   5.589        5.495   5.672          5.055          5.620
BRISQUE      16.303  26.934       31.708  17.824         36.762         16.744
LPSI         0.926   0.901        0.910   0.923          0.912          0.926
C_d          0.243   0.715        0.578   0.698          0.846          0.324
C_f          1.000   0.802        0.637   0.827          0.464          0.994
Proposed     0.243   0.574        0.368   0.577          0.392          0.322

Figure 13:
Metric       Ori     Multifusion  Fu      White balance  Histogram eq.  Retinex
MOS          3.200   3.800        1.550   2.150          2.700          3.250
CCF          31.443  31.465       37.069  18.688         36.928         29.029
ROI_CCF      22.582  35.995       32.468  13.265         38.366         23.097
UCIQE        0.519   0.628        0.623   0.476          0.693          0.541
ROI_UCIQE    0.541   0.620        0.588   0.447          0.676          0.564
UIQM         1.504   3.337        4.325   3.840          4.100          2.182
ROI_UIQM     6.658   5.235        5.249   5.349          4.789          5.160
BRISQUE      4.330   14.749       17.319  4.153          20.596         4.441
LPSI         0.923   0.887        0.911   0.904          0.906          0.926
C_d          0.475   0.730        0.317   0.029          0.640          0.401
C_f          1.000   0.847        0.632   0.581          0.601          0.972
Proposed     0.475   0.619        0.200   0.017          0.384          0.390
Figure 14: ROI color distributions in CIE XYZ space for the images in Figures 12 and 13.
Figure 15: Local graphs of Figures 12(b) and 12(d).
a single color, while the image processed by the multifusion algorithm has a better visual effect.
Table 3 shows that the selected IQA metrics do not perform well in the quality assessment of images in the UOQ database. They generally give higher objective scores to images enhanced by the histogram equalization algorithm because the color distribution of those images is wider. This is a disadvantage of statistics-based quality evaluation: color fidelity is not taken into account. It can also be seen that if the performance of the original metric is not ideal, the metric combined with ROI will not necessarily improve the situation, because this is a limitation of the original metric itself.
6 Conclusion
Because of the characteristics of the water medium, color has become one of the important concerns in underwater image quality assessment. Color contains important information: severe selective color attenuation or pseudo-vividness can make it difficult to identify foreground content and to extract key, effective information from images. In this paper, a new underwater image evaluation metric, NIPQ, is proposed based on the characteristics of the underwater environment and the HVS. The NIPQ is designed in a staged framework. The first stage focuses on the attention mechanism of the HVS. The second stage considers the influence of color richness in a statistical sense. The third stage is inspired by underwater image formation models and considers color fidelity from a pixel perspective. Finally, color richness and color fidelity are systematically integrated for real-time quality monitoring. At the same time, a relevant underwater image database, UOQ, with MOS is built to measure IQA performance. Experimental results show that, compared with other commonly used underwater metrics, the proposed NIPQ has a better correlation with MOS, which indicates better performance.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61571377, 61771412, and 61871336) and the Fundamental Research Funds for the Central Universities (20720180068).
References

[1] D. Akkaynak and T. Treibitz, "A revised underwater image formation model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6723-6732, Salt Lake City, UT, USA, 2018.
[2] S. Bazeille, I. Quidu, L. Jaulin, and J.-P. Malkasse, "Automatic underwater image pre-processing," CMM'06, Brest, France, 2006.
[3] I. Avcibas, B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," Journal of Electronic Imaging, vol. 11, no. 2, pp. 206-223, 2002.
[4] D.-Y. Tsai, Y. Lee, and E. Matsuyama, "Information entropy measure for evaluation of image quality," Journal of Digital Imaging, vol. 21, no. 3, pp. 338-347, 2008.
[5] Y. Y. Fu, "Color Image Quality Measures and Retrieval," New Jersey Institute of Technology, Newark, NJ, USA, 2006.
[6] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, 2012.
[7] Q. Wu, Z. Wang, and H. Li, "A highly efficient method for blind image quality assessment," in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 339-343, IEEE, Quebec City, Canada, September 2015.
[8] J. Kim, A.-D. Nguyen, and S. Lee, "Deep CNN-based blind image quality predictor," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 11-24, 2018.
[9] S. Bosse, D. Maniry, K.-R. Muller, T. Wiegand, and W. Samek, "Deep neural networks for no-reference and full-reference image quality assessment," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206-219, 2017.
[10] X. Liu, J. van de Weijer, and A. D. Bagdanov, "RankIQA: learning from rankings for no-reference image quality assessment," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1040-1049, Venice, Italy, October 2017.
[11] K. Panetta, C. Gao, and S. Agaian, "Human-visual-system-inspired underwater image quality measures," IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541-551, 2016.
[12] M. Yang and A. Sowmya, "An underwater color image quality evaluation metric," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 6062-6071, 2015.
[13] Y. Wang, N. Li, Z. Li et al., "An imaging-inspired no-reference underwater color image quality assessment metric," Computers & Electrical Engineering, vol. 70, pp. 904-913, 2018.
[14] S. Kastner and L. G. Ungerleider, "Mechanisms of visual attention in the human cortex," Annual Review of Neuroscience, vol. 23, no. 1, pp. 315-341, 2000.
[15] L. Zhang, J. Chen, and B. Qiu, "Region-of-interest coding based on saliency detection and directional wavelet for remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 1, pp. 23-27, 2016.
[16] C. Zhu, K. Huang, and G. Li, "An innovative saliency guided ROI selection model for panoramic images compression," in Proceedings of the 2018 Data Compression Conference, p. 436, IEEE, Snowbird, UT, USA, March 2018.
[17] Z. Cui, J. Wu, H. Yu, Y. Zhou, and L. Liang, "Underwater image saliency detection based on improved histogram equalization," in Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, pp. 157-165, Springer, Singapore, 2019.
[18] L. Xiu, H. Jing, S. Min, and Z. Yang, "Saliency segmentation and foreground extraction of underwater image based on localization," in Proceedings of OCEANS 2016, Shanghai, China, 2016.
[19] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2814-2821, Columbus, OH, USA, June 2014.
[20] D. Akkaynak and T. Treibitz, "Sea-thru: a method for removing water from underwater images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1682-1691, Long Beach, CA, USA, 2019.
[21] D. Akkaynak, T. Treibitz, T. Shlesinger, Y. Loya, R. Tamir, and D. Iluz, "What is the space of attenuation coefficients in underwater computer vision?" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4931-4940, Honolulu, HI, USA, 2017.
[22] D. Berman, T. Treibitz, and S. Avidan, "Diving into haze-lines: color restoration of underwater images," in Proceedings of the British Machine Vision Conference (BMVC), vol. 1, London, UK, September 2017.
[23] M. G. Solonenko and C. D. Mobley, "Inherent optical properties of Jerlov water types," Applied Optics, vol. 54, no. 17, pp. 5392-5401, 2015.
[24] E. Y. Lam, "Combining gray world and retinex theory for automatic white balance in digital photography," in Proceedings of the Ninth International Symposium on Consumer Electronics, pp. 134-139, IEEE, Melbourne, Australia, July 2005.
[25] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, "A retinex-based enhancing approach for single underwater image," in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 4572-4576, IEEE, Paris, France, October 2014.
[26] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, "Enhancing underwater images and videos by fusion," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 81-88, IEEE, Providence, RI, USA, June 2012.
[27] B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68-75, 1999.
[28] E. H. Land, "The retinex theory of color vision," Scientific American, vol. 237, no. 6, pp. 108-128, 1977.
[29] D. Hasler and S. E. Suesstrunk, "Measuring colorfulness in natural images," in Human Vision and Electronic Imaging VIII, vol. 5007, pp. 87-95, International Society for Optics and Photonics, Bellingham, WA, USA, 2003.
in the image and the color of the real object), the image tends to be blue-green and the color richness of the image is reduced. Overemphasis on color richness can also result in color distortion (in this case, the distortion refers to unreasonable color in the image), which affects the viewing effect of the image and its subsequent use.
In view of the shortcomings of existing metrics, this paper proposes a no-reference image quality predictor, NIPQ. NIPQ is designed with a four-stage framework. The first stage focuses on the attention mechanism of the human visual system (HVS), which can also be interpreted as ROI. In the IQA field, our ROIs are not fixed: some are task-driven, some are data-driven, and some, such as fish, corals, divers, or even artifacts of unknown shape, may be of interest to us. For broader applicability, we interpret the foreground area (nonocean background) as the ROI. This paper extracts the ROI based on background/foreground and focuses on the image quality of the ROI. The second stage considers the impact of color richness on image quality. As the horizontal distance between the camera and the object increases, the color of the object in the underwater image keeps approaching blue and green [1]. At the same time, as the optical sensor goes deeper, the object gets farther from the sunlight source, so its color becomes darker and the contrast lower. It follows that if the ROI of a natural underwater image has good color richness, its image quality will be significantly better than that of a low-color-richness image. The third stage
considers the fidelity of the color. As mentioned earlier, if an NR IQA metric overemphasizes color richness, it will drive the enhanced underwater image toward oversaturation, which is also a form of color distortion. Inspired by the underwater image formation model, we distinguish the water type (yellow water, green water, or blue water) from the ocean background area of the image and estimate the reasonable range of pixel intensity of the ROI in the enhanced underwater image from the perspective of pixels. In this stage, the difference between the reasonable range of pixel intensity and the actual ROI pixel intensity of the enhanced image is used to represent the rationality of the enhanced image, that is, color fidelity. Finally, in the fourth stage, color richness and color fidelity are systematically integrated for quality prediction.
In order to measure the performance of NIPQ, an underwater optical image quality database (UOQ) is established. The database contains typical underwater images and their mean opinion scores (MOS). Based on a comprehensive analysis of all experimental results, the contributions of the NIPQ proposed in this paper are summarized as follows:
(a) It is a kind of NR IQA inspired by underwater imaging characteristics. By considering the color attenuation of images in different water bodies, the color fidelity and color richness metrics are proposed.
Figure 2: (a), (b), and (c) are original images; (d), (e), and (f) are images after the enhancement algorithm. Although the color attenuation is eliminated visually, the color is still distorted and oversaturated.
(b) By adopting an ROI extraction method suitable for the underwater IQA field, ROI and IQA are effectively combined thanks to the block strategy in the ROI extraction method.
(c) It is superior to many commonly used IQA metrics, can effectively evaluate the performance of image enhancement algorithms, and can be used for quality supervision.
(d) We propose an NR IQA-based underwater smart image display module, which embodies the role of our IQA in application.
We arrange the remainder of this paper as follows. Section 2 describes the NR IQA-based underwater smart image display module, which is the application background of NR IQA. Section 3 describes our NIPQ metric explicitly. Section 4 describes the establishment of our database UOQ for evaluating IQA performance, which consists of underwater optical images and their enhanced images. In Section 5, performance comparisons of the NIPQ metric with selected existing NR IQA methods are performed. We conclude this paper in Section 6.
2 NR IQA-Based Underwater Smart Image Display Module
Most of the underwater images captured by optical sensors have practical applications. For underwater images with severe color distortion, image enhancement is often needed before the terminal display. However, not all underwater images need to be enhanced: we believe that whether an image needs enhancement depends on the visibility of the underwater objects. Besides, because no image enhancement algorithm can achieve good results in all scenes, NR IQA can be used as a guide for image enhancement so that the system can automatically select a more appropriate image enhancement algorithm in real time. From the application point of view, the framework of the NR IQA-based display module is shown in Figure 3. A traditional image display module only provides a single image enhancement scheme or displays the image directly, which cannot flexibly cope with different water environments. The display module proposed in this paper builds various image enhancement algorithms into an Image Enhancement Algorithm Database, and the system chooses a more appropriate enhancement algorithm according to the results of NR IQA. Firstly, the input underwater image is preassessed, and natural images with little color distortion are displayed directly. For natural images with severe color distortion, the default enhancement algorithm in the Image Enhancement Algorithm Database is used to enhance the image. The Selector then automatically determines whether to enable an alternative image enhancement scheme, and which alternative to choose, according to the results of NR IQA.
According to the above analysis of the NR IQA-based underwater smart image display module and the characteristics of underwater images discussed in Section 1, this paper uses the color richness of the ROI and the color fidelity of the ROI to estimate image quality. In the display module proposed in this paper, the color richness of the ROI is used as the metric of pre-NR IQA, and the NIPQ, which combines ROI color richness and color fidelity, is used as the metric of NR IQA.
The ROI extraction method is based on background/foreground; the block strategy in the extraction method helps ROI and IQA combine better. The ROI extraction method is introduced in detail in Section 3.1. The color richness represents the distribution of image color in a statistical sense, described by the spatial characteristics of the image in CIE XYZ space, and is detailed in Section 3.2. The color fidelity is based on the underwater image formation model in the pixel sense and describes whether the pixel intensities are within a reasonable range; it is introduced in detail in Section 3.3.
3 Proposed NIPQ Metric
3.1. ROI Extraction Based on Background/Foreground. Considering that the final receiver of the display module of an ROV is usually human, it is particularly important that IQA reflects the perception of human eyes well. The mechanism of human visual attention is an important feature of the HVS: it enables the brain to quickly grasp the overall information of an image, obtain the regions that need attention, and then focus on the target information while suppressing other background information. Therefore, the human eye is usually sensitive to damage in the regions of concern. At the same time, compared with the ocean background, a high-quality ROI has better practical significance and value. Therefore, it is necessary to introduce the ROI into image quality assessment.
Researchers usually obtain ROIs by saliency detection or object detection [15, 16]. Different from images in the air, the contrast of most underwater images is low, and traditional in-air saliency detection methods are not applicable underwater. At present, there is no robust saliency detection algorithm for the underwater image field. Some researchers combine image enhancement with saliency detection [17]; others combine fish localization and saliency detection [18]. In IQA, the purpose of the metric is to evaluate the enhancement algorithm, and the ROI of an underwater image is not always one or several fixed categories of targets with predictable shapes; therefore, the above methods are not applicable. Considering that, compared with the target, the underwater background features are easier to recognize, this paper extracts the ROI based on background/foreground, and the process is shown in Figure 4. In order to better combine ROI and IQA in the following steps, the preprocessed underwater image is divided into m × n image blocks (we call this the block strategy).
Then we map the boundary connectivity BndCon(p_i) (defined in [19]) of block region p_i by (1) and obtain the background region probability w_i^{bg}:

$$w_i^{bg} = 1 - \exp\!\left(-\frac{\mathrm{BndCon}^2(p_i)}{2\sigma_{\mathrm{BndCon}}^2}\right), \qquad (1)$$

where \sigma_{\mathrm{BndCon}}^2 = 1. The background blocks, the target blocks, and the uncertain blocks are initially divided by using w_i^{bg},
threshold_bg, and threshold_roi. Then the color feature and spatial position feature of each block are used to correct the uncertain blocks and help judge their background probability, which is expressed mathematically in the following equations:
$$W_{BG}(p) = \begin{cases} 1, & w_i^{bg} > threshold_{bg}, \\ 1 - \sum_{i=1}^{N} d_{app}(p, p_i)\, w_{spa}(p, p_i)\, w_i^{bg}, & threshold_{roi} < w_i^{bg} < threshold_{bg}, \\ 0, & w_i^{bg} < threshold_{roi}, \end{cases} \qquad (2)$$

$$w_{spa}(p, p_i) = \exp\!\left(-\frac{d_{spa}^2(p, p_i)}{2\sigma_{spa}^2}\right), \qquad (3)$$
Figure 4: ROI extraction process: (1) divide the image into blocks and estimate the background probability of each block; (2) preliminarily divide the image into background blocks, target blocks, and uncertain blocks; (3) estimate the background probability of uncertain blocks by color and spatial position features; (4) obtain the final ROI by binarization.
Figure 3: The framework of the NR IQA-based underwater smart display module. The input image is preassessed (pre-NR IQA); images above the enhancement threshold are displayed directly, while images below it are enhanced with an algorithm from the image enhancement algorithm database, chosen by the image enhancement algorithm selector according to the NR IQA results, before display.
where d_app(p, q) is the color similarity between blocks p and q, calculated as the Euclidean distance between the average colors of blocks p and q; d_spa(p, q) is the Euclidean distance between blocks p and q; and w_spa(p, q) is obtained after mapping according to (3), with \sigma_{spa}^2 = 0.25. Finally, we use the method of maximum between-class variance to get the final ROI.
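The block-level probability mappings of (1) and (3) can be sketched in Python. This is a minimal illustration with the paper's σ² values; the boundary connectivity `bnd_con` and block distance `d_spa` would come from the block graph of [19], which is not reproduced here.

```python
import math

SIGMA_BNDCON_SQ = 1.0  # sigma^2_BndCon, as set in the paper
SIGMA_SPA_SQ = 0.25    # sigma^2_spa, as set in the paper

def background_probability(bnd_con: float) -> float:
    """Map a block's boundary connectivity to a background probability, eq. (1)."""
    return 1.0 - math.exp(-bnd_con ** 2 / (2.0 * SIGMA_BNDCON_SQ))

def spatial_weight(d_spa: float) -> float:
    """Spatial closeness weight between two blocks, eq. (3)."""
    return math.exp(-d_spa ** 2 / (2.0 * SIGMA_SPA_SQ))
```

Both mappings are monotone: a block touching the image boundary strongly (high connectivity) approaches probability 1, and spatially distant blocks contribute weight near 0.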
3.2. Color Richness. As color attenuation worsens, the colors of a natural underwater image become fewer and fewer and the visibility of the object becomes worse. Therefore, the color richness of the ROI is a simple and fast way to measure whether the color distortion of a natural underwater image is serious, which makes it suitable for image quality evaluation.
In this paper, the richness of color is measured by the spatial characteristics of color in the CIE XYZ color space. Color richness should reflect not only color diversity but also the lightness distribution, so the XYZ color space is a good choice: CIE XYZ can represent all colors, and the Y parameter measures lightness. According to the XYZ color space distributions of the two images shown in Figure 5, a wide distribution of image color along each of the three dimensions X, Y, and Z does not by itself mean that the color richness is good, because the three components X, Y, and Z are correlated. The spatial characteristics of color therefore better represent the distribution of color. According to (4), the image color divergence in XYZ color space is defined to determine the color richness of the image:
$$C_d = \sum_{mn} \operatorname{dis}\!\left(P_{mn}^{\min}, P_{mn}^{\max}\right) \times \max_{i,j} \operatorname{dis}\!\left(P_{mn}(i, j), \left[P_{mn}^{\min}, P_{mn}^{\max}\right]\right) \times \frac{1}{2}, \qquad (4)$$
where dis represents the shortest distance between two points or between a point and a line, mn ranges over the X-Y, Y-Z, and X-Z sections, and P_{mn}^{\min} and P_{mn}^{\max} represent the points closest to and farthest from the origin, respectively.
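Equation (4) can be read as summing, over the three coordinate sections, the area of the triangle spanned by the points nearest to and farthest from the origin and the point farthest from the line joining them. The sketch below implements that reading; the paper's exact handling of ties and of the dis operator may differ.

```python
import math

def _dist(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    den = _dist(a, b)
    return num / den if den else _dist(p, a)

def color_divergence(xyz_points):
    """Eq. (4) as a triangle-area sum over the X-Y, Y-Z, and X-Z sections."""
    total = 0.0
    for i, j in ((0, 1), (1, 2), (0, 2)):  # X-Y, Y-Z, X-Z projections
        pts = [(p[i], p[j]) for p in xyz_points]
        p_min = min(pts, key=lambda p: math.hypot(*p))  # closest to origin
        p_max = max(pts, key=lambda p: math.hypot(*p))  # farthest from origin
        height = max(_point_line_dist(p, p_min, p_max) for p in pts)
        total += _dist(p_min, p_max) * height * 0.5
    return total
```

A degenerate cloud (all pixels the same color) yields zero divergence, matching the intuition that a monochrome ROI has no color richness.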
3.3. Color Fidelity. As mentioned in Section 1, the enhanced image may be oversaturated/pseudobright (as shown in Figure 2). If too much attention is paid to color richness, the color of the ROI in the image will deviate from the color of the real objects. Therefore, we should consider not only the color richness of the enhanced underwater image but also the color fidelity of the ROI, that is, whether the pixel intensities are within a reasonable range.
It is necessary to understand the formation and degradation of underwater images if we want to estimate a reasonable range of pixel intensities. The formation of the underwater image is dominated by the following factors [1, 20, 21]:
$$I_c = J_c e^{-\beta_c^{D} \times z} + B_c^{\infty}\left(1 - e^{-\beta_c^{B} \times z}\right), \qquad (5)$$
where C ∈ {R, G, B} is the color channel, I_c is the underwater image captured by the camera, J_c e^{-\beta_c^{D} z} is the direct signal (recorded as D_c), B_c^{\infty}(1 - e^{-\beta_c^{B} z}) is the backscattered signal (recorded as B_c), z is the distance between the camera and the photographed object, B_c^{\infty} is the veiling light, and J_c is the unattenuated scene, that is, the RGB intensity of the surface captured by a sensor with spectral response S_c(λ) at distance z_0 (generally z_0 is regarded as 0):
$$J_c = \frac{1}{k_c} \int_{\lambda_1}^{\lambda_2} S_c(\lambda)\, \rho(\lambda)\, E(d, \lambda)\, d\lambda, \qquad (6)$$
where k_c is the camera's scaling constant. \beta_c^{D} and \beta_c^{B} depend on the distance z, the reflectivity ρ(λ), the ambient light E(d, λ), the camera's spectral response S_c(λ), the scattering coefficient b(λ), and the beam attenuation coefficient β(λ), as shown in (7) and (8), where z_0 and (z_0 + z) are the starting and ending points along the line of sight:
$$\beta_c^{D} = \frac{\ln\!\left[\int_{\lambda_1}^{\lambda_2} S_c(\lambda)\rho(\lambda)E(d,\lambda)e^{-\beta(\lambda)z_0}\,d\lambda \Big/ \int_{\lambda_1}^{\lambda_2} S_c(\lambda)\rho(\lambda)E(d,\lambda)e^{-\beta(\lambda)(z_0+z)}\,d\lambda\right]}{z}, \qquad (7)$$
$$\beta_c^{B} = -\frac{\ln\!\left[\int_{\lambda_1}^{\lambda_2} S_c(\lambda)B^{\infty}(\lambda)e^{-\beta(\lambda)z}\,d\lambda \Big/ \int_{\lambda_1}^{\lambda_2} S_c(\lambda)B^{\infty}(\lambda)\,d\lambda\right]}{z}. \qquad (8)$$
So we can calculate the unattenuated scene J_c as

$$J_c = D_c e^{\beta_c^{D} \times z} = \left[I_c - B_c^{\infty}\left(1 - e^{-\beta_c^{B} \times z}\right)\right] \times e^{\beta_c^{D} \times z}. \qquad (9)$$
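The forward model (5) and its inversion (9) are straightforward to express per pixel and per channel. The sketch below assumes scalar coefficients that would in practice come from (7) and (8):

```python
import math

def underwater_pixel(j_c, beta_d, beta_b, b_inf, z):
    """Forward model, eq. (5): attenuated direct signal plus backscatter."""
    return j_c * math.exp(-beta_d * z) + b_inf * (1.0 - math.exp(-beta_b * z))

def recover_scene(i_c, beta_d, beta_b, b_inf, z):
    """Invert eq. (5) to recover the unattenuated scene J_c, eq. (9)."""
    return (i_c - b_inf * (1.0 - math.exp(-beta_b * z))) * math.exp(beta_d * z)
```

A useful sanity check is the round trip (recovering J_c from a synthesized I_c) and the saturation behavior at large z, where I_c approaches the veiling light B_c^∞ regardless of the scene.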
We need to estimate the reasonable range of values [J'_{c,min}, J'_{c,max}] of the RGB intensity of each pixel in the foreground (that is, the ROI). The color fidelity metric defined by (10) is calculated from the part of the enhanced underwater image Ĵ_c that falls outside this range. The process is shown in Figure 6.
Firstly, the veiling light B_c^{\infty} of the underwater image I_c is estimated. Backscatter increases exponentially with z and eventually saturates [1]; in other words, at infinity I_c = B_c^{\infty}. Referring to [22], we assume an area without objects is visible in the image, in which the pixels' color is determined by the veiling light alone. Such areas are smooth and have no texture, and this assumption often holds in the application scenarios of our IQA. First, the edge map of the image is generated; then a threshold is set and the largest connected pixel area is found. The veiling light is the average color of the pixels in these areas.
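A minimal sketch of this veiling-light estimate follows. The paper does not specify the edge detector or threshold, so this version marks low-gradient pixels as smooth and averages the RGB color of the largest 4-connected smooth region; `gray` and `rgb` are row-major 2-D lists.

```python
from collections import deque

def estimate_veiling_light(gray, rgb, edge_thresh=10):
    """Average RGB color of the largest connected smooth (low-gradient) region."""
    h, w = len(gray), len(gray[0])
    # Mark pixels whose horizontal + vertical gradient is below the threshold.
    smooth = [[abs(gray[y][x] - gray[y][min(x + 1, w - 1)]) +
               abs(gray[y][x] - gray[min(y + 1, h - 1)][x]) < edge_thresh
               for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    best = []
    for y0 in range(h):
        for x0 in range(w):
            if not smooth[y0][x0] or seen[y0][x0]:
                continue
            # BFS over the 4-connected smooth component containing (y0, x0).
            region, queue = [], deque([(y0, x0)])
            seen[y0][x0] = True
            while queue:
                y, x = queue.popleft()
                region.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and smooth[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(region) > len(best):
                best = region
    n = len(best)
    return tuple(sum(rgb[y][x][c] for y, x in best) / n for c in range(3))
```

On a fully smooth image, the whole frame is one region and the estimate is simply the mean color.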
Next, we estimate the type of water body in the underwater image I_c from the veiling light B_c^{\infty}. The reason for estimating the water type is that the common notion that water attenuates red faster than blue/green only holds for oceanic water types [1]. We simplified the Jerlov water types [23] into blue water (Jerlov I–III), green water (Jerlov 1C–3C), and yellow water (Jerlov 5C–9C) and simulated the average RGB value of a perfect white surface under these three water types (data from [23], using a D65 light source, a Canon 5D Mark II camera, and ρ(λ) = 1). We calculate the Euclidean distance between the veiling light B_c^{\infty} and each average RGB value and take the water body type with the smallest distance.
Then we calculate the reasonable intensity range [J'_{c,min}, J'_{c,max}] of each pixel after enhancement of the underwater image. \beta_c^{D} varies most strongly with range z [1], so besides the water body type, the most important quantity for calculating the range is the distance z. Due to the limitations of real conditions, the distance z of the
Figure 5: Color distribution in CIE XYZ space for two example images ((a), (b)) and their X, Y, and Z scatter plots ((c), (d)).
object in the image cannot be obtained, so it is necessary to roughly estimate the possible range of the distance z. For the foreground, the distance z from the camera is approximately the same; for the background, the distance z tends to infinity. We assume that the distance z from the camera is the same for every part of the foreground and that white objects may be present. Therefore, any distance z that keeps J_c of the foreground pixels in the three RGB channels not greater than 255 and not less than 0 is considered a possible distance. In order to simplify the calculation, the attenuation coefficient \beta_c^{D} of white in the three color channels C ∈ {R, G, B} is adopted for all colors (using ρ(λ) from the Macbeth ColorChecker).
Finally, the color fidelity defined by (10) is calculated:
$$C_f = \left[1 - \frac{\mathrm{Sum}_{oor}/255}{\mathrm{Num}_{oor} \times 3}\right]^2, \qquad (10)$$
where Num_oor represents the number of pixels in the ROI block and Sum_oor represents the total pixel intensity deviation falling outside the reasonable range.
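Equation (10) can be sketched as follows, assuming for simplicity one [J'_min, J'_max] range per channel rather than per pixel (the paper estimates the range pixelwise):

```python
def color_fidelity(roi_pixels, j_min, j_max):
    """Eq. (10): penalize total per-channel intensity deviation outside
    [j_min, j_max]; returns 1.0 when every channel is in range."""
    n = len(roi_pixels)
    sum_oor = 0.0
    for px in roi_pixels:
        for c, v in enumerate(px):
            if v < j_min[c]:
                sum_oor += j_min[c] - v
            elif v > j_max[c]:
                sum_oor += v - j_max[c]
    return (1.0 - (sum_oor / 255.0) / (n * 3.0)) ** 2
```

An in-range ROI scores exactly 1; oversaturated pixels pull the score below 1 in proportion to how far they exceed the estimated range.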
We make some qualitative analysis of the influence of simplification on J'_{c,min} and J'_{c,max} during the calculation. As shown in Figure 7(b), (7) is used to calculate the broadband (RGB) attenuation coefficient \beta_c^{D} (using ρ(λ) of the color patches in the Macbeth ColorChecker, depth d = 1 m, distance z = 1 m) of seven common colors (red, orange, yellow, green, cyan, blue, and purple; Figure 7(a)) under all Jerlov water types. It can be seen that the \beta_c^{D} difference between colors is not large in the same scene. Figure 7(c) shows the influence of different camera types on \beta_c^{D} in the three types of water bodies; the influence of camera parameters on \beta_c^{D} is not significant. The experimental results in [1] also support this view.
3.4. NIPQ Metric. Sections 3.1, 3.2, and 3.3 above respectively introduced the ROI extraction method, color richness in the statistical sense, and color fidelity in the pixel sense. In this paper, the color richness of the ROI and the color fidelity of the ROI are combined by a multiplication model to obtain our NIPQ. The common multiparameter underwater image evaluation models UIQM [11], UCIQE [12], and CCF [13] use linear weighting to measure the comprehensive quality of the image. We consider that if one submetric takes a very low value (indicating low quality), the subjective impression of the whole image will be very poor regardless of the other metrics. Therefore, this paper uses a multiplication model to generate the overall underwater image quality assessment:

$$CR = \overline{C_d(\mathrm{ROI})} \times C_f(\mathrm{ROI}), \qquad (11)$$

where the overline represents normalization. CR represents the color quality of the ROI block, CR ∈ (0, 1), and the larger the value, the higher the image quality.
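A sketch of the multiplication model in (11); `c_d_max` is an assumed normalization constant for the richness term, not specified in the text:

```python
def nipq_score(c_d_roi, c_d_max, c_f_roi):
    """Eq. (11): normalized ROI color richness times ROI color fidelity."""
    c_d_norm = min(c_d_roi / c_d_max, 1.0) if c_d_max else 0.0
    return c_d_norm * c_f_roi
```

Unlike a weighted sum, the product collapses toward 0 whenever either submetric is near 0, matching the argument that one very poor aspect dominates the subjective impression.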
The overall process of NIPQ is shown in Figure 8 and is divided into four steps. Firstly, the ROI of the original (unenhanced) image is extracted based on background/foreground. Then the color richness of the ROI of the enhanced underwater image is estimated. Next, the ocean background information is extracted from the original image, from which the water body type and the reasonable range of pixel intensity are estimated; according to this range, the ROI color fidelity of the enhanced underwater image is computed. Finally, the two metrics, color richness and color fidelity, are integrated to obtain the comprehensive NIPQ metric for the whole underwater image.
4 UOQ Database
In order to better evaluate the performance of the NIPQ metric, we built an underwater optical image quality database, UOQ.
Image Selection. In order to fully consider various underwater scenes, we selected 36 typical underwater optical images with a size of 512 × 512. These images include blue water, green water, yellow water, dark light, single object, multiobject, simple and complex texture, serious
Figure 6: The estimation process of color fidelity: (1) veiling-light estimation; (2) determination of the water body type (blue, green, or yellow); (3) estimation of the reasonable range of pixel intensity of the ROI after underwater image enhancement; (4) calculation of the deviation beyond the reasonable range.
color distortion, and slight color distortion. Considering that there is no general ROI-related dataset in the field of underwater images, we labeled their foreground regions (ROI) pixel by pixel to support the reliability of the ROI used in this paper. We then used five popular image enhancement algorithms (white balance [24], Fu's algorithm [25], multifusion [26], histogram equalization [27], and Retinex [28]) to process these 36 natural images, obtaining 180 enhanced images. Some images and their enhanced versions processed by the white balance algorithm [24] are shown in Figure 9.
Evaluation Methods and Evaluation Protocols. In this database, the single-stimulus evaluation method is used. Volunteers watch only one image to be evaluated at a time, and each image appears only once in a round of evaluation. After each image is displayed, the volunteers give a subjective quality score to the corresponding image. Underwater optical images usually have practical applications, so volunteers are not influenced by aesthetic factors in the process of subjective quality assessment; the evaluation protocols are shown in Table 1.
Choosing Volunteers. In order to avoid evaluation bias caused by prior knowledge, none of the volunteers had experience of image quality assessment. Considering the strong application background of underwater images, all selected volunteers are graduate students with relevant work experience in underwater acoustic communication, underwater detection, and so on.
All the obtained subjective scores are used to calculate the mean opinion scores (MOS). Denote by S_{ij} the subjective score given to image j by the i-th volunteer and by N_j the number of subjective scores obtained by image j. MOS is calculated as follows:
$$\mathrm{MOS}_j = \frac{1}{N_j} \sum_i S_{ij}. \qquad (12)$$
We draw a histogram of the MOS of all images in the database, as shown in Figure 10. It can be seen that our images cover a wide range of quality, which is conducive to the design of IQA. There are many images with scores in the middle segment because volunteers tend to avoid giving extreme scores. It also can be
Figure 8: Overall process of NIPQ. The ROI is extracted from input image 1 (the original); the image information of the ROI is taken from input image 2 (the enhanced image); color fidelity and color richness are computed and integrated into the quality score Q.
Figure 7: Qualitative analysis of the influence of simplification on \beta_c^{D}. (a) Seven common colors (red, orange, yellow, green, cyan, blue, and purple). (b) \beta_c^{D} of different colors in different water bodies (Jerlov I, IA, IB, II, III, 1C, 3C, 5C, 7C, and 9C). (c) \beta_c^{D} of different colors under different cameras. We simplify the types of water into three. It can be seen from (b) that the calculation error caused by the simplification is larger for yellow water than for blue and green water bodies; yellow water is not common in underwater images, so the simplification of water body type is applicable to most occasions.
seen that lower-quality images are slightly more numerous than higher-quality underwater images. This is because most underwater images have a blue-green cast and poor contrast, and sometimes the quality of the enhanced image is still not ideal. In practical applications, more robust enhancement algorithms will be built into the underwater image enhancement algorithm database of the display module mentioned in Section 2.
5 Experiment
In combination with the UOQ database, we mainly evaluate the performance of IQA through five criteria. The prediction monotonicity of IQA is measured by the Spearman rank-order correlation coefficient (SROCC) and Kendall's rank-order correlation coefficient (KROCC). The prediction accuracy of IQA is measured by the Pearson linear correlation
Table 1: Evaluation protocols.

Score 5: The subjective feeling is excellent; foreground information is recognizable and no color distortion is felt.
Score 4: The subjective feeling is good; the foreground information is visible and recognizable; there is a small amount of perceptible distortion, but it does not affect the extraction of important information.
Score 3: The subjective feeling is average; part of the information in the foreground is damaged, and a small amount of important information is lost due to distortion.
Score 2: The subjective perception is poor, and only the general outline of the foreground content can be distinguished; the distortion leads to the loss of some important information.
Score 1: The subjective feeling is very poor; it is difficult to recognize the foreground content, and it is almost impossible to extract any effective information from the image.
Figure 10: (a) Frequency histogram of MOS and (b) MOS of all images.
Figure 9: Underwater images processed by the white balance algorithm [24]. (a)–(f) are the original images; their MOS are 2.40, 1.70, 3.00, 1.30, 2.55, and 4.05, respectively. (g)–(l) are the enhanced images; their MOS are 1.05, 1.15, 2.05, 2.80, 4.55, and 1.15, respectively.
coefficient (PLCC). The root mean square error (RMSE) is used to measure the prediction consistency of IQA, and the mean absolute error (MAE) is also used to evaluate performance. High values (close to 1) of SROCC, PLCC, and KROCC and low values (close to 0) of RMSE and MAE indicate that an IQA metric correlates well with subjective scores.
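These criteria are standard; minimal pure-Python versions of PLCC, SROCC (computed as the PLCC of rank vectors, ignoring ties), and RMSE are sketched below.

```python
import math

def plcc(x, y):
    """Pearson linear correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def srocc(x, y):
    """Spearman rank-order correlation: PLCC of the rank vectors (no tie handling)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return plcc(ranks(x), ranks(y))

def rmse(x, y):
    """Root mean square error between predicted and subjective scores."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))
```

SROCC rewards any monotone relation between objective and subjective scores, while PLCC rewards a linear one; reporting both separates ranking ability from calibration.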
The selected IQA metrics for performance comparison include the following:
(1) The popular underwater no-reference metrics: UIQM [11], UCIQE [12], and CCF [13].
(2) The popular no-reference metrics for images in the air: BRISQUE [6] and LPSI [7].
(3) Common color metrics for underwater images: UICM [11] and variance of chromaticity (Var_Chr) [29].
For BRISQUE, a low score means high quality; for the other metrics, the higher the score, the better the quality.
5.1. Effect Analysis of Introducing ROI into IQA. In order to observe the influence of introducing the ROI on the quality evaluation of underwater images, we need to combine the ROI with the popular underwater no-reference IQA metrics. The block strategy mentioned in Section 3.1 is necessary because it helps us combine ROI with IQA. According to the block fusion strategy represented by (13), we combine image blocks with IQA and obtain a comprehensive quality score. We can then observe the change in correlation between the objective metrics and MOS before and after combining with the ROI:
$$\mathrm{ROIQ} = \frac{\sum_{m \times n} W_R(i) \times Q(i)}{\sum_{m \times n} W_R(i)}, \qquad (13)$$
where W_R(i) represents the weight of the i-th image block and Q(i) represents the objective quality score of that block under the metric; W_R(i) is 0 or 1. That is to say, the difference between an IQA metric before and after combining with the ROI is that the original metric calculates the quality of the whole image, while the metric combined with the ROI only calculates the image quality of the ROI. The results are shown in the first six lines of Table 2: the correlation between each metric combined with the ROI and MOS is higher than that of the original metric. This shows that combining ROI and IQA is helpful for IQA.
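With 0/1 weights, (13) reduces to a masked mean over ROI blocks, as in this sketch:

```python
def roi_quality(block_scores, roi_mask):
    """Eq. (13) with binary weights: mean block quality over ROI blocks only."""
    total = sum(q for q, w in zip(block_scores, roi_mask) if w)
    count = sum(1 for w in roi_mask if w)
    return total / count if count else 0.0
```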
5.2. Performance Analysis of the Proposed NIPQ. We calculated the correlation between the various metrics and MOS in the database; the results are shown in Table 2. It can be seen that the correlation between the NIPQ metric and the subjective scores is significantly higher than that of the other metrics.
In order to compare the various NR IQA metrics intuitively, scatter diagrams between MOS and the estimated objective scores are drawn for the six selected NR IQA metrics and the NIPQ proposed in this paper, as shown in Figure 11. On this basis, the experimental data were regressed by the least squares method, and the fitted lines are also drawn; the better the scatter points fit the line, the better the correlation between the metric and MOS. The regression lines show that the correlation between NIPQ and MOS is obviously better than the other metrics, validating the results in Table 2. LPSI and BRISQUE are metrics designed for images in the air and are not applicable to underwater images. As a whole, UIQM, UCIQE, and CCF are specially designed for underwater images, and their performance is better than that of the in-air metrics. The performance of UICM, a submetric indicating chromaticity in UIQM, is slightly worse than that of UIQM. Compared with the scatter plots of the other NR IQA metrics, our NIPQ shows the best correlation with MOS. Although there are still some aberrant data points, generally speaking, the proposed NIPQ is robust to the variety of typical underwater images contained in the database. Further analysis shows that some of these aberrant points are caused by the fact that the submetric C_f of the original image (without enhancement) is directly taken as 1 in our experiment.
As shown in Figures 12 and 13, there are two natural underwater images and their enhanced images in the UOQ database. Table 3 shows the corresponding MOS and objective scores of these images, and Figure 14 shows the color distribution of their ROIs. From these images, the ROI of the original image of (1) is dark and that of (2) is blue. The image enhanced by the histogram algorithm is reddish; the color distribution of the ROI is wider, but the color of the ROI is obviously oversaturated/pseudobright. There is no significant difference between the image processed by the Retinex algorithm and the original image. The color of the image processed by Fu's algorithm is not vibrant. For Figure 12, the overall difference between the white balance and the multifusion algorithm is small; the local view (Figure 15) shows that the brightness distribution of the image processed by the multifusion algorithm is uneven and slightly oversaturated, while the image enhanced by the white balance algorithm has a better visual effect. For Figure 13, the image processed by the white balance algorithm is too dark and has
Table 2: Correlation between MOS and quality scores of objective evaluation metrics before and after integration with ROI.
            PLCC     SROCC    KROCC    MAE      RMSE
UIQM       -0.173   -0.199   -0.132    0.751    0.903
ROI_UIQM    0.277    0.280    0.196    0.739    0.897
UCIQE       0.294    0.207    0.145    0.707    0.868
ROI_UCIQE   0.374    0.274    0.192    0.683    0.840
CCF         0.069    0.075    0.050    0.791    0.946
ROI_CCF     0.393    0.358    0.254    0.722    0.872
Var_Chr     0.158    0.180    0.125    0.674    0.841
UICM       -0.283   -0.338   -0.225    0.714    0.854
BRISQUE    -0.309   -0.265   -0.185    0.747    0.902
LPSI        0.323    0.245    0.169    0.734    0.898
C_d         0.481    0.465    0.335    0.635    0.789
C_f         0.478    0.432    0.303    0.658    0.806
Proposed    0.641    0.623    0.452    0.576    0.713
Scientific Programming 11
Figure 11: Scatter diagrams between MOS and estimated objective scores: (a) LPSI, (b) BRISQUE, (c) UICM, (d) UIQM, (e) UCIQE, (f) CCF, (g) C_f, (h) C_d, and (i) proposed.
Figure 12: (a) Original; (b) multifusion; (c) Fu; (d) white balance; (e) histogram equalization; (f) Retinex.
Figure 13: (a) Original; (b) multifusion; (c) Fu; (d) white balance; (e) histogram equalization; (f) Retinex.
Table 3: The corresponding MOS and objective scores of Figures 12 and 13.
Figure 12:
             Ori      Multifusion  Fu       White balance  Histogram eq.  Retinex
MOS          2.550    4.500        4.100    4.550          3.050          2.500
CCF          13.265   22.292       23.887   16.437         30.794         13.379
ROI_CCF      20.821   33.389       31.274   29.108         26.409         21.604
UCIQE        0.554    0.664        0.591    0.652          0.684          0.569
ROI_UCIQE    0.560    0.647        0.573    0.627          0.580          0.575
UIQM         3.983    4.543        4.850    3.969          4.780          4.085
ROI_UIQM     5.585    5.589        5.495    5.672          5.055          5.620
BRISQUE      16.303   26.934       31.708   17.824         36.762         16.744
LPSI         0.926    0.901        0.910    0.923          0.912          0.926
C_d          0.243    0.715        0.578    0.698          0.846          0.324
C_f          1.000    0.802        0.637    0.827          0.464          0.994
Proposed     0.243    0.574        0.368    0.577          0.392          0.322

Figure 13:
             Ori      Multifusion  Fu       White balance  Histogram eq.  Retinex
MOS          3.200    3.800        1.550    2.150          2.700          3.250
CCF          31.443   31.465       37.069   18.688         36.928         29.029
ROI_CCF      22.582   35.995       32.468   13.265         38.366         23.097
UCIQE        0.519    0.628        0.623    0.476          0.693          0.541
ROI_UCIQE    0.541    0.620        0.588    0.447          0.676          0.564
UIQM         1.504    3.337        4.325    3.840          4.100          2.182
ROI_UIQM     6.658    5.235        5.249    5.349          4.789          5.160
BRISQUE      4.330    14.749       17.319   4.153          20.596         4.441
LPSI         0.923    0.887        0.911    0.904          0.906          0.926
C_d          0.475    0.730        0.317    0.029          0.640          0.401
C_f          1.000    0.847        0.632    0.581          0.601          0.972
Proposed     0.475    0.619        0.200    0.017          0.384          0.390
Figure 14: ROI color distribution of Figures 12 and 13.
Figure 15: Local views of Figures 12(b) and 12(d).
a single color. The image processed by the multifusion algorithm has a better visual effect.
Table 3 shows that the selected IQAs do not perform well in the quality assessment of images in the UOQ database. They generally give higher objective scores to images enhanced by the histogram equalization algorithm because the color distributions of those images are wider. This is a disadvantage of quality evaluation based on statistics: color fidelity is not taken into account. It can also be seen that if the performance of the original metric is not ideal, combining the metric with ROI will not necessarily improve the situation, because this is a limitation of the original metric itself.
6 Conclusion
Because of the characteristics of the water medium, color has become one of the important concerns in underwater image quality assessment. Color contains important information: severe selective color attenuation or pseudo-vividness can make it difficult to identify foreground content and extract key, effective information from images. In this paper, a new underwater image evaluation metric, NIPQ, is proposed based on the characteristics of the underwater environment and the HVS. The NIPQ is designed in a four-stage framework. The first stage focuses on the attention mechanism of the HVS. The second stage considers the influence of color richness in a statistical sense. The third stage is inspired by underwater image formation models and considers color fidelity from a pixel perspective. Finally, in the fourth stage, color richness and color fidelity are systematically integrated for real-time quality monitoring. At the same time, the underwater image database UOQ, with MOS, is built to measure IQA performance. Experimental results show that, compared with other commonly used underwater metrics, the proposed NIPQ has a better correlation with MOS, which demonstrates better performance.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61571377, 61771412, and 61871336) and the Fundamental Research Funds for the Central Universities (20720180068).
References
[1] D. Akkaynak and T. Treibitz, "A revised underwater image formation model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6723–6732, Salt Lake City, UT, USA, 2018.
[2] S. Bazeille, I. Quidu, L. Jaulin, and J.-P. Malkasse, "Automatic underwater image pre-processing," CMM'06, Brest, France, 2006.
[3] I. Avcibas, B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," Journal of Electronic Imaging, vol. 11, no. 2, pp. 206–223, 2002.
[4] D.-Y. Tsai, Y. Lee, and E. Matsuyama, "Information entropy measure for evaluation of image quality," Journal of Digital Imaging, vol. 21, no. 3, pp. 338–347, 2008.
[5] Y. Y. Fu, "Color image quality measures and retrieval," New Jersey Institute of Technology, Newark, NJ, USA, 2006.
[6] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012.
[7] Q. Wu, Z. Wang, and H. Li, "A highly efficient method for blind image quality assessment," in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 339–343, IEEE, Quebec City, Canada, September 2015.
[8] J. Kim, A.-D. Nguyen, and S. Lee, "Deep CNN-based blind image quality predictor," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 11–24, 2018.
[9] S. Bosse, D. Maniry, K.-R. Müller, T. Wiegand, and W. Samek, "Deep neural networks for no-reference and full-reference image quality assessment," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206–219, 2017.
[10] X. Liu, J. van de Weijer, and A. D. Bagdanov, "RankIQA: learning from rankings for no-reference image quality assessment," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1040–1049, Venice, Italy, October 2017.
[11] K. Panetta, C. Gao, and S. Agaian, "Human-visual-system-inspired underwater image quality measures," IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541–551, 2016.
[12] M. Yang and A. Sowmya, "An underwater color image quality evaluation metric," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 6062–6071, 2015.
[13] Y. Wang, N. Li, Z. Li et al., "An imaging-inspired no-reference underwater color image quality assessment metric," Computers & Electrical Engineering, vol. 70, pp. 904–913, 2018.
[14] S. Kastner and L. G. Ungerleider, "Mechanisms of visual attention in the human cortex," Annual Review of Neuroscience, vol. 23, no. 1, pp. 315–341, 2000.
[15] L. Zhang, J. Chen, and B. Qiu, "Region-of-interest coding based on saliency detection and directional wavelet for remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 1, pp. 23–27, 2016.
[16] C. Zhu, K. Huang, and G. Li, "An innovative saliency guided ROI selection model for panoramic images compression," in Proceedings of the 2018 Data Compression Conference, p. 436, IEEE, Snowbird, UT, USA, March 2018.
[17] Z. Cui, J. Wu, H. Yu, Y. Zhou, and L. Liang, "Underwater image saliency detection based on improved histogram equalization," in Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, pp. 157–165, Springer, Singapore, 2019.
[18] L. Xiu, H. Jing, S. Min, and Z. Yang, "Saliency segmentation and foreground extraction of underwater image based on localization," in Proceedings of OCEANS 2016, Shanghai, China, 2016.
[19] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2814–2821, Columbus, OH, USA, June 2014.
[20] D. Akkaynak and T. Treibitz, "Sea-thru: a method for removing water from underwater images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1682–1691, Long Beach, CA, USA, 2019.
[21] D. Akkaynak, T. Treibitz, T. Shlesinger, Y. Loya, R. Tamir, and D. Iluz, "What is the space of attenuation coefficients in underwater computer vision?" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4931–4940, Honolulu, HI, USA, 2017.
[22] D. Berman, T. Treibitz, and S. Avidan, "Diving into haze-lines: color restoration of underwater images," in Proceedings of the British Machine Vision Conference (BMVC), vol. 1, London, UK, September 2017.
[23] M. G. Solonenko and C. D. Mobley, "Inherent optical properties of Jerlov water types," Applied Optics, vol. 54, no. 17, pp. 5392–5401, 2015.
[24] E. Y. Lam, "Combining gray world and Retinex theory for automatic white balance in digital photography," in Proceedings of the Ninth International Symposium on Consumer Electronics, pp. 134–139, IEEE, Melbourne, Australia, July 2005.
[25] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, "A retinex-based enhancing approach for single underwater image," in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 4572–4576, IEEE, Paris, France, October 2014.
[26] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, "Enhancing underwater images and videos by fusion," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 81–88, IEEE, Providence, RI, USA, June 2012.
[27] B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68–75, 1999.
[28] E. H. Land, "The retinex theory of color vision," Scientific American, vol. 237, no. 6, pp. 108–128, 1977.
[29] D. Hasler and S. E. Süsstrunk, "Measuring colorfulness in natural images," in Human Vision and Electronic Imaging VIII, vol. 5007, pp. 87–95, International Society for Optics and Photonics, Bellingham, WA, USA, 2003.
(b) By adopting an ROI extraction method suited to the underwater IQA field, ROI and IQA are effectively combined through the block strategy in the ROI extraction method.
(c) It is superior to many commonly used IQA metrics, can effectively evaluate the performance of image enhancement algorithms, and can be used for quality supervision.
(d) We propose an NR IQA-based underwater smart image display module, which embodies the role of our IQA in application.
We arrange the remainder of this paper as follows. Section 2 describes the NR IQA-based underwater smart image display module, which is the application background of NR IQA. Section 3 describes our NIPQ metric in detail. Section 4 describes the establishment of our database UOQ for evaluating IQA performance, which consists of underwater optical images and their enhanced versions. In Section 5, performance comparisons of the NIPQ metric with selected existing NR IQA methods are performed. We conclude this paper in Section 6.
2 NR IQA-Based Underwater Smart Image Display Module
Most of the underwater images captured by optical sensors have practical applications. For underwater images with severe color distortion, image enhancement is often needed before the terminal display; however, not all underwater images need to be enhanced. We believe that whether an image needs enhancement depends on the visibility of the underwater objects. Besides, because no image enhancement algorithm achieves good results in all scenes, NR IQA can be used to guide image enhancement so that the system can automatically select a more appropriate enhancement algorithm in real time. From the application point of view, the framework of the NR IQA-based display module is shown in Figure 3. A traditional image display module only provides a single image enhancement scheme or displays the image directly, which cannot flexibly cope with different water environments. The display module proposed in this paper builds various image enhancement algorithms into the Image Enhancement Algorithm Database, and the system chooses a more appropriate enhancement algorithm according to the results of NR IQA. First, the input underwater image is preassessed, and natural images with little color distortion are displayed directly. For natural images with severe color distortion, the default algorithm in the Image Enhancement Algorithm Database is used to enhance the image. The Selector then automatically determines whether to enable an alternative image enhancement scheme, and which alternative to choose, according to the results of NR IQA.
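The selection logic described above can be sketched as follows; the threshold value, the scoring callables, and the algorithm names are placeholder assumptions, not parameters fixed by the paper.

```python
# Sketch of the display-module control flow: pre-assess, enhance if
# needed, and let NR IQA pick among candidate enhancement algorithms.
ENHANCEMENT_THRESHOLD = 0.5  # assumed pre-NR IQA cutoff, illustrative

def display_pipeline(image, pre_nr_iqa, nr_iqa, algorithm_db):
    """Return the image to display and the name of the chosen
    enhancement algorithm (None if no enhancement was needed)."""
    if pre_nr_iqa(image) >= ENHANCEMENT_THRESHOLD:
        return image, None                      # natural image: display directly
    best_name, best_img, best_score = None, None, float("-inf")
    for name, enhance in algorithm_db.items():  # try candidate algorithms
        candidate = enhance(image)
        score = nr_iqa(candidate)
        if score > best_score:
            best_name, best_img, best_score = name, candidate, score
    return best_img, best_name

# Toy usage with stand-in scoring functions and "algorithms".
db = {"white_balance": lambda x: x + 1, "retinex": lambda x: x * 2}
img, chosen = display_pipeline(3, lambda x: 0.2, lambda x: float(x), db)
```

In the real module the default algorithm would run first and the Selector would only fall back to alternatives when its NR IQA score is below the enhancement threshold; the exhaustive loop above is a simplification.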
According to the above analysis of the NR IQA-based underwater smart image display module, and considering the characteristics of underwater images discussed in Section 1, this paper uses the color richness of the ROI and the color fidelity of the ROI to estimate image quality. In the proposed display module, the color richness of the ROI serves as the metric for pre-NR IQA, and the NIPQ, which combines ROI color richness and color fidelity, serves as the metric for NR IQA.

The ROI extraction method is based on background/foreground separation; its block strategy helps ROI and IQA combine better, and it is introduced in detail in Section 3.1. The color richness represents the distribution of image color in a statistical sense, is described by the spatial characteristics of the image in CIE XYZ space, and is detailed in Section 3.2. The color fidelity is based on the underwater image formation model at the pixel level and describes whether the pixel intensities are within a reasonable range; it is introduced in detail in Section 3.3.
3 Proposed NIPQ Metric
3.1. ROI Extraction Based on Background/Foreground. Considering that the final receiver of the display module of an ROV is often a human, it is particularly important that IQA reflects the perception of the human eye. The mechanism of visual attention is an important feature of the HVS: it enables the brain to quickly grasp the overall information of an image, obtain the regions that need attention, and then focus on the target information while suppressing other background information. Therefore, the human eye is usually sensitive to damage in the regions of concern. At the same time, compared with the ocean background, a high-quality ROI has better practical significance and value. Therefore, it is necessary to introduce ROI into image quality assessment.
Researchers usually obtain the ROI by saliency detection or object detection [15, 16]. Different from images in air, most underwater images have low contrast, and traditional saliency detection methods designed for air are not applicable underwater. At present, there is no robust saliency detection algorithm for underwater images. Some researchers combine image enhancement with saliency detection [17]; others combine fish localization and saliency detection [18]. In IQA, the purpose of the metric is to evaluate the enhancement algorithm, and the ROI of an underwater image is not always one or several fixed categories of targets with predictable shapes; therefore, the above methods are not applicable. Considering that underwater background features are easier to recognize than targets, this paper extracts the ROI based on background/foreground, and the process is shown in Figure 4. In order to better combine ROI and IQA in the subsequent steps, the preprocessed underwater image is divided into m × n image blocks (we call this the block strategy).
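The block strategy can be sketched as follows; truncating leftover rows and columns is an assumption of this sketch, since the paper does not specify how non-divisible image sizes are handled.

```python
import numpy as np

def split_into_blocks(img, m, n):
    """Divide an H x W image into an m x n grid of blocks (the block
    strategy); trailing rows/columns that do not fill a block are
    truncated for simplicity."""
    h, w = img.shape[:2]
    bh, bw = h // m, w // n
    return [img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            for i in range(m) for j in range(n)]

# A 512 x 512 image split into an 8 x 8 grid of 64 x 64 blocks.
blocks = split_into_blocks(np.zeros((512, 512, 3)), 8, 8)
```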
Then we map the boundary connectivity BndCon(p_i) (defined in [19]) of each block region p_i through (1) to obtain the background probability w_i^{bg}:

w_i^{bg} = 1 - \exp\left( - \frac{\mathrm{BndCon}^2(p_i)}{2\sigma_{\mathrm{BndCon}}^2} \right),  (1)

where σ²_BndCon = 1. The background blocks, target blocks, and uncertain blocks are initially divided by comparing w_i^{bg} with threshold_bg and threshold_roi. Then the color feature and spatial position feature of each block are used to correct the uncertain blocks and estimate their background probability, which is expressed mathematically as

W_{BG}(p) =
\begin{cases}
1, & w_i^{bg} > \mathrm{threshold}_{bg}, \\
1 - \sum_{i=1}^{N} d_{app}(p, p_i)\, w_{spa}(p, p_i)\, w_i^{bg}, & \mathrm{threshold}_{roi} < w_i^{bg} < \mathrm{threshold}_{bg}, \\
0, & w_i^{bg} < \mathrm{threshold}_{roi},
\end{cases}  (2)

w_{spa}(p, p_i) = \exp\left( - \frac{d_{spa}^2(p, p_i)}{2\sigma_{spa}^2} \right),  (3)
Figure 4: ROI extraction process: (1) divide the image into blocks and estimate the background probability of each block; (2) preliminarily divide the image into background blocks, target blocks, and uncertain blocks; (3) estimate the background probability of uncertain blocks using color and spatial position features; (4) obtain the final ROI by binarization.
Figure 3: The framework of the NR IQA-based underwater smart display module.
where d_app(p, q) is the color similarity between blocks p and q, calculated as the Euclidean distance between the average colors of blocks p and q; d_spa(p, q) is the Euclidean distance between blocks p and q; and w_spa(p, q) is obtained from the mapping in (3), with σ²_spa = 0.25. Finally, we use the maximum between-class variance method to obtain the final ROI.
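A minimal sketch of the mappings in (1) and (3) and the three-way block division behind (2); the values of threshold_bg and threshold_roi are illustrative assumptions, as the paper does not state them here.

```python
import numpy as np

SIGMA_BNDCON2 = 1.0   # sigma^2_BndCon in (1)
SIGMA_SPA2 = 0.25     # sigma^2_spa in (3)

def background_probability(bndcon):
    """Map each block's boundary connectivity to a background
    probability, per (1)."""
    return 1.0 - np.exp(-bndcon ** 2 / (2.0 * SIGMA_BNDCON2))

def spatial_weight(d_spa):
    """Spatial weight between two blocks, per (3)."""
    return np.exp(-d_spa ** 2 / (2.0 * SIGMA_SPA2))

def classify_block(w_bg, threshold_bg=0.8, threshold_roi=0.2):
    """Preliminary division into background / target / uncertain
    blocks; the two thresholds are assumed values."""
    if w_bg > threshold_bg:
        return "background"
    if w_bg < threshold_roi:
        return "target"
    return "uncertain"

w_bg = background_probability(np.array([0.0, 1.0, 3.0]))
```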
3.2. Color Richness. With the aggravation of color attenuation, the colors of a natural underwater image become fewer and fewer, and the visibility of objects becomes worse. Therefore, the color richness of the ROI is a simple and fast metric for judging whether the color distortion of a natural underwater image is serious, which makes it suitable for evaluating image quality.

In this paper, color richness is measured by the spatial characteristics of color in the CIE XYZ color space. Color richness should include not only color diversity but also the lightness distribution, so the XYZ color space is a good choice: CIE XYZ can represent all colors, and the Y parameter measures lightness. According to the XYZ color space distributions of the two images shown in Figure 5, a wide distribution of image color along each of the three dimensions X, Y, and Z does not by itself mean that the color richness is good, because the three components X, Y, and Z are correlated to some extent. The spatial characteristics of the color distribution therefore represent it better. According to (4), the color divergence of the image in XYZ color space is defined to determine its color richness:
C_d = \sum_{mn} \mathrm{dis}\left(P_{mn,\min}, P_{mn,\max}\right) \times \max_{i,j} \mathrm{dis}\left(P_{mn}(i, j), \left[P_{mn,\min}, P_{mn,\max}\right]\right) \times \frac{1}{2},  (4)

where dis denotes the shortest distance between two points or between a point and a line segment, mn ranges over the X–Y, Y–Z, and X–Z sections, and P_{mn,min} and P_{mn,max} denote the points closest to and farthest from the origin, respectively.
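Our reading of (4) can be sketched as follows, treating dis as a point-to-segment distance on each 2-D section; this is an interpretation of the formula, not a reference implementation.

```python
import numpy as np

def point_to_segment_dist(pts, a, b):
    """Shortest distance from each 2-D point in pts to segment a-b."""
    ab = b - a
    t = np.clip(((pts - a) @ ab) / (ab @ ab + 1e-12), 0.0, 1.0)
    proj = a + t[:, None] * ab
    return np.linalg.norm(pts - proj, axis=1)

def color_divergence(xyz):
    """C_d of (4): on each 2-D section (X-Y, Y-Z, X-Z), half the
    product of the min-to-max chord length and the largest distance
    of any color point from that chord, summed over the sections."""
    cd = 0.0
    for i, j in [(0, 1), (1, 2), (0, 2)]:
        pts = xyz[:, [i, j]]
        r = np.linalg.norm(pts, axis=1)       # distance from origin
        p_min, p_max = pts[np.argmin(r)], pts[np.argmax(r)]
        chord = np.linalg.norm(p_max - p_min)
        spread = point_to_segment_dist(pts, p_min, p_max).max()
        cd += 0.5 * chord * spread
    return cd
```

A single-color image collapses every section to one point, so C_d = 0, matching the intuition that severe attenuation leaves little color richness.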
3.3. Color Fidelity. As mentioned in Section 1, an enhanced image may be oversaturated/pseudo-bright (as shown in Figure 2). If too much attention is paid to color richness, the colors of the ROI will deviate from the colors of the real objects. Therefore, we should consider not only the color richness of the enhanced underwater image but also the color fidelity of the ROI, that is, whether the pixel intensities are within a reasonable range.
It is necessary to understand the formation and degradation of underwater images in order to estimate a reasonable range of pixel intensities. The formation of an underwater image is dominated by the following factors [1, 20, 21]:
I_c = J_c e^{-\beta_c^D z} + B_c^\infty \left(1 - e^{-\beta_c^B z}\right),  (5)

where C ∈ {R, G, B} is the color channel, I_c is the underwater image captured by the camera, J_c e^{-\beta_c^D z} is the direct signal (recorded as D_c), B_c^\infty (1 - e^{-\beta_c^B z}) is the backscattered signal (recorded as B_c), z is the distance between the camera and the photographed object, and B_c^\infty is the veiling light. J_C is the unattenuated scene, that is, the RGB intensity of the surface captured by a sensor with spectral response S_c(λ) at distance z_0 = 0 (generally z_0 is regarded as 0):

J_C = \frac{1}{k_c} \int_{\lambda_1}^{\lambda_2} S_c(\lambda)\, \rho(\lambda)\, E(d, \lambda)\, d\lambda,  (6)

where k_c is the camera's scaling constant. \beta_c^D and \beta_c^B depend on the distance z, the reflectance ρ(λ), the ambient light E(d, λ), the camera's spectral response S_c(λ), the scattering coefficient b(λ), and the beam attenuation coefficient β(λ), as shown in (7) and (8), where z_0 and (z_0 + z) are the starting and ending points along the line of sight:

\beta_c^D = \frac{1}{z} \ln \left[ \frac{\int_{\lambda_1}^{\lambda_2} S_c(\lambda)\, \rho(\lambda)\, E(d, \lambda)\, e^{-\beta(\lambda) z_0}\, d\lambda}{\int_{\lambda_1}^{\lambda_2} S_c(\lambda)\, \rho(\lambda)\, E(d, \lambda)\, e^{-\beta(\lambda)(z_0 + z)}\, d\lambda} \right],  (7)

\beta_c^B = -\frac{1}{z} \ln \left[ \frac{\int_{\lambda_1}^{\lambda_2} S_c(\lambda)\, B^\infty(\lambda)\, e^{-\beta(\lambda) z}\, d\lambda}{\int_{\lambda_1}^{\lambda_2} S_c(\lambda)\, B^\infty(\lambda)\, d\lambda} \right].  (8)

So we can calculate the unattenuated scene J_C as

J_C = D_c e^{\beta_c^D z} = \left[ I_c - B_c^\infty \left(1 - e^{-\beta_c^B z}\right) \right] e^{\beta_c^D z}.  (9)
We need to estimate the reasonable range [J'_{c,min}, J'_{c,max}] of the RGB intensity of each pixel in the foreground (that is, the ROI). The color fidelity metric defined by (10) is calculated from the out-of-range part of the enhanced underwater image Ĵ_c. The process is shown in Figure 6.
First, the veiling light B_c^∞ of the underwater image I_c is estimated. Backscatter increases exponentially with z and eventually saturates [1]; in other words, at infinity, I_c = B_c^∞. Referring to [22], we assume an area without objects is visible in the image, in which the pixel colors are determined by the veiling light alone. Such areas are smooth and have no texture, and this assumption often holds in the application scenarios of our IQA. First, the edge map of the image is generated; then a threshold is set and the largest connected pixel area is found. The veiling light is the average color of the pixels in this area.
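A minimal sketch of this veiling-light estimation, assuming a gradient-magnitude edge map and 4-connected flood fill; the edge threshold and the choice of gradient operator are assumptions of this sketch.

```python
import numpy as np
from collections import deque

def estimate_veiling_light(img, edge_thresh=0.05):
    """Estimate B_inf as the mean color of the largest connected
    smooth (edge-free) region, assumed to be lit by veiling light
    alone; threshold and gradient edge map are assumptions."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    smooth = np.hypot(gx, gy) < edge_thresh        # non-edge pixels
    h, w = smooth.shape
    labels = np.zeros((h, w), dtype=int)
    best_label, best_size, cur = 0, 0, 0
    for sy in range(h):
        for sx in range(w):
            if smooth[sy, sx] and labels[sy, sx] == 0:
                cur += 1
                labels[sy, sx] = cur
                size, q = 0, deque([(sy, sx)])
                while q:                           # BFS flood fill
                    y, x = q.popleft()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and smooth[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = cur
                            q.append((ny, nx))
                if size > best_size:
                    best_label, best_size = cur, size
    return img[labels == best_label].mean(axis=0)

# Toy image: a smooth bluish upper half and a textured lower half.
img = np.zeros((20, 20, 3))
img[:10] = np.array([0.1, 0.3, 0.6])
img[10:] = np.linspace(0.0, 2.0, 20)[None, :, None]
v = estimate_veiling_light(img)
```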
Next, we estimate the water type of the underwater image I_c from the veiling light B_c^∞. The reason for estimating the water type is that the common notion that water attenuates red faster than blue/green holds only for oceanic water types [1]. We simplified the Jerlov water types [23] into blue water (Jerlov I–III), green water (Jerlov 1C–3C), and yellow water (Jerlov 5C–9C) and simulated the average RGB appearance of a perfect white surface under these three water types (data from [23], using a D65 light source, a Canon 5D Mark II camera, and ρ(λ) = 1). We compute the Euclidean distance between the veiling light B_c^∞ and each simulated average RGB value and estimate the water type from these distances.
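The nearest-reference classification can be sketched as follows; the three reference colors below are placeholder values, whereas the paper simulates them from the data of [23].

```python
import numpy as np

# Illustrative reference colors for the three simplified water classes
# (assumed values, standing in for the simulated white-surface RGB).
WATER_REFS = {
    "blue":   np.array([0.10, 0.35, 0.60]),
    "green":  np.array([0.15, 0.55, 0.35]),
    "yellow": np.array([0.45, 0.45, 0.15]),
}

def classify_water(veiling_light):
    """Pick the water class whose reference color is nearest, in
    Euclidean distance, to the estimated veiling light."""
    return min(WATER_REFS,
               key=lambda k: np.linalg.norm(WATER_REFS[k] - veiling_light))

water = classify_water(np.array([0.12, 0.30, 0.55]))
```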
Then we calculate the reasonable intensity range [J'_{c,min}, J'_{c,max}] of each pixel after enhancement of the underwater image. β_c^D varies most strongly with range z [1], so besides the water type, the most important quantity needed to calculate the range is the distance z. Due to the limitations of real conditions, the distance z of the
Figure 5: Color distribution in CIE XYZ space.
object in the image cannot be obtained, so the possible range of the distance z must be roughly estimated. For the foreground, the distance z from the camera is approximately the same everywhere; for the background, it tends to infinity. We assume that the distance z from the camera is the same in every part of the foreground and that white objects may be present. Therefore, any distance z that keeps J_C of the foreground pixels in the three RGB channels not greater than 255 and not less than 0 is considered a possible distance. To simplify the calculation, the attenuation coefficient β_c^D of white in the three color channels C ∈ {R, G, B} is adopted for all colors (using ρ(λ) from the Macbeth ColorChecker).

Finally, the color fidelity defined by (10) is calculated:
C_f = \left[ 1 - \frac{\mathrm{Sum}_{oor}/255}{\mathrm{Num}_{oor} \times 3} \right]^2,  (10)

where Num_oor is the number of pixels in the ROI block and Sum_oor is the total pixel-intensity deviation falling outside the reasonable range.
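A sketch of (10), under our reading that Sum_oor accumulates the intensity deviation over all three channels on an 8-bit scale:

```python
import numpy as np

def color_fidelity(roi, j_min, j_max):
    """C_f of (10): penalize intensity mass falling outside the
    estimated reasonable range [j_min, j_max]; all arrays are
    H x W x 3 on an 8-bit (0-255) scale."""
    below = np.clip(j_min - roi, 0, None)   # deviation under the lower bound
    above = np.clip(roi - j_max, 0, None)   # deviation over the upper bound
    sum_oor = float((below + above).sum())
    num_pix = roi.shape[0] * roi.shape[1]   # Num_oor: pixels in the ROI block
    return (1.0 - (sum_oor / 255.0) / (num_pix * 3)) ** 2

roi = np.full((4, 4, 3), 100.0)
cf_ok = color_fidelity(roi, np.zeros_like(roi), np.full_like(roi, 255.0))
cf_bad = color_fidelity(roi, np.zeros_like(roi), np.full_like(roi, 90.0))
```

An ROI entirely inside its range yields C_f = 1; deviations shrink the score smoothly toward 0.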
We make some qualitative analysis of the influence of the simplification on J'_{c,min} and J'_{c,max}. As shown in Figure 7(b), (8) is used to calculate the broadband (RGB) attenuation coefficient β_c^D (using ρ(λ) of the color blocks in the Macbeth ColorChecker, depth d = 1 m, distance z = 1 m) of seven common colors (red, orange, yellow, green, cyan, blue, and purple; Figure 7(a)) under all Jerlov water types. It can be seen that the β_c^D difference between colors is not large in the same scene. Figure 7(c) shows the influence of different camera types on β_c^D in the three types of water: the influence of camera parameters on the attenuation coefficient β_c^D is not significant. The experimental results in [1] also support this view.
3.4. NIPQ Metric. Sections 3.1, 3.2, and 3.3 above introduced, respectively, the ROI extraction method, color richness in the statistical sense, and color fidelity in the pixel sense. In this paper, the color richness of the ROI and the color fidelity of the ROI are combined by a multiplication model to obtain our NIPQ. The common multiparameter underwater image evaluation models UIQM [11], UCIQE [12], and CCF [13] use linear weighting to measure comprehensive image quality. We consider that if one submetric takes a very low value (indicating low quality), the subjective impression of the whole image will be very poor regardless of the other metrics. Therefore, this paper uses a multiplication model to generate the overall underwater image quality assessment as follows:
CR = \overline{C_d(\mathrm{ROI})} \times C_f(\mathrm{ROI}),  (11)

where the overline denotes normalization and CR is the color quality of the ROI block, CR ∈ (0, 1); the larger the value, the higher the image quality.
The overall process of NIPQ is shown in Figure 8 and is divided into four steps. First, the ROI of the original (unenhanced) image is extracted based on background/foreground. Then the color richness of the ROI of the enhanced underwater image is estimated. Next, the ocean background information is extracted from the original image, from which the water type and the reasonable range of pixel intensities are estimated; according to this range, the ROI color fidelity of the enhanced underwater image is computed. Finally, the two submetrics, color richness and color fidelity, are integrated to obtain the comprehensive NIPQ metric for the whole underwater image.
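The multiplication model (11) is then a one-liner; the normalization constant for C_d is an assumption here, since the paper only states that C_d is normalized:

```python
import numpy as np

def nipq(cd_roi, cf_roi, cd_max):
    """Combine the two submetrics by multiplication, per (11).
    cd_max is an assumed normalization constant for C_d; clipping
    keeps the normalized value in (0, 1)."""
    cd_norm = np.clip(cd_roi / cd_max, 0.0, 1.0)
    return cd_norm * cf_roi

score = nipq(cd_roi=0.6, cf_roi=0.9, cd_max=1.2)
```

Because the submetrics multiply rather than add, a near-zero C_d or C_f drags the whole score down, matching the design rationale above.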
4 UOQ Database
In order to better evaluate the performance of the NIPQ metric, we built an underwater optical image quality database, UOQ.

Image Selection. In order to fully cover various underwater scenes, we selected 36 typical underwater optical images with a size of 512 × 512. These images include blue water, green water, and yellow water; dark lighting; single and multiple objects; simple and complex textures; and serious
Figure 6: The estimation process of color fidelity: (1) veiling-light estimation; (2) determine the water type (blue, green, or yellow); (3) estimate the reasonable range of ROI pixel intensities after underwater image enhancement; (4) calculate the deviation beyond the reasonable range.
color distortion and slight color distortion. Considering that there is no general ROI-related dataset in the field of underwater images, we label their foreground regions (ROI) pixel by pixel to support the reliability of the ROI used in this paper. We then use five popular image enhancement algorithms (white balance [24], Fu's algorithm [25], multifusion [26], histogram equalization [27], and Retinex [28]) to process these 36 natural images, obtaining 180 enhanced images. Some images and their versions processed by the white balance algorithm [24] are shown in Figure 9.
Evaluation Methods and Evaluation Protocols. In this database, the single-stimulus evaluation method is used: volunteers view only one image to be evaluated at a time, and each image appears only once in a round of evaluation. After each image is displayed, volunteers give a subjective quality score to the corresponding image. Underwater optical images usually have practical applications, so volunteers are not to be influenced by aesthetic factors during subjective quality assessment; the evaluation protocols are shown in Table 1.
Choosing Volunteers. In order to avoid evaluation bias caused by prior knowledge, none of the volunteers had experience of image quality assessment. Considering the strong application background of underwater images, all selected volunteers are graduate students with relevant working experience in underwater acoustic communication, underwater detection, and so on.
All the obtained subjective scores are used to calculate mean opinion scores (MOS). Denoting by S_ij the subjective score given to image j by the i-th volunteer and by N_j the number of subjective scores obtained for image j, MOS is calculated as

\mathrm{MOS}_j = \frac{1}{N_j} \sum_i S_{ij}.  (12)
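The MOS computation (12) can be sketched as:

```python
def mean_opinion_scores(scores):
    """MOS per (12): scores maps image_id -> list of subjective
    scores; a missing vote simply shrinks N_j for that image."""
    return {j: sum(s) / len(s) for j, s in scores.items()}

mos = mean_opinion_scores({"img1": [3, 4, 4, 5], "img2": [1, 2, 2]})
```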
We draw a histogram of the MOS of all images in the database, as shown in Figure 10. It can be seen that our images cover a wide range of quality, which is conducive to the design of IQA. There are many images with scores in the middle segment because volunteers tend to avoid giving extreme scores. It can also be
Figure 8: Overall process of NIPQ.
Figure 7: Qualitative analysis of the influence of simplification on β_c^D. (a) Seven common colors. (b) β_c^D of different colors in different water bodies. (c) β_c^D of different colors under different cameras. We simplify the water types into three; it can be seen from (b) that the calculation error caused by this simplification is larger for yellow water than for blue and green water. Since yellow water is uncommon in underwater images, the simplification of water type is applicable to most occasions.
seen that lower-quality images slightly outnumber higher-quality underwater images. This is because most underwater images are blue-green with poor contrast, and sometimes the quality of the enhanced image is still not ideal. In practical applications, more robust enhancement algorithms will be built into the underwater image enhancement algorithm database of the display module mentioned in Section 2.
5 Experiment
In combination with the UOQ database, we mainly evaluate the performance of IQA through five criteria. The prediction monotonicity of IQA is measured by the Spearman rank order correlation coefficient (SROCC) and Kendall's rank order correlation coefficient (KROCC). The prediction accuracy of IQA is measured by the Pearson linear correlation
Table 1: Evaluation protocols.

Score  Comprehensive feelings
5      The subjective feeling is excellent: foreground information is recognizable, and no color distortion is felt.
4      The subjective feeling is good: the foreground information is visible and recognizable; there is a small amount of perceptual distortion, but it does not affect the extraction of important information.
3      The subjective feeling is average: part of the information in the foreground is damaged, and a small amount of important information is lost due to distortion.
2      The subjective perception is poor: only the general outline of the foreground content can be distinguished; the distortion leads to the loss of some important information.
1      The subjective feeling is very poor: it is difficult to recognize the foreground content, and it is almost impossible to extract any effective information from the image.
Figure 10: (a) Frequency histogram of MOS; (b) MOS of all images.
Figure 9: Underwater images processed by the white balance algorithm [24]. (a)–(f) are the original images; their MOS are 2.40, 1.70, 3.00, 1.30, 2.55, and 4.05, respectively. (g)–(l) are the enhanced images; their MOS are 1.05, 1.15, 2.05, 2.80, 4.55, and 1.15, respectively.
coefficient (PLCC). Root mean square error (RMSE) is used to measure the prediction consistency of IQA, and the mean absolute error (MAE) is also used to evaluate the performance of IQA. High values (close to 1) of SROCC, PLCC, and KROCC and low values (close to 0) of RMSE and MAE indicate that an IQA metric correlates well with subjective scores.
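The five criteria can be sketched in pure NumPy as below. The function name `iqa_criteria` and the rank-based formulations are ours (no tie handling, and no nonlinear regression step before PLCC/RMSE), so this is an illustration of the definitions rather than the paper's exact evaluation pipeline.

```python
import numpy as np

def iqa_criteria(mos, pred):
    """SROCC, KROCC, PLCC, RMSE, MAE between subjective and objective scores."""
    mos, pred = np.asarray(mos, float), np.asarray(pred, float)
    n = len(mos)

    def pearson(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

    rank = lambda v: np.argsort(np.argsort(v)).astype(float)  # 0-based ranks, no ties
    plcc = pearson(mos, pred)                # prediction accuracy
    srocc = pearson(rank(mos), rank(pred))   # prediction monotonicity
    # Kendall tau: (concordant - discordant) pairs over all pairs.
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    conc = sum(np.sign(mos[i] - mos[j]) * np.sign(pred[i] - pred[j]) for i, j in pairs)
    krocc = float(conc) / len(pairs)
    rmse = float(np.sqrt(np.mean((mos - pred) ** 2)))  # prediction consistency
    mae = float(np.mean(np.abs(mos - pred)))
    return dict(PLCC=plcc, SROCC=srocc, KROCC=krocc, RMSE=rmse, MAE=mae)
```

A perfectly monotone prediction yields SROCC = KROCC = 1 even when the relationship is not exactly linear.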
The selected IQA metrics for performance comparison include the following:

(1) The popular underwater no-reference metrics: UIQM [11], UCIQE [12], and CCF [13]
(2) The popular no-reference metrics for images in air: BRISQUE [6] and LPSI [7]
(3) Common color metrics for underwater images: UICM [11] and variance of chromaticity (Var_Chr) [29]

For BRISQUE, a lower score means higher quality; for the other metrics, a higher score means higher quality.
5.1 Effect Analysis of Introducing ROI into IQA. In order to observe the influence of introducing ROI on the quality evaluation of underwater images, we combine ROI with the popular underwater no-reference IQA metrics. The block strategy mentioned in Section 3.1 is necessary because it helps us combine ROI with IQA. According to the block fusion strategy represented by (13), we combine image blocks with IQA and obtain a comprehensive quality score. We can then observe the change in correlation between the objective metrics and MOS before and after combining with ROI:
$$\mathrm{ROIQ} = \frac{\sum_{m\times n} W_R(i) \times Q(i)}{\sum_{m\times n} W_R(i)} \quad (13)$$
W_R(i) represents the weight of the i-th image block, and Q(i) represents its objective quality score under the metric; W_R(i) belongs to {0, 1}. That is to say, the difference before and after combining an IQA metric with ROI is that the original metric calculates the quality of the whole image, while the metric combined with ROI only calculates the image quality of the ROI. The results are shown in the first six lines of Table 2. They show that the correlation between the metric combined with ROI and MOS is higher than that of the original metric, which indicates that the combination of ROI and IQA is helpful for IQA.
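The block fusion of (13) reduces to a masked average when W_R(i) ∈ {0, 1}. A short sketch (the block scores and mask below are hypothetical values, and the underlying per-block IQA metric is left abstract):

```python
import numpy as np

def roi_quality(block_scores, roi_mask):
    """Equation (13): score over ROI blocks only.

    block_scores: per-block objective scores Q(i) from any base IQA metric.
    roi_mask: per-block weights W_R(i) in {0, 1}; 1 marks an ROI block.
    """
    q = np.asarray(block_scores, float)
    w = np.asarray(roi_mask, float)
    return float((w * q).sum() / w.sum())

# Hypothetical 6-block image: only the two ROI blocks contribute.
score = roi_quality([0.9, 0.2, 0.4, 0.8, 0.1, 0.3], [1, 0, 0, 1, 0, 0])
```

With this mask, the result is the mean of the two ROI block scores, 0.85; background blocks are ignored entirely.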
5.2 Performance Analysis of Proposed NIPQ. We calculated the correlation between the various metrics and MOS in the database, and the results are shown in Table 2. It can be seen that the correlation between the NIPQ metric and the subjective scores is significantly higher than that of the other metrics.
In order to compare the various NR IQA metrics intuitively, scatter diagrams between MOS and the estimated objective scores are drawn for the six selected NR IQA metrics and the NIPQ proposed in this paper, as shown in Figure 11. On this basis, the experimental data were regressed by the least-squares method, and the regression line is also drawn. The better the fit of the scatter points, the better the correlation between the metric and MOS. The regression lines show that the correlation between NIPQ and MOS is clearly better than that of the other metrics, which validates the results of Table 2. It can be seen that LPSI and BRISQUE, metrics designed for images in air, are not applicable to underwater images. As a whole, UIQM, UCIQE, and CCF are specially designed for underwater images, and their performance is better than that of the in-air metrics. The performance of UICM, the submetric indicating chromaticity in UIQM, is slightly worse than that of UIQM. Compared with the scatter plots of the other NR IQA metrics, our NIPQ shows the best correlation with MOS. Although there are still some aberrant data points, generally speaking, the proposed NIPQ is more robust across the variety of typical, representative underwater images contained in the database. Further analysis shows that some of these aberrant points are caused by the fact that the submetric C_f of an original image (without enhancement) is directly taken as 1 in our experiment.
As shown in Figures 12 and 13, there are two natural underwater images and their enhanced images in the UOQ database. Table 3 shows the corresponding MOS and objective scores of these images, and Figure 14 shows the color distribution of their ROI. From these images, the ROI of the original image in Figure 12 is dark, and that in Figure 13 is blue. The image enhanced by the histogram algorithm is reddish, and the color distribution of its ROI is wider, but the color of the ROI is obviously oversaturated/pseudobright. There is no significant difference between the image processed by the Retinex algorithm and the original image. The color of the image processed by Fu's algorithm is not vibrant. For Figure 12, the overall difference between the white balance and the multifusion algorithm is small. The local graph (Figure 15) shows that the brightness distribution of the image processed by the multifusion algorithm is uneven and slightly oversaturated, and the image enhanced by the white balance algorithm has a better visual effect. For Figure 13, the image processed by the white balance algorithm is too dark and has
Table 2: Correlation between MOS and quality scores of objective evaluation metrics before and after integration with ROI.

Metric      PLCC    SROCC   KROCC   MAE    RMSE
UIQM       -0.173  -0.199  -0.132   0.751  0.903
ROI_UIQM    0.277   0.280   0.196   0.739  0.897
UCIQE       0.294   0.207   0.145   0.707  0.868
ROI_UCIQE   0.374   0.274   0.192   0.683  0.840
CCF         0.069   0.075   0.050   0.791  0.946
ROI_CCF     0.393   0.358   0.254   0.722  0.872
Var_Chr     0.158   0.180   0.125   0.674  0.841
UICM       -0.283  -0.338  -0.225   0.714  0.854
BRISQUE    -0.309  -0.265  -0.185   0.747  0.902
LPSI        0.323   0.245   0.169   0.734  0.898
C_d         0.481   0.465   0.335   0.635  0.789
C_f         0.478   0.432   0.303   0.658  0.806
Proposed    0.641   0.623   0.452   0.576  0.713
Figure 11: Scatter diagrams between MOS and estimated objective scores: (a) LPSI, (b) BRISQUE, (c) UICM, (d) UIQM, (e) UCIQE, (f) CCF, (g) C_f, (h) C_d, and (i) proposed.
Figure 12: (a) Original, (b) multifusion, (c) Fu, (d) white balance, (e) histogram equalization, and (f) Retinex.
Figure 13: (a) Original, (b) multifusion, (c) Fu, (d) white balance, (e) histogram equalization, and (f) Retinex.
Table 3: The corresponding MOS and objective scores of Figures 12 and 13.

Figure 12:
Metric      Ori     Multifusion  Fu      White balance  Histogram equalization  Retinex
MOS         2.550   4.500    4.100   4.550   3.050   2.500
CCF         13.265  22.292   23.887  16.437  30.794  13.379
ROI_CCF     20.821  33.389   31.274  29.108  26.409  21.604
UCIQE       0.554   0.664    0.591   0.652   0.684   0.569
ROI_UCIQE   0.560   0.647    0.573   0.627   0.580   0.575
UIQM        3.983   4.543    4.850   3.969   4.780   4.085
ROI_UIQM    5.585   5.589    5.495   5.672   5.055   5.620
BRISQUE     16.303  26.934   31.708  17.824  36.762  16.744
LPSI        0.926   0.901    0.910   0.923   0.912   0.926
C_d         0.243   0.715    0.578   0.698   0.846   0.324
C_f         1.000   0.802    0.637   0.827   0.464   0.994
Proposed    0.243   0.574    0.368   0.577   0.392   0.322

Figure 13:
Metric      Ori     Multifusion  Fu      White balance  Histogram equalization  Retinex
MOS         3.200   3.800    1.550   2.150   2.700   3.250
CCF         31.443  31.465   37.069  18.688  36.928  29.029
ROI_CCF     22.582  35.995   32.468  13.265  38.366  23.097
UCIQE       0.519   0.628    0.623   0.476   0.693   0.541
ROI_UCIQE   0.541   0.620    0.588   0.447   0.676   0.564
UIQM        1.504   3.337    4.325   3.840   4.100   2.182
ROI_UIQM    6.658   5.235    5.249   5.349   4.789   5.160
BRISQUE     4.330   14.749   17.319  4.153   20.596  4.441
LPSI        0.923   0.887    0.911   0.904   0.906   0.926
C_d         0.475   0.730    0.317   0.029   0.640   0.401
C_f         1.000   0.847    0.632   0.581   0.601   0.972
Proposed    0.475   0.619    0.200   0.017   0.384   0.390
Figure 14: ROI color distribution of Figures 12 and 13 (CIE XYZ space).
Figure 15: Local graphs of Figures 12(b) and 12(d).
a single color. The image processed by the multifusion algorithm has a better visual effect.
Table 3 shows that the selected IQA metrics do not perform well in the quality assessment of images in the UOQ database. They generally give higher objective scores to images enhanced by the histogram equalization algorithm, because the color distribution of those images is wider. This is a disadvantage of quality evaluation based purely on statistics: color fidelity is not taken into account. It can also be seen that if the performance of the original metric is not ideal, combining the metric with ROI will not necessarily improve the situation, because this is a limitation of the original metric itself.
6 Conclusion

Because of the characteristics of the water medium, color has become one of the important concerns in underwater image quality assessment. Color contains important information: severe selective color attenuation/pseudo-vividness can make it difficult to identify foreground content and to extract key, effective information from images. In this paper, a new underwater image evaluation metric, NIPQ, is proposed based on underwater environment characteristics and the HVS. The NIPQ is designed in a four-stage framework. The first stage focuses on the attention mechanism of the HVS. The second stage considers the influence of color richness in a statistical sense. The third stage is inspired by underwater image formation models and considers color fidelity from a pixel perspective. Finally, in the fourth stage, color richness and color fidelity are systematically integrated for real-time quality monitoring. At the same time, the related underwater image database UOQ with MOS is built to measure IQA performance. Experimental results show that, compared with other commonly used underwater metrics, the proposed NIPQ has a better correlation with MOS, which indicates better performance.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61571377, 61771412, and 61871336) and the Fundamental Research Funds for the Central Universities (20720180068).
References
[1] D. Akkaynak and T. Treibitz, "A revised underwater image formation model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6723–6732, Salt Lake City, UT, USA, 2018.
[2] S. Bazeille, I. Quidu, L. Jaulin, and J.-P. Malkasse, "Automatic underwater image pre-processing," CMM'06, Brest, France, 2006.
[3] I. Avcibas, B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," Journal of Electronic Imaging, vol. 11, no. 2, pp. 206–223, 2002.
[4] D.-Y. Tsai, Y. Lee, and E. Matsuyama, "Information entropy measure for evaluation of image quality," Journal of Digital Imaging, vol. 21, no. 3, pp. 338–347, 2008.
[5] Y. Y. Fu, "Color image quality measures and retrieval," New Jersey Institute of Technology, Newark, NJ, USA, 2006.
[6] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012.
[7] Q. Wu, Z. Wang, and H. Li, "A highly efficient method for blind image quality assessment," in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 339–343, IEEE, Quebec City, Canada, September 2015.
[8] J. Kim, A.-D. Nguyen, and S. Lee, "Deep CNN-based blind image quality predictor," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 11–24, 2018.
[9] S. Bosse, D. Maniry, K.-R. Müller, T. Wiegand, and W. Samek, "Deep neural networks for no-reference and full-reference image quality assessment," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206–219, 2017.
[10] X. Liu, J. Van De Weijer, and A. D. Bagdanov, "RankIQA: learning from rankings for no-reference image quality assessment," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1040–1049, Venice, Italy, October 2017.
[11] K. Panetta, C. Gao, and S. Agaian, "Human-visual-system-inspired underwater image quality measures," IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541–551, 2016.
[12] M. Yang and A. Sowmya, "An underwater color image quality evaluation metric," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 6062–6071, 2015.
[13] Y. Wang, N. Li, Z. Li et al., "An imaging-inspired no-reference underwater color image quality assessment metric," Computers & Electrical Engineering, vol. 70, pp. 904–913, 2018.
[14] S. Kastner and L. G. Ungerleider, "Mechanisms of visual attention in the human cortex," Annual Review of Neuroscience, vol. 23, no. 1, pp. 315–341, 2000.
[15] L. Zhang, J. Chen, and B. Qiu, "Region-of-interest coding based on saliency detection and directional wavelet for remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 1, pp. 23–27, 2016.
[16] C. Zhu, K. Huang, and G. Li, "An innovative saliency guided ROI selection model for panoramic images compression," in Proceedings of the 2018 Data Compression Conference, p. 436, IEEE, Snowbird, UT, USA, March 2018.
[17] Z. Cui, J. Wu, H. Yu, Y. Zhou, and L. Liang, "Underwater image saliency detection based on improved histogram equalization," in Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, pp. 157–165, Springer, Singapore, 2019.
[18] L. Xiu, H. Jing, S. Min, and Z. Yang, "Saliency segmentation and foreground extraction of underwater image based on localization," in Proceedings of OCEANS 2016, Shanghai, China, 2016.
[19] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2814–2821, Columbus, OH, USA, June 2014.
[20] D. Akkaynak and T. Treibitz, "Sea-thru: a method for removing water from underwater images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1682–1691, Long Beach, CA, USA, 2019.
[21] D. Akkaynak, T. Treibitz, T. Shlesinger, Y. Loya, R. Tamir, and D. Iluz, "What is the space of attenuation coefficients in underwater computer vision?" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4931–4940, Honolulu, HI, USA, 2017.
[22] D. Berman, T. Treibitz, and S. Avidan, "Diving into haze-lines: color restoration of underwater images," in Proceedings of the British Machine Vision Conference (BMVC), vol. 1, London, UK, September 2017.
[23] M. G. Solonenko and C. D. Mobley, "Inherent optical properties of Jerlov water types," Applied Optics, vol. 54, no. 17, pp. 5392–5401, 2015.
[24] E. Y. Lam, "Combining gray world and retinex theory for automatic white balance in digital photography," in Proceedings of the Ninth International Symposium on Consumer Electronics, pp. 134–139, IEEE, Melbourne, Australia, July 2005.
[25] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, "A retinex-based enhancing approach for single underwater image," in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 4572–4576, IEEE, Paris, France, October 2014.
[26] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, "Enhancing underwater images and videos by fusion," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 81–88, IEEE, Providence, RI, USA, June 2012.
[27] B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68–75, 1999.
[28] E. H. Land, "The retinex theory of color vision," Scientific American, vol. 237, no. 6, pp. 108–128, 1977.
[29] D. Hasler and S. E. Suesstrunk, "Measuring colorfulness in natural images," in Human Vision and Electronic Imaging VIII, vol. 5007, pp. 87–95, International Society for Optics and Photonics, Bellingham, WA, USA, 2003.
threshold_bg and threshold_roi. Then the color feature and spatial position feature of the blocks are used as a correction for the uncertain blocks, helping to judge the background probability of an uncertain block, which is expressed mathematically in the following equation:

$$W_{BG}(p) = \begin{cases} 1, & w_i^{bg} > \mathrm{threshold}_{bg} \\ 1 - \sum_{i=1}^{N} d_{app}(p, p_i)\, w_{spa}(p, p_i)\, w_i^{bg}, & \mathrm{threshold}_{roi} < w_i^{bg} < \mathrm{threshold}_{bg} \\ 0, & w_i^{bg} < \mathrm{threshold}_{roi} \end{cases} \quad (2)$$

$$w_{spa}(p, p_i) = \exp\left(-\frac{d_{spa}^2(p, p_i)}{2\sigma_{spa}^2}\right) \quad (3)$$
Figure 4: ROI extraction process. (1) Divide the image into blocks and estimate the background probability of each block; (2) the image is preliminarily divided into background blocks, target blocks, and uncertain blocks; (3) estimate the background probability of the uncertain blocks by their color and spatial position features; (4) the final ROI is obtained by binarization.
Figure 3: The framework of the NR IQA-based underwater smart display module.
where d_app(p, q) is the color similarity between blocks p and q, calculated as the Euclidean distance between the average colors of blocks p and q; d_spa(p, q) is the Euclidean distance between blocks p and q; and w_spa(p, q) is obtained after mapping according to (3), with σ²_spa = 0.25. Finally, we use the method of maximum between-class variance to get the final ROI.
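The color-similarity term and the final between-class-variance (Otsu-style) binarization can be sketched as follows. The 64-level threshold sweep and the function names are our choices for illustration, not the paper's implementation.

```python
import numpy as np

def d_app(block_p, block_q):
    """Euclidean distance between the two blocks' mean colors (H x W x 3 arrays)."""
    return float(np.linalg.norm(np.mean(block_p, axis=(0, 1)) -
                                np.mean(block_q, axis=(0, 1))))

def otsu_binarize(prob_map):
    """Final ROI via maximum between-class variance on the background-probability map."""
    p = np.asarray(prob_map, float).ravel()
    best_t, best_var = 0.0, -1.0
    for t in np.linspace(p.min(), p.max(), 64, endpoint=False)[1:]:
        bg, fg = p[p >= t], p[p < t]          # background / foreground candidates
        if len(bg) == 0 or len(fg) == 0:
            continue
        var = len(bg) * len(fg) * (bg.mean() - fg.mean()) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return np.asarray(prob_map) < best_t      # True where a block belongs to the ROI
```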
3.2 Color Richness. As the phenomenon of color attenuation worsens, the colors of a natural underwater image become fewer and fewer, and the visibility of objects becomes worse. Therefore, the color richness of the ROI is a simple and fast metric to measure whether the color distortion of a natural underwater image is serious, which makes it suitable for evaluating image quality.
In this paper, color richness is measured by the spatial characteristics of color in the CIE XYZ color space. Color richness should not only include color diversity but also consider the lightness distribution, so the XYZ color space is a good choice: CIE XYZ can represent all colors, and the Y parameter measures color lightness. According to the XYZ color space distributions of the two images shown in Figure 5, a wide distribution of image color in each of the three dimensions X, Y, and Z separately does not mean that the color richness is good, because the three components X, Y, and Z are correlated to a certain extent. So the spatial characteristics of color can better represent the distribution of color. According to (4), the image color divergence in XYZ color space is defined to determine the color richness of the image:
$$C_d = \sum_{mn} \mathrm{dis}(P_{mn,\min}, P_{mn,\max}) \times \max\big(\mathrm{dis}(P_{mn}(i,j), [P_{mn,\min}, P_{mn,\max}])\big) \times \frac{1}{2} \quad (4)$$
where dis represents the shortest distance between two points, or between a point and a line; mn ranges over the X-Y, Y-Z, and X-Z sections; and P_{mn,min} and P_{mn,max} represent the points closest to and farthest from the origin, respectively.
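One possible reading of (4) for a single 2-D section is sketched below: the segment between the nearest- and farthest-from-origin points is taken as a base, scaled by the largest perpendicular spread of the section's points around it (then halved, as in a triangle area). This is our interpretation of the garbled formula, offered as a sketch only.

```python
import numpy as np

def section_divergence(pts):
    """Color divergence of one chromatic section (one reading of equation (4)).

    pts: N x 2 points of the image's colors projected onto one section
    (X-Y, Y-Z, or X-Z); the full C_d would sum this over the three sections.
    """
    pts = np.asarray(pts, float)
    r = np.linalg.norm(pts, axis=1)
    p_min, p_max = pts[r.argmin()], pts[r.argmax()]   # P_min, P_max of the section
    d = p_max - p_min
    base = np.linalg.norm(d)                          # dis(P_min, P_max)
    if base == 0:
        return 0.0
    # perpendicular distance of every point from the P_min-P_max line
    perp = np.abs((pts[:, 0] - p_min[0]) * d[1] -
                  (pts[:, 1] - p_min[1]) * d[0]) / base
    return float(base * perp.max() * 0.5)
```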
3.3 Color Fidelity. As mentioned in Section 1, an enhanced image may be oversaturated/pseudobright (as shown in Figure 2). If too much attention is paid to color richness, the color of the ROI in the image will deviate from the color of the real objects. Therefore, we should consider not only the color richness of the enhanced underwater image but also the color fidelity of the ROI, that is, whether the intensity of its pixels is within a reasonable range.
It is necessary to understand the formation and degradation of underwater images if we want to estimate a reasonable range of pixel intensities. The formation of an underwater image is dominated by the following factors [1, 20, 21]:

$$I_c = J_c e^{-\beta_c^D \times z} + B_c^\infty \left(1 - e^{-\beta_c^B \times z}\right) \quad (5)$$

where C ∈ {R, G, B} is the color channel, I_c is the underwater image captured by the camera, and J_c e^{−β_c^D × z} is the direct signal, recorded as D_c; B_c^∞ (1 − e^{−β_c^B × z}) is the backscattered signal, recorded as B_c; z is the distance between the camera and the photographed object; B_c^∞ is the veiling light; and J_c is the unattenuated scene, that is, the RGB intensity of the surface captured by the sensor with spectral response S_c(λ) at distance z_0 = 0 (generally, z_0 is regarded as 0):
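The forward model of (5) can be simulated directly. The coefficient values below are illustrative placeholders, not measured Jerlov values; they only encode the qualitative behavior that red attenuates fastest in oceanic water.

```python
import numpy as np

def underwater_image(J, z, beta_D, beta_B, B_inf):
    """Equation (5): direct signal plus backscatter, per RGB channel.

    J: unattenuated scene (H x W x 3, values in [0, 1]);  z: camera-object range (m);
    beta_D, beta_B, B_inf: per-channel coefficients / veiling light (length-3).
    """
    J = np.asarray(J, float)
    D = J * np.exp(-np.asarray(beta_D) * z)                        # direct signal D_c
    B = np.asarray(B_inf) * (1 - np.exp(-np.asarray(beta_B) * z))  # backscatter B_c
    return D + B

# Illustrative (made-up) coefficients: red attenuates fastest, blue slowest.
I = underwater_image(np.ones((2, 2, 3)), z=5.0,
                     beta_D=[0.6, 0.2, 0.1], beta_B=[0.5, 0.2, 0.1],
                     B_inf=[0.1, 0.5, 0.6])
```

At large z the direct term vanishes and I_c tends to the veiling light B_c^∞, which is what the veiling-light estimation in Section 3.3 exploits.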
$$J_c = \frac{1}{k_c} \int_{\lambda_1}^{\lambda_2} S_c(\lambda)\,\rho(\lambda)\,E(d, \lambda)\, d\lambda \quad (6)$$
where k_c is the camera's scaling constant. β_c^D and β_c^B depend to a certain extent on the distance z, reflectance ρ(λ), ambient light E(d, λ), camera spectral response S_c(λ), scattering coefficient b(λ), and beam attenuation coefficient β(λ), as shown in (7) and (8), where z_0 and (z_0 + z) are the starting and ending points along the line of sight:

$$\beta_c^D = \frac{\ln\left[\int_{\lambda_1}^{\lambda_2} S_c(\lambda)\rho(\lambda)E(d,\lambda)e^{-\beta(\lambda)z_0}\,d\lambda \Big/ \int_{\lambda_1}^{\lambda_2} S_c(\lambda)\rho(\lambda)E(d,\lambda)e^{-\beta(\lambda)(z_0+z)}\,d\lambda\right]}{z} \quad (7)$$
$$\beta_c^B = -\frac{\ln\left[\int_{\lambda_1}^{\lambda_2} S_c(\lambda)B^\infty(\lambda)e^{-\beta(\lambda)z}\,d\lambda \Big/ \int_{\lambda_1}^{\lambda_2} S_c(\lambda)B^\infty(\lambda)\,d\lambda\right]}{z} \quad (8)$$
So we can calculate the unattenuated scene J_c as

$$J_c = D_c e^{\beta_c^D \times z} = \left[I_c - B_c^\infty\left(1 - e^{-\beta_c^B \times z}\right)\right] \times e^{\beta_c^D \times z} \quad (9)$$
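Equation (9) is the exact algebraic inverse of (5): subtract the backscatter, then undo the direct-signal attenuation. A round-trip sketch (coefficient values are illustrative, not measured):

```python
import numpy as np

def recover_scene(I, z, beta_D, beta_B, B_inf):
    """Equation (9): invert the formation model (5) to recover J_c."""
    I = np.asarray(I, float)
    backscatter = np.asarray(B_inf) * (1 - np.exp(-np.asarray(beta_B) * z))
    return (I - backscatter) * np.exp(np.asarray(beta_D) * z)

# Round trip: form an image with (5), invert with (9), recover J exactly.
J = np.array([[[0.4, 0.5, 0.6]]])
bD, bB = np.array([0.6, 0.2, 0.1]), np.array([0.5, 0.2, 0.1])
Binf, z = np.array([0.1, 0.5, 0.6]), 5.0
I = J * np.exp(-bD * z) + Binf * (1 - np.exp(-bB * z))
assert np.allclose(recover_scene(I, z, bD, bB, Binf), J)
```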
We need to estimate the reasonable range of values [J'_{c,min}, J'_{c,max}] of the RGB intensity of each pixel in the foreground (that is, the ROI). The color fidelity metric defined by (10) is calculated from the out-of-range part of the enhanced underwater image Ĵ_c. The process is shown in Figure 6.
Firstly, the veiling light B_c^∞ of the underwater image I_c is estimated. Backscatter increases exponentially with z and eventually saturates [1]; in other words, at infinity, I_c = B_c^∞. Referring to [22], we assume an area without objects is visible in the image, in which the pixels' color is determined by the veiling light alone. Such areas are smooth and have no texture, and this assumption often holds in the application scenarios of our IQA. First, the edge map of the image is generated. Then a threshold value is set and the largest connected pixel area is found. The veiling light is the average color of the pixels in these areas.
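A simplified sketch of this step: we threshold the gray-level gradient to find smooth, texture-free pixels and average their color, skipping the largest-connected-region search that the paper describes. The gradient threshold is an assumed parameter.

```python
import numpy as np

def estimate_veiling_light(img, grad_thresh=0.02):
    """Veiling-light sketch: average color over smooth (edge-free) pixels.

    Simplified from the described procedure: a gradient-magnitude threshold
    stands in for the edge map + largest connected smooth region.
    """
    img = np.asarray(img, float)
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    smooth = np.hypot(gx, gy) < grad_thresh   # candidate water-only pixels
    if not smooth.any():
        smooth = np.ones(gray.shape, bool)    # fall back to the whole image
    return img[smooth].mean(axis=0)           # B_inf estimate, one value per channel
```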
Next, we estimate the type of water body in the underwater image I_c from the veiling light B_c^∞. The reason for estimating the type of water is that the common notion that water attenuates red faster than blue/green only holds for oceanic water types [1]. We simplified the Jerlov water types [23] into blue water (Jerlov I–III), green water (Jerlov 1C–3C), and yellow water (Jerlov 5C–9C) and simulated the average RGB value of a perfect white surface's appearance under these three water types (data from [23], using a D65 light source, a Canon 5D Mark II camera, and ρ(λ) = 1). We calculate the Euclidean distance between the veiling light B_c^∞ and each average RGB value and estimate the water body type based on this distance.
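The nearest-neighbor classification step can be sketched as below. The RGB triplets are placeholder values for illustration only; the paper simulates the actual white-surface appearances from the Jerlov data in [23].

```python
import numpy as np

# Hypothetical simulated white-surface appearance per simplified water type.
WATER_TYPES = {
    "blue":   np.array([0.15, 0.45, 0.70]),
    "green":  np.array([0.20, 0.60, 0.45]),
    "yellow": np.array([0.55, 0.50, 0.20]),
}

def classify_water(veiling_light):
    """Pick the water type whose white appearance is nearest in Euclidean distance."""
    b = np.asarray(veiling_light, float)
    return min(WATER_TYPES, key=lambda k: np.linalg.norm(b - WATER_TYPES[k]))

print(classify_water([0.1, 0.5, 0.75]))  # "blue" for this bluish veiling light
```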
Then we calculate the reasonable intensity range [J'_{c,min}, J'_{c,max}] of each pixel after the enhancement of the underwater image. β_c^D varies most strongly with range z [1], so besides the water body type, the most important quantity for calculating the range is the distance z. Due to the limitation of real conditions, the distance z of the
Figure 5: Color distribution in CIE XYZ space.
object in the image cannot be obtained, so it is necessary to roughly estimate the possible range of the distance z. For the foreground, the distance z from the camera is approximately the same everywhere; for the background, the distance z from the camera tends to infinity. We assume that the distance z from the camera is the same at each part of the foreground and that there may be white objects. Therefore, any distance z that keeps the foreground pixels' J_c in all three RGB channels not greater than 255 and not less than 0 is considered a possible distance. In order to simplify the calculation, the attenuation coefficient β_c^D of white in the three color channels C ∈ {R, G, B} is adopted for all colors (using the ρ(λ) of white in the Macbeth ColorChecker).
Finally, the color fidelity defined by (10) is calculated:

$$C_f = \left[1 - \frac{\mathrm{Sum}_{oor}/255}{\mathrm{Num}_{oor} \times 3}\right]^2 \quad (10)$$
where Num_oor represents the number of pixels in the ROI block, and Sum_oor represents the total pixel intensity deviation outside the reasonable range.
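Given per-pixel bounds computed as above, (10) can be sketched as follows. The array interface (N x 3 ROI pixels with per-pixel bounds) is an assumption for illustration.

```python
import numpy as np

def color_fidelity(J_hat, J_min, J_max):
    """Equation (10): penalize ROI pixel intensities outside [J'_min, J'_max].

    J_hat: enhanced ROI pixels (N x 3, 8-bit intensities).
    J_min, J_max: per-pixel reasonable bounds, same shape as J_hat.
    """
    J_hat = np.asarray(J_hat, float)
    # deviation above the upper bound plus deviation below the lower bound
    over = np.maximum(J_hat - J_max, 0) + np.maximum(J_min - J_hat, 0)
    sum_oor = over.sum()          # Sum_oor: total out-of-range deviation
    num_oor = J_hat.shape[0]      # Num_oor: number of ROI pixels
    return (1 - (sum_oor / 255) / (num_oor * 3)) ** 2
```

A fully in-range ROI yields C_f = 1; larger deviations push C_f toward 0, flagging oversaturated/pseudobright enhancement.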
We make a qualitative analysis of the influence of the simplification on J'_{c,min} and J'_{c,max} during the calculation. As shown in Figure 7(b), (7) is used to calculate the broadband (RGB) attenuation coefficient β_c^D (using the ρ(λ) of the color blocks in the Macbeth ColorChecker, depth d = 1 m, distance z = 1 m) of seven common colors, red, orange, yellow, green, cyan, blue, and purple (Figure 7(a)), under all Jerlov water types. It can be seen that the β_c^D difference between colors is not large in the same scene. Figure 7(c) shows the influence of different camera types on β_c^D in the three types of water bodies; the influence of camera parameters on the attenuation coefficient β_c^D is not significant. The experimental results in [1] also support this view.
3.4 NIPQ Metric. Sections 3.1, 3.2, and 3.3 above introduced, respectively, the ROI extraction method, color richness in a statistical sense, and color fidelity in a pixel sense. In this paper, the color richness of the ROI and the color fidelity of the ROI are combined by a multiplication model to obtain our NIPQ. The common multiparameter underwater image evaluation models UIQM [11], UCIQE [12], and CCF [13] use linear weighting to measure the comprehensive quality of the image. We consider that if one submetric points to a very low value (indicating low quality), the subjective impression of the whole image will be very poor regardless of the other metrics. Therefore, this paper uses a multiplication model to generate the overall underwater image quality assessment:

$$C_R = \widetilde{C_d(\mathrm{ROI})} \times C_f(\mathrm{ROI}) \quad (11)$$

where ~ represents normalization and C_R represents the color quality of the ROI block, C_R ∈ (0, 1); the larger the value, the higher the image quality.
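The multiplicative fusion of (11) is trivial to code, but the snippet makes the design choice concrete: unlike a linear weighting, either submetric near zero drives the overall score near zero.

```python
def nipq(c_d_norm, c_f):
    """Equation (11): multiplicative fusion of normalized ROI color richness
    (c_d_norm) and ROI color fidelity (c_f), both in (0, 1)."""
    return c_d_norm * c_f

# A high-richness but low-fidelity (oversaturated) image still scores low,
# whereas a linear average would rate the two cases below equally.
assert nipq(0.9, 0.1) < nipq(0.5, 0.5)
```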
The overall process of NIPQ is shown in Figure 8 and is divided into four steps. Firstly, the ROI of the original image (not enhanced) is extracted based on background/foreground separation. Secondly, the color richness of the ROI of the enhanced underwater image is estimated. Thirdly, the ocean background information is extracted from the original image, from which the water body type and the reasonable range of the pixel intensity distribution are estimated; according to this range, the ROI color fidelity of the enhanced underwater image is computed. Finally, the two metrics of color richness and color fidelity are integrated to obtain the comprehensive NIPQ metric for the whole underwater image.
4 UOQ Database

In order to better evaluate the performance of the NIPQ metric, we built an underwater optical image quality database, UOQ.

Image Selection. In order to fully consider various underwater scenes, we selected 36 typical underwater optical images with a size of 512 × 512. These images include blue, green, and yellow water; dark lighting; single-object and multiobject scenes; simple and complex textures; and serious
Figure 6: The estimation process of color fidelity. (1) Veiling-light estimation; (2) determine the type of water body (blue, green, or yellow); (3) estimate the reasonable range of ROI pixel intensity after underwater image enhancement processing; (4) calculate the deviation beyond the reasonable range.
color distortion and slight color distortion. Considering that there is no general ROI-related dataset in the field of underwater imaging, we label their foreground regions (ROI) pixel by pixel to prove the reliability of the ROI used in this paper. We then use five popular image enhancement algorithms (the white balance algorithm [24], Fu's algorithm [25], the multifusion algorithm [26], histogram equalization [27], and Retinex [28]) to process these 36 natural images, obtaining 180 enhanced images. Some images and their versions processed by the white balance algorithm [24] are shown in Figure 9.
Evaluation Methods and Evaluation Protocols. In this database, the single-stimulus evaluation method is used: volunteers watch only one image to be evaluated at a time, and each image appears only once in a round of evaluation. After each image is displayed, volunteers give a subjective quality score to the corresponding image. Underwater optical images usually serve practical applications, so volunteers are not influenced by aesthetic factors in the process of subjective quality assessment; the evaluation protocols are shown in Table 1.
Choosing Volunteers. In order to avoid evaluation bias caused by prior knowledge, none of the volunteers had experience with image quality assessment. Considering the strong application background of underwater images, all volunteers selected are graduate students with relevant work experience in underwater acoustic communication, underwater detection, and so on.
All the obtained subjective scores are used to calculatethe mean opinion scores (MOS) Note Sij as the subjectivescore of the image j by the i-th volunteer and Nj as thenumber of subjective scores obtained by imagej MOS iscalculated as follows
MOSj I
Nj
1113944i
Sij (12)
We draw a histogram about MOS of all images in thedatabase as shown in Figure 10 It can be seen that our imagecovers a wide range of quality which is conducive to thedesign of IQA And there are many images with scores in themiddle score segment because the volunteer will try to avoidgiving extreme scores when scoring images It also can be
ROI
Image informationof ROI
Color fidelity
Color richness
Q
Extract relevant information
Iamge enhancement algorithm
Input image 1
Input image 2
Figure 8 Overall process of NIPQ
Figure 7: Qualitative analysis of the influence of the simplification on β^D_c. (a) Seven common colors (red, orange, yellow, green, cyan, blue, and purple). (b) β^D_c of different colors in different water bodies (Jerlov I, IA, IB, II, III, 1C, 3C, 5C, 7C, and 9C). (c) β^D_c of different colors under different cameras. We simplify the types of water into three. It can be seen from (b) that the calculation error for yellow water caused by the simplification is larger than that for blue and green water bodies; since yellow water bodies are not common in underwater images, the simplification of water body types is applicable to most occasions.
Scientific Programming 9
seen that the lower quality images slightly outnumber the higher quality underwater images. This is because most underwater images have a blue-green cast and poor contrast, and sometimes the quality of the enhanced image is still not ideal. In practical applications, more robust enhancement algorithms will be built into the underwater image enhancement algorithm database of the display module mentioned in Section 2.
5 Experiment
In combination with the UOQ database, we mainly evaluate the performance of IQA metrics through five criteria. The prediction monotonicity of an IQA metric is measured by the Spearman rank order correlation coefficient (SROCC) and Kendall's rank order correlation coefficient (KROCC). The prediction accuracy of IQA is measured by the Pearson linear correlation
Table 1: Evaluation protocols.

Score | Comprehensive feelings
5 | The subjective feeling is excellent: foreground information is recognizable, and no color distortion is felt.
4 | The subjective feeling is good: the foreground information is visible and recognizable; there is a small amount of perceptual distortion, but it does not affect the extraction of important information.
3 | The subjective feeling is average: part of the information in the foreground is damaged, and a small amount of important information is lost due to distortion.
2 | The subjective perception is poor: only the general outline of the foreground content can be distinguished; the distortion leads to the loss of some important information.
1 | The subjective feeling is very poor: it is difficult to recognize the foreground content, and it is almost impossible to extract any effective information from the image.
Figure 10: (a) Frequency histogram of MOS; (b) MOS of all images.
Figure 9: Underwater images processed by the white balance algorithm [24]. (a)–(f) are the original images; their MOS are 2.40, 1.70, 3.00, 1.30, 2.55, and 4.05, respectively. (g)–(l) are the enhanced images; their MOS are 1.05, 1.15, 2.05, 2.80, 4.55, and 1.15, respectively.
coefficient (PLCC). The root mean square error (RMSE) is used to measure the prediction consistency of IQA, and the mean absolute error (MAE) is also used to evaluate IQA performance. High values (close to 1) of SROCC, PLCC, and KROCC and low values (close to 0) of RMSE and MAE indicate that an IQA metric correlates well with subjective scores.
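For illustration (not the authors' code), the five criteria can be computed with SciPy; `mos` and `pred` stand in for the subjective and objective score vectors:

```python
import numpy as np
from scipy import stats

def iqa_criteria(mos, pred):
    """Return the five performance criteria used to validate an IQA metric."""
    mos, pred = np.asarray(mos, float), np.asarray(pred, float)
    srocc, _ = stats.spearmanr(mos, pred)        # prediction monotonicity
    krocc, _ = stats.kendalltau(mos, pred)       # prediction monotonicity
    plcc, _ = stats.pearsonr(mos, pred)          # prediction accuracy
    rmse = np.sqrt(np.mean((mos - pred) ** 2))   # prediction consistency
    mae = np.mean(np.abs(mos - pred))
    return {"SROCC": srocc, "KROCC": krocc, "PLCC": plcc,
            "RMSE": rmse, "MAE": mae}

crit = iqa_criteria([1.0, 2.5, 3.0, 4.5], [1.2, 2.4, 3.1, 4.4])
print(round(crit["SROCC"], 6))  # 1.0 for a perfectly monotonic prediction
```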
The IQA metrics selected for performance comparison include the following:

(1) The popular underwater no-reference metrics UIQM [11], UCIQE [12], and CCF [13]
(2) The popular no-reference metrics for images in air, BRISQUE [6] and LPSI [7]
(3) Common color metrics for underwater images, UICM [11] and variance of chromaticity (Var_Chr) [29]

For BRISQUE, a low score means high quality; for the other metrics, a higher score means better quality.
5.1. Effect Analysis of Introducing ROI into IQA. In order to observe the influence of introducing ROI on the quality evaluation of underwater images, we combine ROI with the popular underwater no-reference IQA metrics. The block strategy mentioned in Section 3.1 is necessary because it helps us combine ROI with IQA. According to the block fusion strategy represented by (13), we combine image blocks with IQA and obtain a comprehensive quality score. We can then observe the change in correlation between the objective metrics and MOS before and after combining with ROI:
\[ \mathrm{ROI}_Q = \frac{\sum_{m \times n} W_R(i) \times Q(i)}{\sum_{m \times n} W_R(i)}. \tag{13} \]
W_R(i) represents the weight of the i-th image block, and Q(i) represents the objective quality score of that block under the metric; W_R(i) is either 0 or 1. That is to say, the difference between an IQA metric before and after being combined with ROI is that the original metric calculates the quality of the whole image, while the metric combined with ROI only calculates the image quality of the ROI. The results are shown in the first six lines of Table 2. They show that the correlation between the metric combined with ROI and MOS is higher than that of the original metric, which indicates that the combination of ROI and IQA is helpful.
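A minimal sketch of the block fusion of (13), assuming per-block quality scores and a binary ROI mask over the block grid:

```python
import numpy as np

def roi_quality(block_scores, roi_mask):
    """Fuse per-block scores Q(i) with binary ROI weights W_R(i) as in (13)."""
    q = np.asarray(block_scores, float)
    w = np.asarray(roi_mask, float)  # 1 inside the ROI, 0 elsewhere
    return np.sum(w * q) / np.sum(w)

# A 2x2 grid of blocks: only the two left blocks belong to the ROI.
scores = [[0.9, 0.1],
          [0.7, 0.2]]
mask = [[1, 0],
        [1, 0]]
print(round(roi_quality(scores, mask), 3))  # 0.8 -- average over ROI blocks only
```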
5.2. Performance Analysis of the Proposed NIPQ. We calculated the correlation between the various metrics and MOS in the database; the results are shown in Table 2. It can be seen that the correlation between the NIPQ metric and the subjective scores is significantly higher than that of the other metrics.
In order to compare the various NR IQA metrics intuitively, scatter diagrams between MOS and the estimated objective scores are drawn for the six selected NR IQA metrics and the NIPQ proposed in this paper, as shown in Figure 11. On this basis, the experimental data were regressed by the least squares method, and the regression line is also drawn. The better the fit of the scatter points, the better the correlation between the metric and MOS. The regression lines show that the correlation between NIPQ and MOS is clearly better than that of the other metrics, which validates the results of Table 2. It can be seen that LPSI and BRISQUE, metrics designed for images in air, are not applicable to underwater images. UIQM, UCIQE, and CCF are specially designed for underwater images, and on the whole their performance is better than that of the metrics for images in air. The performance of UICM, a submetric indicating chromaticity in UIQM, is slightly worse than that of UIQM. Compared with the scatter plots of the other NR IQA metrics, our NIPQ shows the best correlation with MOS. Although there are still some aberrant data points, generally speaking, the proposed NIPQ is more robust across the variety of typical underwater images contained in the database. Further analysis shows that some of these aberrant points are caused by the fact that the submetric C_f of the original image (without enhancement) is directly taken as 1 in our experiment.
As shown in Figures 12 and 13, there are two natural underwater images and their enhanced versions in the UOQ database. Table 3 shows the corresponding MOS and objective scores of these images, and Figure 14 shows the color distribution of their ROI. From these images, the ROI of the original image in Figure 12 is dark and that in Figure 13 is blue. The image enhanced by the histogram algorithm is reddish; the color distribution of its ROI is wider, but the color of the ROI is obviously oversaturated/pseudobright. There is no significant difference between the image processed by the Retinex algorithm and the original image. The color of the image processed by Fu's algorithm is not vibrant. For Figure 12, the overall difference between the white balance and the multifusion algorithm is small; the local graph (Figure 15) shows that the brightness distribution of the image processed by the multifusion algorithm is uneven and slightly oversaturated, while the image enhanced by the white balance algorithm has a better visual effect. For Figure 13, the image processed by the white balance algorithm is too dark and has
Table 2: Correlation between MOS and the quality scores of the objective evaluation metrics before and after integration with ROI.

Metric       PLCC     SROCC    KROCC    MAE     RMSE
UIQM        -0.173   -0.199   -0.132   0.751   0.903
ROI_UIQM     0.277    0.280    0.196   0.739   0.897
UCIQE        0.294    0.207    0.145   0.707   0.868
ROI_UCIQE    0.374    0.274    0.192   0.683   0.840
CCF          0.069    0.075    0.050   0.791   0.946
ROI_CCF      0.393    0.358    0.254   0.722   0.872
Var_Chr      0.158    0.180    0.125   0.674   0.841
UICM        -0.283   -0.338   -0.225   0.714   0.854
BRISQUE     -0.309   -0.265   -0.185   0.747   0.902
LPSI         0.323    0.245    0.169   0.734   0.898
C_d          0.481    0.465    0.335   0.635   0.789
C_f          0.478    0.432    0.303   0.658   0.806
Proposed     0.641    0.623    0.452   0.576   0.713
Figure 11: Scatter diagrams between MOS and the estimated objective scores: (a) LPSI; (b) BRISQUE; (c) UICM; (d) UIQM; (e) UCIQE; (f) CCF; (g) C_f; (h) C_d; (i) proposed.
Figure 12: (a) Original; (b) multifusion; (c) Fu; (d) white balance; (e) histogram equalization; (f) Retinex.
Figure 13: (a) Original; (b) multifusion; (c) Fu; (d) white balance; (e) histogram equalization; (f) Retinex.
Table 3: The corresponding MOS and objective scores of Figures 12 and 13.

Figure 12:
Metric       Ori      Multifusion  Fu       White balance  Histogram equalization  Retinex
MOS           2.550    4.500        4.100    4.550          3.050                   2.500
CCF          13.265   22.292       23.887   16.437         30.794                  13.379
ROI_CCF      20.821   33.389       31.274   29.108         26.409                  21.604
UCIQE         0.554    0.664        0.591    0.652          0.684                   0.569
ROI_UCIQE     0.560    0.647        0.573    0.627          0.580                   0.575
UIQM          3.983    4.543        4.850    3.969          4.780                   4.085
ROI_UIQM      5.585    5.589        5.495    5.672          5.055                   5.620
BRISQUE      16.303   26.934       31.708   17.824         36.762                  16.744
LPSI          0.926    0.901        0.910    0.923          0.912                   0.926
C_d           0.243    0.715        0.578    0.698          0.846                   0.324
C_f           1.000    0.802        0.637    0.827          0.464                   0.994
Proposed      0.243    0.574        0.368    0.577          0.392                   0.322

Figure 13:
Metric       Ori      Multifusion  Fu       White balance  Histogram equalization  Retinex
MOS           3.200    3.800        1.550    2.150          2.700                   3.250
CCF          31.443   31.465       37.069   18.688         36.928                  29.029
ROI_CCF      22.582   35.995       32.468   13.265         38.366                  23.097
UCIQE         0.519    0.628        0.623    0.476          0.693                   0.541
ROI_UCIQE     0.541    0.620        0.588    0.447          0.676                   0.564
UIQM          1.504    3.337        4.325    3.840          4.100                   2.182
ROI_UIQM      6.658    5.235        5.249    5.349          4.789                   5.160
BRISQUE       4.330   14.749       17.319    4.153         20.596                   4.441
LPSI          0.923    0.887        0.911    0.904          0.906                   0.926
C_d           0.475    0.730        0.317    0.029          0.640                   0.401
C_f           1.000    0.847        0.632    0.581          0.601                   0.972
Proposed      0.475    0.619        0.200    0.017          0.384                   0.390
Figure 14: ROI color distribution (in CIE XYZ space) of the images in Figures 12 and 13.
Figure 15: Local graphs of Figures 12(b) and 12(d).
a single color, while the image processed by the multifusion algorithm has a better visual effect.

Table 3 shows that the selected IQA metrics do not perform well in the quality assessment of images in the UOQ database. They generally give higher objective scores to images enhanced by the histogram equalization algorithm because the color distribution of those images is wider. This is a disadvantage of quality evaluation based purely on statistics: color fidelity is not taken into account. It can also be seen that if the performance of the original metric is not ideal, combining it with ROI will not necessarily improve the situation, because that is a limitation of the original metric itself.
6 Conclusion
Because of the characteristics of the water medium, color has become one of the important concerns in underwater image quality assessment. Color contains important information: severe selective color attenuation or pseudo-vividness can make it difficult to identify foreground content and to extract key, effective information from images. In this paper, a new underwater image evaluation metric, NIPQ, is proposed based on underwater environment characteristics and the HVS. The NIPQ is designed as a four-stage framework. The first stage focuses on the attention mechanism of the HVS. The second stage considers the influence of color richness in a statistical sense. The third stage is inspired by underwater image formation models and considers color fidelity from a pixel perspective. Finally, the fourth stage systematically integrates color richness and color fidelity for real-time quality monitoring. At the same time, the underwater image database UOQ with MOS is built to measure IQA performance. Experimental results show that, compared with other commonly used underwater metrics, the proposed NIPQ has a better correlation with MOS, which indicates better performance.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61571377, 61771412, and 61871336) and the Fundamental Research Funds for the Central Universities (20720180068).
References
[1] D. Akkaynak and T. Treibitz, "A revised underwater image formation model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6723–6732, Salt Lake City, UT, USA, March 2018.
[2] S. Bazeille, I. Quidu, L. Jaulin, and J.-P. Malkasse, "Automatic underwater image pre-processing," CMM'06, Brest, France, 2006.
[3] I. Avcibas, B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," Journal of Electronic Imaging, vol. 11, no. 2, pp. 206–223, 2002.
[4] D.-Y. Tsai, Y. Lee, and E. Matsuyama, "Information entropy measure for evaluation of image quality," Journal of Digital Imaging, vol. 21, no. 3, pp. 338–347, 2008.
[5] Y. Y. Fu, "Color image quality measures and retrieval," New Jersey Institute of Technology, Newark, NJ, USA, 2006.
[6] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012.
[7] Q. Wu, Z. Wang, and H. Li, "A highly efficient method for blind image quality assessment," in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 339–343, IEEE, Quebec City, Canada, September 2015.
[8] J. Kim, A.-D. Nguyen, and S. Lee, "Deep CNN-based blind image quality predictor," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 11–24, 2018.
[9] S. Bosse, D. Maniry, K.-R. Müller, T. Wiegand, and W. Samek, "Deep neural networks for no-reference and full-reference image quality assessment," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206–219, 2017.
[10] X. Liu, J. Van De Weijer, and A. D. Bagdanov, "RankIQA: learning from rankings for no-reference image quality assessment," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1040–1049, Venice, Italy, October 2017.
[11] K. Panetta, C. Gao, and S. Agaian, "Human-visual-system-inspired underwater image quality measures," IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541–551, 2016.
[12] M. Yang and A. Sowmya, "An underwater color image quality evaluation metric," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 6062–6071, 2015.
[13] Y. Wang, N. Li, Z. Li et al., "An imaging-inspired no-reference underwater color image quality assessment metric," Computers & Electrical Engineering, vol. 70, pp. 904–913, 2018.
[14] S. Kastner and L. G. Ungerleider, "Mechanisms of visual attention in the human cortex," Annual Review of Neuroscience, vol. 23, no. 1, pp. 315–341, 2000.
[15] L. Zhang, J. Chen, and B. Qiu, "Region-of-interest coding based on saliency detection and directional wavelet for remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 1, pp. 23–27, 2016.
[16] C. Zhu, K. Huang, and G. Li, "An innovative saliency guided ROI selection model for panoramic images compression," in Proceedings of the 2018 Data Compression Conference, p. 436, IEEE, Snowbird, UT, USA, March 2018.
[17] Z. Cui, J. Wu, H. Yu, Y. Zhou, and L. Liang, "Underwater image saliency detection based on improved histogram equalization," in Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, pp. 157–165, Springer, Singapore, 2019.
[18] L. Xiu, H. Jing, S. Min, and Z. Yang, "Saliency segmentation and foreground extraction of underwater image based on localization," in Proceedings of OCEANS 2016, Shanghai, China, 2016.
[19] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2814–2821, Columbus, OH, USA, June 2014.
[20] D. Akkaynak and T. Treibitz, "Sea-thru: a method for removing water from underwater images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1682–1691, Long Beach, CA, USA, April 2019.
[21] D. Akkaynak, T. Treibitz, T. Shlesinger, Y. Loya, R. Tamir, and D. Iluz, "What is the space of attenuation coefficients in underwater computer vision?" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4931–4940, Honolulu, HI, USA, 2017.
[22] D. Berman, T. Treibitz, and S. Avidan, "Diving into haze-lines: color restoration of underwater images," in Proceedings of the British Machine Vision Conference (BMVC), vol. 1, London, UK, September 2017.
[23] M. G. Solonenko and C. D. Mobley, "Inherent optical properties of Jerlov water types," Applied Optics, vol. 54, no. 17, pp. 5392–5401, 2015.
[24] E. Y. Lam, "Combining gray world and Retinex theory for automatic white balance in digital photography," in Proceedings of the Ninth International Symposium on Consumer Electronics, pp. 134–139, IEEE, Melbourne, Australia, July 2005.
[25] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, "A retinex-based enhancing approach for single underwater image," in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 4572–4576, IEEE, Paris, France, October 2014.
[26] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, "Enhancing underwater images and videos by fusion," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 81–88, IEEE, Providence, RI, USA, June 2012.
[27] B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68–75, 1999.
[28] E. H. Land, "The retinex theory of color vision," Scientific American, vol. 237, no. 6, pp. 108–128, 1977.
[29] D. Hasler and S. E. Suesstrunk, "Measuring colorfulness in natural images," in Human Vision and Electronic Imaging VIII, vol. 5007, pp. 87–95, International Society for Optics and Photonics, Bellingham, WA, USA, 2003.
where d_app(p, q) is the color similarity between blocks p and q, calculated as the Euclidean distance between the average colors of blocks p and q; d_spa(p, q) is the Euclidean distance between blocks p and q; and w_spa(p, q) is obtained after mapping according to (3), in which σ²_spa = 0.25. Finally, we use the method of maximum between-class variance (Otsu's method) to get the final ROI.
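Equation (3) itself is not reproduced in this chunk; as an illustrative sketch only, a Gaussian falloff of the spatial distance with σ²_spa = 0.25 (the mapping shape is an assumption, the σ² value comes from the text) would look like:

```python
import numpy as np

SIGMA2_SPA = 0.25  # sigma^2 value stated in the text

def w_spa(d_spa):
    """Map a (normalized) spatial distance between two blocks to a weight.

    Assumes the mapping in (3) is a Gaussian falloff -- an illustrative
    choice, not reproduced from the paper.
    """
    return np.exp(-d_spa ** 2 / (2 * SIGMA2_SPA))

def d_app(mean_color_p, mean_color_q):
    """Color similarity: Euclidean distance between block mean colors."""
    return float(np.linalg.norm(np.asarray(mean_color_p, float)
                                - np.asarray(mean_color_q, float)))

# Nearby blocks get a weight close to 1; distant blocks are suppressed.
print(round(w_spa(0.0), 3), round(w_spa(1.0), 3))  # 1.0 0.135
```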
3.2. Color Richness. With the aggravation of color attenuation, the colors of a natural underwater image become fewer and fewer, and the visibility of objects becomes worse. Therefore, the color richness of the ROI is a simple and fast metric to measure whether the color distortion of a natural underwater image is serious, which makes it suitable for image quality evaluation.
In this paper, color richness is measured by the spatial characteristics of color in the CIE XYZ color space. Color richness should include not only color diversity but also the lightness distribution, so the XYZ color space is a good choice: CIE XYZ can represent all colors, and the Y parameter measures lightness. According to the XYZ color space distributions of the two images shown in Figure 5, a wide distribution of image color along each of the three dimensions X, Y, and Z separately does not mean that the color richness is good, because the three components X, Y, and Z are correlated to some extent. So the spatial characteristics of color can better represent the distribution of color. According to (4), the color divergence of the image in XYZ color space is defined to determine the color richness of the image:
\[ C_d = \sum_{mn} \operatorname{dis}\!\left(P_{mn,\min}, P_{mn,\max}\right) \times \max\!\left(\operatorname{dis}\!\left(P_{mn}(i,j), \left[P_{mn,\min}, P_{mn,\max}\right]\right)\right) \times \frac{1}{2}, \tag{4} \]
where dis represents the shortest distance between two points or between a point and a line; mn ranges over the X–Y, Y–Z, and X–Z sections; and P_{mn,min} and P_{mn,max} represent the points closest to and farthest from the origin, respectively.
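For context (standard colorimetry, not the authors' code), the linear-sRGB to CIE XYZ conversion under D65 that such an analysis relies on is:

```python
import numpy as np

# Standard linear-sRGB -> CIE XYZ (D65) matrix.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def rgb_to_xyz(rgb):
    """Convert linear RGB values in [0, 1] (shape (..., 3)) to CIE XYZ."""
    return np.asarray(rgb, float) @ SRGB_TO_XYZ.T

# White (1, 1, 1) maps to the D65 white point; Y is the lightness component.
x, y, z = rgb_to_xyz([1.0, 1.0, 1.0])
print(round(y, 4))  # 1.0 -- the Y row of the matrix sums to 1
```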
3.3. Color Fidelity. As mentioned in Section 1, the enhanced image may be oversaturated/pseudobright (as shown in Figure 2). If too much attention is paid to color richness, the colors of the ROI will deviate from the colors of the real objects. Therefore, we should consider not only the color richness of the enhanced underwater image but also the color fidelity of the ROI, that is, whether the pixel intensities are within a reasonable range.
It is necessary to understand the formation and degradation of underwater images if we want to estimate a reasonable range of pixel intensities. The formation of an underwater image is dominated by the following factors [1, 20, 21]:
\[ I_c = J_c e^{-\beta_c^D \times z} + B_c^{\infty}\left(1 - e^{-\beta_c^B \times z}\right), \tag{5} \]
where C ∈ {R, G, B} is the color channel; I_c is the underwater image captured by the camera; J_c e^{−β^D_c × z} is the direct signal, recorded as D_c; B^∞_c (1 − e^{−β^B_c × z}) is the backscattered signal, recorded as B_c; z is the distance between the camera and the photographed object; B^∞_c is the veiling light; and J_c is the unattenuated scene, that is, the RGB intensity of the surface captured by a sensor with spectral response S_c(λ) at distance z_0 = 0 (generally z_0 is regarded as 0):
\[ J_c = \frac{1}{k_c} \int_{\lambda_1}^{\lambda_2} S_c(\lambda)\, \rho(\lambda)\, E(d, \lambda)\, d\lambda, \tag{6} \]
where k_c is the camera's scaling constant. β^D_c and β^B_c have a certain dependence on the distance z, the reflectivity ρ(λ), the ambient light E(d, λ), the camera's spectral response S_c(λ), the scattering coefficient b(λ), and the beam attenuation coefficient β(λ), as shown in (7) and (8), where z_0 and (z_0 + z) are the starting and ending points along the line of sight:
\[ \beta_c^D = \frac{\ln\left[ \int_{\lambda_1}^{\lambda_2} S_c(\lambda) \rho(\lambda) E(d,\lambda) e^{-\beta(\lambda) z_0}\, d\lambda \Big/ \int_{\lambda_1}^{\lambda_2} S_c(\lambda) \rho(\lambda) E(d,\lambda) e^{-\beta(\lambda)(z_0+z)}\, d\lambda \right]}{z}, \tag{7} \]
\[ \beta_c^B = -\frac{\ln\left[ \int_{\lambda_1}^{\lambda_2} S_c(\lambda) B^{\infty}(\lambda) e^{-\beta(\lambda) z}\, d\lambda \Big/ \int_{\lambda_1}^{\lambda_2} S_c(\lambda) B^{\infty}(\lambda)\, d\lambda \right]}{z}. \tag{8} \]
So we can calculate the unattenuated scene J_c as

\[ J_c = D_c e^{\beta_c^D \times z} = \left[ I_c - B_c^{\infty}\left(1 - e^{-\beta_c^B \times z}\right) \right] \times e^{\beta_c^D \times z}. \tag{9} \]
We need to estimate the reasonable range of values [J′_{c,min}, J′_{c,max}] of the RGB intensity of each pixel in the foreground (that is, the ROI). The color fidelity metric defined by (10) is calculated from the out-of-range part of the enhanced underwater image Ĵ_c. The process is shown in Figure 6.
Firstly, the veiling light B^∞_c of the underwater image I_c is estimated. Backscatter increases exponentially with z and eventually saturates [1]; in other words, at infinity, I_c = B^∞_c. Referring to [22], we assume an area without objects is visible in the image, in which the pixels' color is determined by the veiling light alone. Such areas are smooth and have no texture; this assumption often holds in the application scenarios of our IQA. First, the edge map of the image is generated. Then a threshold is set, and the largest connected pixel area is found. The veiling light is the average color of the pixels in these areas.
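A hedged sketch of that procedure (the edge detector and threshold are illustrative choices, not the paper's exact ones):

```python
import numpy as np
from scipy import ndimage

def estimate_veiling_light(img, edge_thresh=10.0):
    """Estimate the veiling light as the mean color of the largest smooth region.

    img: float array (H, W, 3). A pixel is 'smooth' when the local gradient
    magnitude of its gray level stays below edge_thresh. Illustrative sketch
    of the procedure described in the text.
    """
    gray = img.mean(axis=2)
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    smooth = np.hypot(gx, gy) < edge_thresh          # non-edge mask
    labels, n = ndimage.label(smooth)                # connected smooth regions
    if n == 0:
        return img.reshape(-1, 3).mean(axis=0)
    sizes = ndimage.sum(smooth, labels, index=range(1, n + 1))
    biggest = labels == (int(np.argmax(sizes)) + 1)
    return img[biggest].mean(axis=0)                 # average color = veiling light

# Synthetic test: flat bluish 'water' with a bright square object in it.
img = np.tile(np.array([20.0, 90.0, 110.0]), (64, 64, 1))
img[20:40, 20:40] = [240.0, 240.0, 240.0]
print(estimate_veiling_light(img))  # exactly the background color [20, 90, 110]
```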
Next, we estimate the type of water body in the underwater image I_c from the veiling light B^∞_c. The reason for estimating the water type is that the common notion that water attenuates red faster than blue/green only holds for oceanic water types [1]. We simplified the Jerlov water types [23] into blue water (Jerlov I–III), green water (Jerlov 1C–3C), and yellow water (Jerlov 5C–9C) and simulated the average RGB value of a perfect white surface's appearance under these three water types (data from [23], using a D65 light source, a Canon 5D Mark II camera, and ρ(λ) = 1). We calculate the Euclidean distance between the veiling light B^∞_c and each average RGB value and estimate the water body type based on this distance.
Then, we calculate the reasonable intensity range [J′_{c,min}, J′_{c,max}] of each pixel after enhancement of the underwater image. β^D_c varies most strongly with the range z [1], so besides the water body type, the most important quantity to estimate when calculating the range is the distance z. Due to the limitations of real conditions, the distance z of the
Figure 5: Color distribution in CIE XYZ space.
object in the image cannot be obtained, so it is necessary to roughly estimate the possible range of the distance z. For the foreground, the distance z from the camera is approximately the same everywhere; for the background, the distance z from the camera tends to infinity. We assume that the distance z from the camera is the same at each part of the foreground and that there may be white objects. Therefore, any distance z that keeps the J_c of the foreground pixels in the three RGB channels not greater than 255 and not less than 0 is considered a possible distance. In order to simplify the calculation, the attenuation coefficient β^D_c of white in the three color channels C ∈ {R, G, B} is adopted for all colors (using the ρ(λ) of white in the Macbeth ColorChecker).
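A hedged sketch of that feasibility search over z (toy coefficients; the paper uses white-patch attenuation values derived from [23]):

```python
import numpy as np

def feasible_distances(I_fg, B_inf, beta_D, beta_B, z_grid):
    """Keep the distances z for which the recovered scene J stays in [0, 255].

    I_fg: mean foreground RGB; the coefficients are per-channel arrays.
    Uses the inversion of (9): J = (I - B_inf*(1 - exp(-beta_B*z))) * exp(beta_D*z).
    """
    I_fg, B_inf = np.asarray(I_fg, float), np.asarray(B_inf, float)
    keep = []
    for z in z_grid:
        J = (I_fg - B_inf * (1.0 - np.exp(-beta_B * z))) * np.exp(beta_D * z)
        if np.all(J >= 0.0) and np.all(J <= 255.0):
            keep.append(float(z))
    return keep

zs = feasible_distances(I_fg=[60.0, 100.0, 120.0],
                        B_inf=[30.0, 120.0, 140.0],
                        beta_D=np.array([0.8, 0.3, 0.2]),
                        beta_B=np.array([0.9, 0.4, 0.3]),
                        z_grid=np.arange(0.0, 5.0, 0.5))
print(zs)  # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5] -- larger z pushes the red channel past 255
```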
Finally, the color fidelity defined by (10) is calculated:
\[ C_f = \left[ 1 - \frac{\mathrm{Sum}_{\mathrm{oor}}/255}{\mathrm{Num}_{\mathrm{oor}} \times 3} \right]^2, \tag{10} \]
where Num_oor represents the number of pixels in the ROI block and Sum_oor represents the total intensity deviation of the pixels that are not within the reasonable range.
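A sketch of (10), assuming the per-pixel bounds [J′_{c,min}, J′_{c,max}] have already been estimated:

```python
import numpy as np

def color_fidelity(roi, j_min, j_max):
    """C_f of (10): penalize intensity deviations outside [j_min, j_max].

    roi, j_min, j_max: arrays of shape (N, 3) -- ROI pixels and their
    per-pixel, per-channel reasonable bounds.
    """
    roi = np.asarray(roi, float)
    below = np.clip(j_min - roi, 0.0, None)   # deviation under the lower bound
    above = np.clip(roi - j_max, 0.0, None)   # deviation over the upper bound
    sum_oor = np.sum(below + above)           # Sum_oor: total deviation
    num_pixels = roi.shape[0]                 # Num_oor: pixels in the ROI block
    return (1.0 - (sum_oor / 255.0) / (num_pixels * 3)) ** 2

# Two pixels; the second overshoots its upper bound by 51 on each channel.
roi = np.array([[100.0, 100.0, 100.0], [251.0, 251.0, 251.0]])
j_min = np.zeros((2, 3))
j_max = np.array([[255.0, 255.0, 255.0], [200.0, 200.0, 200.0]])
print(round(color_fidelity(roi, j_min, j_max), 4))  # (1 - (153/255)/6)^2 = 0.81
```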
We make some qualitative analysis of the influence of the simplification on J′_{c,min} and J′_{c,max} during the calculation. As shown in Figure 7(b), (7) is used to calculate the broadband (RGB) attenuation coefficient β^D_c (using the ρ(λ) of the color blocks in the Macbeth ColorChecker, depth d = 1 m, distance z = 1 m) of seven common colors (red, orange, yellow, green, cyan, blue, and purple; Figure 7(a)) under all Jerlov water types. It can be seen that the β^D_c difference between the colors is not large in the same scene. Figure 7(c) shows the influence of different camera types on β^D_c in the three types of water bodies; the influence of camera parameters on the attenuation coefficient β^D_c is not significant. The experimental results in [1] also support this view.
3.4. NIPQ Metric. Sections 3.1, 3.2, and 3.3 above respectively introduced the ROI extraction method, color richness in the statistical sense, and color fidelity in the pixel sense. In this paper, the color richness of the ROI and the color fidelity of the ROI are combined by a multiplication model to obtain our NIPQ. The common multiparameter underwater image evaluation models UIQM [11], UCIQE [12], and CCF [13] use linear weighting to measure the comprehensive quality of an image. We consider that if one submetric points to a very low value (indicating low quality), the subjective impression of the whole image will be very poor regardless of the other metrics. Therefore, this paper uses a multiplication model to generate the overall underwater image quality assessment as follows:
\[ C_R = \overline{C_d(\mathrm{ROI})} \times C_f(\mathrm{ROI}), \tag{11} \]
where the overline represents normalization. C_R represents the color quality of the ROI block, C_R ∈ (0, 1); the larger the value, the higher the image quality.
The overall process of NIPQ is shown in Figure 8 and is divided into four steps. Firstly, the ROI of the original (not enhanced) image is extracted based on background/foreground separation. Then, the color richness of the ROI of the enhanced underwater image is estimated. Next, the ocean background information is extracted from the original image, from which the water body type and the reasonable range of the pixel intensity distribution are estimated; according to this estimated range, the ROI color fidelity of the enhanced underwater image is computed. Finally, the two metrics of color richness and color fidelity are integrated to obtain the comprehensive NIPQ metric for the whole underwater image.
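The multiplication model of (11) can be sketched as follows; min-max normalization of C_d is an assumption, since the paper only states that C_d is normalized:

```python
def nipq(c_d, c_f, c_d_min=0.0, c_d_max=1.0):
    """Combine ROI color richness and color fidelity by multiplication, as in (11).

    c_d is normalized to (0, 1) first (min-max is an assumed normalization
    scheme); c_f is already in (0, 1].
    """
    c_d_norm = (c_d - c_d_min) / (c_d_max - c_d_min)
    return c_d_norm * c_f

# A very unfaithful enhancement (c_f near 0) drags the score down even if the
# ROI is colorful -- the rationale for multiplying rather than averaging.
print(round(nipq(0.9, 0.05), 3))  # 0.045
print(round(nipq(0.9, 0.90), 3))  # 0.81
```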
4 UOQ Database
In order to better evaluate the performance of the NIPQ metric, we built an underwater optical image quality database, UOQ.
Image Selection. In order to fully consider various underwater scenes, we selected 36 typical underwater optical images with a size of 512 × 512. These images include blue water, green water, yellow water, dark light, single object, multiobject, simple texture and complex texture, and serious
Figure 6: The estimation process of color fidelity. Inputs: the original image (input image 1) and the enhanced image (input image 2). Steps: (1) veiling-light estimation; (2) determination of the water body type (blue, green, or yellow); (3) estimation of the reasonable range of pixel intensity of the ROI after underwater image enhancement processing; (4) calculation of the deviation beyond the reasonable range, yielding the color fidelity.
color distortion and slight color distortion. Considering that there is no general ROI-related dataset in the field of underwater images, we label their foreground regions (ROI) pixel by pixel to prove the reliability of the ROI used in this paper. We then use five popular image enhancement algorithms (the white balance algorithm [24], Fu's algorithm [25], the multifusion algorithm [26], histogram equalization [27], and Retinex [28]) to process these 36 natural images, obtaining 180 enhanced images. Some images and their enhanced versions processed by the white balance algorithm [24] are shown in Figure 9.
Evaluation Methods and Evaluation Protocols In this da-tabase the single incentive evaluation method is usedVolunteers only watch one image to be evaluated each timeand each image only appears once in a round of evaluationAfter each image was displayed volunteers gave subjectivequality scores to the corresponding images Underwateroptical images usually have practical applications so vol-unteers will not be affected by any aesthetic factors in theprocess of subjective quality assessment and the evaluationprotocols are shown in Table 1
Choosing Volunteers In order to avoid the evaluation biascaused by prior knowledge none of the volunteers had theexperience of image quality assessment We consider thestrong application background of underwater images so allvolunteers selected are graduate students with relevant workexperience in underwater acoustic communication under-water detection and so on
All the obtained subjective scores are used to calculatethe mean opinion scores (MOS) Note Sij as the subjectivescore of the image j by the i-th volunteer and Nj as thenumber of subjective scores obtained by imagej MOS iscalculated as follows
MOSj I
Nj
1113944i
Sij (12)
We draw a histogram about MOS of all images in thedatabase as shown in Figure 10 It can be seen that our imagecovers a wide range of quality which is conducive to thedesign of IQA And there are many images with scores in themiddle score segment because the volunteer will try to avoidgiving extreme scores when scoring images It also can be
ROI
Image informationof ROI
Color fidelity
Color richness
Q
Extract relevant information
Iamge enhancement algorithm
Input image 1
Input image 2
Figure 8 Overall process of NIPQ
Red
Orange
Yellow
Green
Cyan
Blue
Purple
(a)
5
50
4
βD B 3
2
1
4β DG
3 21 0
βDR0 1 2 3 4 5
Jerlov IJerlov IAJerlov IBJerlov II
Jerlov IIIJerlov 1CJerlov 3C
Jerlov 5CJerlov 7CJerlov 9C
(b)
5
50
4
4
3
3
2
2
1
10 0 1 2 3 4 5
βD B
β DG
βDR
(c)
Figure 7 Qualitative analysis on the influence of simplification on βDc (a) Seven common colors (b) βD
c of different colors in different waterbodies (c) βD
c of different colors under different cameras We simplify the types of water into three It can be seen from (b) that thecalculation error of yellow water caused by simplification is larger than that of blue and green water body caused by simplification ampeyellow water body in the underwater image is not common so the simplification of water body type is applicable to most occasions
Scientific Programming 9
seen that lower-quality images slightly outnumber higher-quality underwater images. This is because most underwater images are bluish-green with poor contrast, and sometimes the quality of the enhanced image is still not ideal. In practical applications, more robust enhancement algorithms will be built into the underwater image enhancement algorithm database of the display module mentioned in Section 2.
5 Experiment
In combination with the UOQ database, we mainly evaluate the performance of IQA through five criteria. The prediction monotonicity of IQA is measured by the Spearman rank-order correlation coefficient (SROCC) and Kendall's rank-order correlation coefficient (KROCC). The prediction accuracy of IQA is measured by the Pearson linear correlation
Table 1: Evaluation protocols.

Score  Comprehensive feelings
5      The subjective feeling is excellent; foreground information is recognizable, and no color distortion is felt.
4      The subjective feeling is good; the foreground information is visible and recognizable; there is a small amount of perceptual distortion, but it does not affect the extraction of important information.
3      The subjective feeling is average; part of the information in the foreground is damaged, and a small amount of important information is lost due to distortion.
2      The subjective perception is poor, and only the general outline of the foreground content can be distinguished; the distortion leads to the loss of some important information.
1      The subjective feeling is very poor; it is difficult to recognize the foreground content, and it is almost impossible to extract any effective information from the image.
Figure 10: (a) Frequency histogram of MOS; (b) MOS of all images.
Figure 9: Underwater images processed by the white balance algorithm [24]. (a)–(f) are the original images; their MOS are 2.40, 1.70, 3.00, 1.30, 2.55, and 4.05, respectively. (g)–(l) are the enhanced images; their MOS are 1.05, 1.15, 2.05, 2.80, 4.55, and 1.15, respectively.
coefficient (PLCC). Root mean square error (RMSE) is used to measure the prediction consistency of IQA, and the mean absolute error (MAE) is also used to evaluate performance. High values (close to 1) of SROCC, PLCC, and KROCC and low values (close to 0) of RMSE and MAE indicate that an IQA metric correlates better with subjective scores.
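These five criteria can be sketched in plain Python as follows (`_ranks`, `plcc`, and the other helper names are introduced here for illustration; in practice, standard statistics libraries provide these measures directly):

```python
def _ranks(xs):
    """1-based average ranks; tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1           # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def plcc(a, b):
    """Pearson linear correlation coefficient (prediction accuracy)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def srocc(a, b):
    """Spearman rank-order correlation = PLCC of the ranks."""
    return plcc(_ranks(a), _ranks(b))

def krocc(a, b):
    """Kendall tau-a: (concordant - discordant) / (n*(n-1)/2)."""
    n, s = len(a), 0
    for i in range(n):
        for j in range(i + 1, n):
            da, db = a[i] - a[j], b[i] - b[j]
            sa = (da > 0) - (da < 0)
            sb = (db > 0) - (db < 0)
            s += sa * sb
    return s / (n * (n - 1) / 2)

def rmse(a, b):
    """Root mean square error (prediction consistency)."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def mae(a, b):
    """Mean absolute error."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

print(srocc([1, 2, 3, 4, 5], [1.1, 1.9, 3.2, 3.8, 5.0]))  # 1.0
```

A perfectly monotonic predictor reaches SROCC = KROCC = 1 even when its raw values differ from MOS, which is exactly why monotonicity and accuracy are reported separately.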
The selected IQA metrics for performance comparison include the following:

(1) The popular underwater no-reference metrics: UIQM [11], UCIQE [12], and CCF [13]
(2) The popular no-reference metrics for images in air: BRISQUE [6] and LPSI [7]
(3) Common color metrics for underwater images: UICM [11] and variance of chromaticity (Var_Chr) [29]

For BRISQUE, a low score means high quality; for the other metrics, the higher the score, the better the quality.
5.1 Effect Analysis of Introducing ROI into IQA. To observe the influence of introducing ROI on the quality evaluation of underwater images, we combine ROI with the popular underwater no-reference IQA metrics. The block strategy mentioned in Section 3.1 is necessary because it helps combine ROI with IQA. According to the block fusion strategy represented by (13), we combine the image blocks with an IQA metric and obtain a comprehensive quality score. We can then observe the change in correlation between the objective metrics and MOS before and after combining with ROI:
ROIQ = [Σ_{i=1}^{m×n} W_R(i) × Q(i)] / [Σ_{i=1}^{m×n} W_R(i)],  (13)
where W_R(i) represents the weight of the i-th image block and Q(i) represents its objective quality score under the metric; W_R(i) is either 0 or 1. In other words, the only difference between an IQA metric before and after combination with ROI is that the original metric scores the whole image, while the ROI-combined metric scores only the ROI. The results are shown in the first six lines of Table 2: the correlation between each ROI-combined metric and MOS is higher than that of the original metric, which shows that combining ROI with IQA is helpful.
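The fusion in (13) with binary ROI weights can be sketched as follows (the block scores and the ROI mask below are hypothetical):

```python
def roi_quality(block_scores, roi_weights):
    """Fuse per-block scores Q(i) with binary weights W_R(i) as in (13):
    only blocks inside the ROI (weight 1) contribute to the score."""
    num = sum(w * q for w, q in zip(roi_weights, block_scores))
    den = sum(roi_weights)
    return num / den

# Hypothetical m x n = 2 x 2 grid flattened to 4 blocks; only the
# first two blocks belong to the ROI.
print(roi_quality([0.75, 0.25, 0.2, 0.1], [1, 1, 0, 0]))  # 0.5
```

With 0/1 weights, the formula reduces to the mean objective score over the ROI blocks, which is exactly the "score only the ROI" behavior described above.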
5.2 Performance Analysis of the Proposed NIPQ. We calculated the correlation between the various metrics and MOS on the database; the results are shown in Table 2. It can be seen that the correlation between the NIPQ metric and the subjective scores is significantly higher than that of the other metrics.
To compare the various NR IQA metrics intuitively, scatter diagrams between MOS and the estimated objective scores are drawn for six selected NR IQA metrics and the NIPQ proposed in this paper, as shown in Figure 11. On this basis, the experimental data were regressed by the least squares method, and the fitted line is also drawn. The better the scatter points fit the line, the better the correlation between the metric and MOS. The regression lines show that the correlation between NIPQ and MOS is clearly better than that of the other metrics, validating the results of Table 2. LPSI and BRISQUE are designed for images in air and are not applicable to underwater images. UIQM, UCIQE, and CCF are specially designed for underwater images, and their performance is better than that of the in-air metrics. The performance of UICM, a sub-metric indicating chromaticity in UIQM, is slightly worse than that of UIQM. Compared with the scatter plots of the other NR IQA metrics, our NIPQ shows the best correlation with MOS. Although there are still some aberrant data points, the proposed NIPQ is, generally speaking, robust to the variety of typical, representative underwater images contained in the database. Further analysis shows that some of these aberrant points arise because the sub-metric C_f of an original (unenhanced) image is directly taken as 1 in our experiment.
As shown in Figures 12 and 13, there are two natural underwater images and their enhanced versions in the UOQ database. Table 3 shows the corresponding MOS and objective scores of these images, and Figure 14 shows the color distribution of their ROIs. The ROI of the original image in Figure 12 is dark, and that in Figure 13 is blue. The image enhanced by the histogram algorithm is reddish; the color distribution of its ROI is wider, but the ROI color is obviously oversaturated/pseudo-bright. There is no significant difference between the image processed by the Retinex algorithm and the original image. The colors of the image processed by Fu's algorithm are not vibrant. For Figure 12, the overall difference between the white balance and multifusion algorithms is small; the local graph (Figure 15) shows that the brightness distribution of the image processed by the multifusion algorithm is uneven and slightly oversaturated, while the image enhanced by the white balance algorithm has a better visual effect. For Figure 13, the image processed by the white balance algorithm is too dark and has
Table 2: Correlation between MOS and the quality scores of objective evaluation metrics before and after integration with ROI.

Metric      PLCC     SROCC    KROCC    MAE     RMSE
UIQM        −0.173   −0.199   −0.132   0.751   0.903
ROI_UIQM     0.277    0.280    0.196   0.739   0.897
UCIQE        0.294    0.207    0.145   0.707   0.868
ROI_UCIQE    0.374    0.274    0.192   0.683   0.840
CCF          0.069    0.075    0.050   0.791   0.946
ROI_CCF      0.393    0.358    0.254   0.722   0.872
Var_Chr      0.158    0.180    0.125   0.674   0.841
UICM        −0.283   −0.338   −0.225   0.714   0.854
BRISQUE     −0.309   −0.265   −0.185   0.747   0.902
LPSI         0.323    0.245    0.169   0.734   0.898
C_d          0.481    0.465    0.335   0.635   0.789
C_f          0.478    0.432    0.303   0.658   0.806
Proposed     0.641    0.623    0.452   0.576   0.713
Figure 11: Scatter diagrams between MOS and the estimated objective scores: (a) LPSI, (b) BRISQUE, (c) UICM, (d) UIQM, (e) UCIQE, (f) CCF, (g) C_f, (h) C_d, and (i) proposed.
Figure 12: (a) original, (b) multifusion, (c) Fu, (d) white balance, (e) histogram equalization, and (f) Retinex.

Figure 13: (a) original, (b) multifusion, (c) Fu, (d) white balance, (e) histogram equalization, and (f) Retinex.
Table 3: The corresponding MOS and objective scores of Figures 12 and 13.

Figure 12:
Metric       Ori      Multifusion  Fu       White balance  Histogram eq.  Retinex
MOS          2.550    4.500        4.100    4.550          3.050          2.500
CCF          13.265   22.292       23.887   16.437         30.794         13.379
ROI_CCF      20.821   33.389       31.274   29.108         26.409         21.604
UCIQE        0.554    0.664        0.591    0.652          0.684          0.569
ROI_UCIQE    0.560    0.647        0.573    0.627          0.580          0.575
UIQM         3.983    4.543        4.850    3.969          4.780          4.085
ROI_UIQM     5.585    5.589        5.495    5.672          5.055          5.620
BRISQUE      16.303   26.934       31.708   17.824         36.762         16.744
LPSI         0.926    0.901        0.910    0.923          0.912          0.926
C_d          0.243    0.715        0.578    0.698          0.846          0.324
C_f          1.000    0.802        0.637    0.827          0.464          0.994
Proposed     0.243    0.574        0.368    0.577          0.392          0.322

Figure 13:
Metric       Ori      Multifusion  Fu       White balance  Histogram eq.  Retinex
MOS          3.200    3.800        1.550    2.150          2.700          3.250
CCF          31.443   31.465       37.069   18.688         36.928         29.029
ROI_CCF      22.582   35.995       32.468   13.265         38.366         23.097
UCIQE        0.519    0.628        0.623    0.476          0.693          0.541
ROI_UCIQE    0.541    0.620        0.588    0.447          0.676          0.564
UIQM         1.504    3.337        4.325    3.840          4.100          2.182
ROI_UIQM     6.658    5.235        5.249    5.349          4.789          5.160
BRISQUE      4.330    14.749       17.319   4.153          20.596         4.441
LPSI         0.923    0.887        0.911    0.904          0.906          0.926
C_d          0.475    0.730        0.317    0.029          0.640          0.401
C_f          1.000    0.847        0.632    0.581          0.601          0.972
Proposed     0.475    0.619        0.200    0.017          0.384          0.390
Figure 14: ROI color distribution (in CIE XYZ space) of Figures 12 and 13.

Figure 15: Local graphs of Figures 12(b) and 12(d).
a single color; the image processed by the multifusion algorithm has a better visual effect.
Table 3 shows that the selected IQA metrics do not perform well on the images in the UOQ database. They generally give higher objective scores to images enhanced by the histogram equalization algorithm because the color distribution of those images is wider. This is a disadvantage of statistics-based quality evaluation: color fidelity is not taken into account. It can also be seen that if the performance of the original metric is not ideal, combining it with ROI will not necessarily improve the situation, because this is a limitation of the original metric itself.
6 Conclusion
Because of the characteristics of the water medium, color has become one of the important concerns in underwater image quality assessment. Color contains important information: severe selective color attenuation/pseudo-vividness can make it difficult to identify foreground content and to extract key, effective information from images. In this paper, a new underwater image evaluation metric, NIPQ, is proposed based on the characteristics of the underwater environment and the HVS. The NIPQ is designed in a four-stage framework. The first stage focuses on the attention mechanism of the HVS. The second stage considers the influence of color richness in a statistical sense. The third stage is inspired by underwater image formation models and considers color fidelity from a pixel perspective. Finally, in the fourth stage, color richness and color fidelity are systematically integrated for real-time quality monitoring. At the same time, the relevant underwater image database UOQ, with MOS, is built to measure IQA performance. Experimental results show that, compared with other commonly used underwater metrics, the proposed NIPQ has a better correlation with MOS, which indicates better performance.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61571377, 61771412, and 61871336) and the Fundamental Research Funds for the Central Universities (20720180068).
References
[1] D. Akkaynak and T. Treibitz, "A revised underwater image formation model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6723–6732, Salt Lake City, UT, USA, March 2018.
[2] S. Bazeille, I. Quidu, L. Jaulin, and J.-P. Malkasse, "Automatic underwater image pre-processing," CMM'06, Brest, France, 2006.
[3] I. Avcibas, B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," Journal of Electronic Imaging, vol. 11, no. 2, pp. 206–223, 2002.
[4] D.-Y. Tsai, Y. Lee, and E. Matsuyama, "Information entropy measure for evaluation of image quality," Journal of Digital Imaging, vol. 21, no. 3, pp. 338–347, 2008.
[5] Y. Y. Fu, "Color image quality measures and retrieval," New Jersey Institute of Technology, Newark, NJ, USA, 2006.
[6] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012.
[7] Q. Wu, Z. Wang, and H. Li, "A highly efficient method for blind image quality assessment," in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 339–343, IEEE, Quebec City, Canada, September 2015.
[8] J. Kim, A.-D. Nguyen, and S. Lee, "Deep CNN-based blind image quality predictor," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 11–24, 2018.
[9] S. Bosse, D. Maniry, K.-R. Müller, T. Wiegand, and W. Samek, "Deep neural networks for no-reference and full-reference image quality assessment," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206–219, 2017.
[10] X. Liu, J. Van De Weijer, and A. D. Bagdanov, "RankIQA: learning from rankings for no-reference image quality assessment," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1040–1049, Venice, Italy, October 2017.
[11] K. Panetta, C. Gao, and S. Agaian, "Human-visual-system-inspired underwater image quality measures," IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541–551, 2016.
[12] M. Yang and A. Sowmya, "An underwater color image quality evaluation metric," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 6062–6071, 2015.
[13] Y. Wang, N. Li, Z. Li, et al., "An imaging-inspired no-reference underwater color image quality assessment metric," Computers & Electrical Engineering, vol. 70, pp. 904–913, 2018.
[14] S. Kastner and L. G. Ungerleider, "Mechanisms of visual attention in the human cortex," Annual Review of Neuroscience, vol. 23, no. 1, pp. 315–341, 2000.
[15] L. Zhang, J. Chen, and B. Qiu, "Region-of-interest coding based on saliency detection and directional wavelet for remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 1, pp. 23–27, 2016.
[16] C. Zhu, K. Huang, and G. Li, "An innovative saliency guided ROI selection model for panoramic images compression," in Proceedings of the 2018 Data Compression Conference, p. 436, IEEE, Snowbird, UT, USA, March 2018.
[17] Z. Cui, J. Wu, H. Yu, Y. Zhou, and L. Liang, "Underwater image saliency detection based on improved histogram equalization," in Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, pp. 157–165, Springer, Singapore, 2019.
[18] L. Xiu, H. Jing, S. Min, and Z. Yang, "Saliency segmentation and foreground extraction of underwater image based on localization," in Proceedings of the OCEANS 2016, Shanghai, China, 2016.
[19] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2814–2821, Columbus, OH, USA, June 2014.
[20] D. Akkaynak and T. Treibitz, "Sea-thru: a method for removing water from underwater images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1682–1691, Long Beach, CA, USA, April 2019.
[21] D. Akkaynak, T. Treibitz, T. Shlesinger, Y. Loya, R. Tamir, and D. Iluz, "What is the space of attenuation coefficients in underwater computer vision?" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4931–4940, Honolulu, HI, USA, 2017.
[22] D. Berman, T. Treibitz, and S. Avidan, "Diving into haze-lines: color restoration of underwater images," in Proceedings of the British Machine Vision Conference (BMVC), vol. 1, London, UK, September 2017.
[23] M. G. Solonenko and C. D. Mobley, "Inherent optical properties of Jerlov water types," Applied Optics, vol. 54, no. 17, pp. 5392–5401, 2015.
[24] E. Y. Lam, "Combining gray world and retinex theory for automatic white balance in digital photography," in Proceedings of the Ninth International Symposium on Consumer Electronics, 2005, pp. 134–139, IEEE, Melbourne, Australia, July 2005.
[25] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, "A retinex-based enhancing approach for single underwater image," in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 4572–4576, IEEE, Paris, France, October 2014.
[26] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, "Enhancing underwater images and videos by fusion," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 81–88, IEEE, Providence, RI, USA, June 2012.
[27] B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Transactions on Consumer Electronics, vol. 45, no. 1, p. 75.
[28] E. H. Land, "The retinex theory of color vision," Scientific American, vol. 237, no. 6, pp. 108–128, 1977.
[29] D. Hasler and S. E. Suesstrunk, "Measuring colorfulness in natural images," in Human Vision and Electronic Imaging VIII, vol. 5007, pp. 87–95, International Society for Optics and Photonics, Bellingham, WA, USA, 2003.
enhanced underwater image Ĵ_c. The process is shown in Figure 6.
Firstly, the veiling light B∞_c of the underwater image I_c is estimated. Backscatter increases exponentially with distance z and eventually saturates [1]; in other words, at infinity, I_c = B∞_c. Referring to [22], we assume that an area without objects is visible in the image, in which the pixels' color is determined by the veiling light alone. Such areas are smooth and have no texture, and this assumption often holds in the application scenarios of our IQA. First, the edge graph of the image is generated. Then a threshold is set and the largest connected pixel area is found. The veiling light is the average color of the pixels in this area.
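A minimal sketch of this veiling-light estimate might look as follows; it marks low-gradient pixels as smooth instead of building an explicit edge graph, and the threshold value is an assumption of the sketch:

```python
import numpy as np
from collections import deque

def estimate_veiling_light(img, grad_thresh=0.02):
    """Sketch of the veiling-light step: mark smooth (low-gradient)
    pixels, take the largest 4-connected smooth region, and return its
    average color. `img` is an HxWx3 array with values in [0, 1]."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    smooth = np.hypot(gx, gy) < grad_thresh   # texture-free candidates

    h, w = smooth.shape
    seen = np.zeros((h, w), dtype=bool)
    best = []
    for sy in range(h):
        for sx in range(w):
            if smooth[sy, sx] and not seen[sy, sx]:
                seen[sy, sx] = True
                comp, queue = [], deque([(sy, sx)])
                while queue:                  # BFS over smooth pixels
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and smooth[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    ys, xs = np.array(best).T
    return img[ys, xs].mean(axis=0)           # B_inf_c: average color
```

On a synthetic image whose upper half is a flat "water" color and whose lower half is random texture, the estimate recovers the flat color, since the textured pixels fail the smoothness test.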
Next, we estimate the type of water body in the underwater image I_c from the veiling light B∞_c. The reason for estimating the water type is that the common notion that water attenuates red faster than blue/green only holds for oceanic water types [1]. We simplified the Jerlov water types [23] into blue water (Jerlov I–III), green water (Jerlov 1C–3C), and yellow water (Jerlov 5C–9C) and simulated the average RGB appearance of a perfect white surface under these three water types (data from [23], using a D65 light source, a Canon 5D Mark II camera, and ρ(λ) = 1). We then calculate the Euclidean distance between the veiling light B∞_c and each simulated average RGB value and estimate the water body type from these distances.
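The nearest-reference classification can be sketched as below; note that the reference RGB values here are illustrative placeholders, not the values actually simulated from [23]:

```python
# Illustrative reference veiling-light colors for the three simplified
# water types. The paper derives these from Jerlov data [23]; the RGB
# triples below are placeholders for the sketch.
WATER_REFS = {
    "blue":   (0.20, 0.45, 0.75),
    "green":  (0.25, 0.65, 0.45),
    "yellow": (0.55, 0.50, 0.25),
}

def classify_water(veiling_light):
    """Return the water type whose reference RGB lies at the smallest
    Euclidean distance from the estimated veiling light."""
    def dist(ref):
        return sum((a - b) ** 2
                   for a, b in zip(veiling_light, ref)) ** 0.5
    return min(WATER_REFS, key=lambda name: dist(WATER_REFS[name]))

print(classify_water((0.18, 0.42, 0.80)))  # blue
```

Because only three reference points are compared, the classification is cheap enough to run per frame in the display module.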
Then we calculate the reasonable intensity range [J′_c,min, J′_c,max] of each pixel after enhancement of the underwater image. β^D_c varies most strongly with range z [1], so besides the water body type, the most important quantity for calculating this range is the distance z. Due to the limitations of real conditions, the distance z of the
Figure 5: Color distribution in CIE XYZ space.
object in the image cannot be obtained, so it is necessary to roughly estimate its possible range. For the foreground, the distance z from the camera is approximately the same; for the background, the distance z tends to infinity. We assume that the distance z is the same for each part of the foreground and that white objects may be present. Therefore, any distance z that keeps the foreground pixels' J_c within [0, 255] for all three RGB channels is considered a possible distance. To simplify the calculation, the attenuation coefficient β^D_c of white in the three color channels c ∈ {R, G, B} is adopted for all colors (using ρ(λ) from the Macbeth ColorChecker).
Finally, the color fidelity defined by (10) is calculated:
C_f = [1 − (Sum_oor/255) / (Num_oor × 3)]²,  (10)
where Num_oor represents the number of pixels in the ROI block and Sum_oor represents the total intensity deviation of pixel values that fall outside the reasonable range.
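Under one reading of (10), in which Sum_oor accumulates the per-channel deviation beyond the reasonable range, the computation can be sketched as:

```python
def color_fidelity(roi_pixels, j_min, j_max):
    """Sketch of (10). Sum_oor accumulates how far each channel value
    of each ROI pixel strays outside its reasonable range
    [j_min, j_max]; Num_oor is the number of pixels in the ROI block.
    Intensities are 0..255 and every pixel has 3 channels."""
    num_oor = len(roi_pixels)
    sum_oor = 0.0
    for pixel, lo, hi in zip(roi_pixels, j_min, j_max):
        for v, lo_c, hi_c in zip(pixel, lo, hi):
            if v < lo_c:
                sum_oor += lo_c - v     # deviation below the range
            elif v > hi_c:
                sum_oor += v - hi_c     # deviation above the range
    return (1.0 - (sum_oor / 255.0) / (num_oor * 3)) ** 2

# Two hypothetical pixels; every channel inside its range -> C_f = 1.
print(color_fidelity([(50, 80, 90), (60, 70, 110)],
                     [(0, 0, 0)] * 2, [(255, 255, 255)] * 2))  # 1.0
```

C_f equals 1 when every channel of every ROI pixel stays inside its range and decreases toward 0 as the total out-of-range deviation grows.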
We make some qualitative analysis of the influence of the simplification on J′_c,min and J′_c,max during the calculation. As shown in Figure 7(b), (8) is used to calculate the broadband (RGB) attenuation coefficient β^D_c (using the ρ(λ) of the color blocks in the Macbeth ColorChecker, depth d = 1 m, distance z = 1 m) of seven common colors (red, orange, yellow, green, cyan, blue, and purple; Figure 7(a)) under all Jerlov water types. It can be seen that the β^D_c difference of each color is not large in the same scene. Figure 7(c) shows the influence of different camera types on β^D_c in the three types of water bodies; the influence of camera parameters on the attenuation coefficient β^D_c is not significant. The experimental results in [1] also support this view.
3.4 NIPQ Metric. Sections 3.1, 3.2, and 3.3 above respectively introduced the ROI extraction method, color richness in a statistical sense, and color fidelity in a pixel sense. In this paper, the color richness of the ROI and the color fidelity of the ROI are combined by a multiplication model to obtain our NIPQ. The common multiparameter underwater image evaluation models UIQM [11], UCIQE [12], and CCF [13] use linear weighting to measure the comprehensive quality of an image. We consider that if one sub-metric points to a very low value (indicating low quality), the subjective feeling of the whole image will be very poor regardless of the other sub-metrics. Therefore, this paper uses a multiplication model to generate the overall underwater image quality assessment as follows:
CR = C̄_d(ROI) × C_f(ROI),  (11)

where the bar denotes normalization and CR represents the color quality of the ROI block, CR ∈ (0, 1); the larger the value, the higher the image quality.
The overall process of NIPQ is shown in Figure 8 and is divided into four steps. Firstly, the ROI of the original (unenhanced) image is extracted based on the background/foreground. Secondly, the color richness of the ROI of the enhanced underwater image is estimated. Thirdly, the ocean background information is extracted from the original image, from which the water body type and the reasonable range of pixel intensity distribution are estimated; according to this range, the ROI color fidelity of the enhanced underwater image is computed. Finally, the two metrics of color richness and color fidelity are integrated to obtain the comprehensive NIPQ metric for the whole underwater image.
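The final multiplicative fusion of (11) can be sketched as follows; the min-max normalization bounds for C_d are an assumption of this sketch, since the normalization itself is not spelled out here:

```python
def nipq(c_d, c_f, c_d_min=0.0, c_d_max=1.0):
    """Multiplicative fusion of (11): the normalized color richness of
    the ROI times its color fidelity. With a product, one sub-metric
    near zero pulls the whole score down, unlike linear weighting.
    The min-max bounds (c_d_min, c_d_max) are assumed, not from the
    paper."""
    c_d_norm = (c_d - c_d_min) / (c_d_max - c_d_min)
    return c_d_norm * c_f

print(nipq(0.5, 0.5))  # 0.25
```

Note the design choice the product encodes: nipq(0.0, 1.0) is 0 no matter how high the fidelity term is, whereas a linear weighting would still report a middling score.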
4 UOQ Database
In order to better evaluate the performance of the NIPQ metric, we built an underwater optical image quality database, UOQ.
Image Selection. To fully cover various underwater scenes, we selected 36 typical underwater optical images with a size of 512 × 512. These images include blue water, green water, and yellow water; dark lighting; single objects and multiple objects; simple and complex textures; and serious
Figure 6: The estimation process of color fidelity: from input image 1 and the enhanced input image 2, the steps are (1) veiling-light estimation; (2) determination of the water body type (blue, green, or yellow); (3) estimation of the reasonable range of ROI pixel intensity after underwater image enhancement processing; and (4) calculation of the deviation beyond the reasonable range, which yields the color fidelity.
color distortion and slight color distortion. Considering that there is no general ROI-related dataset in the field of underwater imaging, we label the foreground region (ROI) of each image pixel by pixel to support the ROI used in this paper. We then use five popular image enhancement algorithms (the white balance algorithm [24], Fu's algorithm [25], the multifusion algorithm [26], histogram equalization [27], and Retinex [28]) to process these 36 natural images, obtaining 180 enhanced images. Some images and their versions processed by the white balance algorithm [24] are shown in Figure 9.
As shown in Figures 12 and 13, there are two natural underwater images and their enhanced versions in the UOQ database. Table 3 shows the corresponding MOS and objective scores of these images, and Figure 14 shows the color distributions of their ROIs. From these images, the ROI of the original image in Figure 12 is dark, and that in Figure 13 is blue. The images enhanced by the histogram equalization algorithm are reddish, and the color distribution of their ROIs is wider, but the ROI color is obviously oversaturated/pseudobright. There is no significant difference between the images processed by the Retinex algorithm and the originals. The colors of the images processed by Fu's algorithm are not vibrant. For Figure 12, the overall difference between the white balance and the multifusion algorithm is small; the local graph (Figure 15) shows that the brightness distribution of the image processed by the multifusion algorithm is uneven and slightly oversaturated, while the image enhanced by the white balance algorithm has a better visual effect. For Figure 13, the image processed by the white balance algorithm is too dark and has
Table 2: Correlation between MOS and the quality scores of the objective evaluation metrics before and after integration with the ROI.

Metric       PLCC     SROCC    KROCC    MAE     RMSE
UIQM        -0.173   -0.199   -0.132   0.751   0.903
ROI_UIQM     0.277    0.280    0.196   0.739   0.897
UCIQE        0.294    0.207    0.145   0.707   0.868
ROI_UCIQE    0.374    0.274    0.192   0.683   0.840
CCF          0.069    0.075    0.050   0.791   0.946
ROI_CCF      0.393    0.358    0.254   0.722   0.872
Var_Chr      0.158    0.180    0.125   0.674   0.841
UICM        -0.283   -0.338   -0.225   0.714   0.854
BRISQUE     -0.309   -0.265   -0.185   0.747   0.902
LPSI         0.323    0.245    0.169   0.734   0.898
C_d          0.481    0.465    0.335   0.635   0.789
C_f          0.478    0.432    0.303   0.658   0.806
Proposed     0.641    0.623    0.452   0.576   0.713
Scientific Programming 11
Figure 11: Scatter diagrams between MOS and the estimated objective scores: (a) LPSI, (b) BRISQUE, (c) UICM, (d) UIQM, (e) UCIQE, (f) CCF, (g) C_f, (h) C_d, and (i) proposed.
Figure 12: (a) Ori, (b) multifusion, (c) Fu, (d) white balance, (e) histogram equalization, and (f) Retinex.
Figure 13: (a) Ori, (b) multifusion, (c) Fu, (d) white balance, (e) histogram equalization, and (f) Retinex.
Table 3: The corresponding MOS and objective scores of Figures 12 and 13.

Figure 12:
Metric       Ori      Multifusion  Fu       White balance  Histogram eq.  Retinex
MOS          2.550    4.500        4.100    4.550          3.050          2.500
CCF         13.265   22.292       23.887   16.437         30.794         13.379
ROI_CCF     20.821   33.389       31.274   29.108         26.409         21.604
UCIQE        0.554    0.664        0.591    0.652          0.684          0.569
ROI_UCIQE    0.560    0.647        0.573    0.627          0.580          0.575
UIQM         3.983    4.543        4.850    3.969          4.780          4.085
ROI_UIQM     5.585    5.589        5.495    5.672          5.055          5.620
BRISQUE     16.303   26.934       31.708   17.824         36.762         16.744
LPSI         0.926    0.901        0.910    0.923          0.912          0.926
C_d          0.243    0.715        0.578    0.698          0.846          0.324
C_f          1.000    0.802        0.637    0.827          0.464          0.994
Proposed     0.243    0.574        0.368    0.577          0.392          0.322

Figure 13:
Metric       Ori      Multifusion  Fu       White balance  Histogram eq.  Retinex
MOS          3.200    3.800        1.550    2.150          2.700          3.250
CCF         31.443   31.465       37.069   18.688         36.928         29.029
ROI_CCF     22.582   35.995       32.468   13.265         38.366         23.097
UCIQE        0.519    0.628        0.623    0.476          0.693          0.541
ROI_UCIQE    0.541    0.620        0.588    0.447          0.676          0.564
UIQM         1.504    3.337        4.325    3.840          4.100          2.182
ROI_UIQM     6.658    5.235        5.249    5.349          4.789          5.160
BRISQUE      4.330   14.749       17.319    4.153         20.596          4.441
LPSI         0.923    0.887        0.911    0.904          0.906          0.926
C_d          0.475    0.730        0.317    0.029          0.640          0.401
C_f          1.000    0.847        0.632    0.581          0.601          0.972
Proposed     0.475    0.619        0.200    0.017          0.384          0.390
Figure 14: ROI color distributions (in CIE XYZ space) of the images in Figures 12 and 13.
Figure 15: Local graphs of Figures 12(b) and 12(d).
a single color. The image processed by the multifusion algorithm has a better visual effect.

Table 3 shows that the selected IQA metrics do not perform well in the quality assessment of the images in the UOQ database. They generally give higher objective scores to the images enhanced by the histogram equalization algorithm, because the color distribution of those images is wider. This is a disadvantage of quality evaluation based on statistics: color fidelity is not taken into account. It can also be seen that if the performance of the original metric is not ideal, the metric combined with the ROI will not necessarily improve the situation, because this is a limitation of the original metric itself.
6. Conclusion

Because of the characteristics of the water medium, color has become one of the important concerns in underwater image quality assessment. Color contains important information: severe selective color attenuation/pseudo-vividness can make it difficult to identify foreground content and to extract key, effective information from images. In this paper, a new underwater image evaluation metric, NIPQ, is proposed based on underwater environment characteristics and the HVS. The NIPQ is designed in a four-stage framework. The first stage focuses on the attention mechanism of the HVS. The second stage considers the influence of color richness in a statistical sense. The third stage is inspired by underwater image formation models and considers color fidelity from a pixel perspective. Finally, in the fourth stage, color richness and color fidelity are systematically integrated for real-time quality monitoring. At the same time, the relevant underwater image database UOQ, with MOS, is built to measure IQA performance. Experimental results show that, compared with other commonly used underwater metrics, the proposed NIPQ correlates better with MOS, which indicates better performance.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61571377, 61771412, and 61871336) and the Fundamental Research Funds for the Central Universities (20720180068).
References
[1] D. Akkaynak and T. Treibitz, "A revised underwater image formation model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6723–6732, Salt Lake City, UT, USA, 2018.
[2] S. Bazeille, I. Quidu, L. Jaulin, and J.-P. Malkasse, "Automatic underwater image pre-processing," CMM'06, Brest, France, 2006.
[3] I. Avcibas, B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," Journal of Electronic Imaging, vol. 11, no. 2, pp. 206–223, 2002.
[4] D.-Y. Tsai, Y. Lee, and E. Matsuyama, "Information entropy measure for evaluation of image quality," Journal of Digital Imaging, vol. 21, no. 3, pp. 338–347, 2008.
[5] Y. Y. Fu, "Color Image Quality Measures and Retrieval," New Jersey Institute of Technology, Newark, NJ, USA, 2006.
[6] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012.
[7] Q. Wu, Z. Wang, and H. Li, "A highly efficient method for blind image quality assessment," in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 339–343, IEEE, Quebec City, Canada, September 2015.
[8] J. Kim, A.-D. Nguyen, and S. Lee, "Deep CNN-based blind image quality predictor," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 11–24, 2018.
[9] S. Bosse, D. Maniry, K.-R. Müller, T. Wiegand, and W. Samek, "Deep neural networks for no-reference and full-reference image quality assessment," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206–219, 2017.
[10] X. Liu, J. van de Weijer, and A. D. Bagdanov, "RankIQA: learning from rankings for no-reference image quality assessment," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1040–1049, Venice, Italy, October 2017.
[11] K. Panetta, C. Gao, and S. Agaian, "Human-visual-system-inspired underwater image quality measures," IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541–551, 2016.
[12] M. Yang and A. Sowmya, "An underwater color image quality evaluation metric," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 6062–6071, 2015.
[13] Y. Wang, N. Li, Z. Li et al., "An imaging-inspired no-reference underwater color image quality assessment metric," Computers & Electrical Engineering, vol. 70, pp. 904–913, 2018.
[14] S. Kastner and L. G. Ungerleider, "Mechanisms of visual attention in the human cortex," Annual Review of Neuroscience, vol. 23, no. 1, pp. 315–341, 2000.
[15] L. Zhang, J. Chen, and B. Qiu, "Region-of-interest coding based on saliency detection and directional wavelet for remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 1, pp. 23–27, 2016.
[16] C. Zhu, K. Huang, and G. Li, "An innovative saliency guided ROI selection model for panoramic images compression," in Proceedings of the 2018 Data Compression Conference, p. 436, IEEE, Snowbird, UT, USA, March 2018.
[17] Z. Cui, J. Wu, H. Yu, Y. Zhou, and L. Liang, "Underwater image saliency detection based on improved histogram equalization," in Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, pp. 157–165, Springer, Singapore, 2019.
[18] L. Xiu, H. Jing, S. Min, and Z. Yang, "Saliency segmentation and foreground extraction of underwater image based on localization," in Proceedings of OCEANS 2016, Shanghai, China, 2016.
[19] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2814–2821, Columbus, OH, USA, June 2014.
[20] D. Akkaynak and T. Treibitz, "Sea-thru: a method for removing water from underwater images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1682–1691, Long Beach, CA, USA, 2019.
[21] D. Akkaynak, T. Treibitz, T. Shlesinger, Y. Loya, R. Tamir, and D. Iluz, "What is the space of attenuation coefficients in underwater computer vision?" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4931–4940, Honolulu, HI, USA, 2017.
[22] D. Berman, T. Treibitz, and S. Avidan, "Diving into haze-lines: color restoration of underwater images," in Proceedings of the British Machine Vision Conference (BMVC), vol. 1, London, UK, September 2017.
[23] M. G. Solonenko and C. D. Mobley, "Inherent optical properties of Jerlov water types," Applied Optics, vol. 54, no. 17, pp. 5392–5401, 2015.
[24] E. Y. Lam, "Combining gray world and retinex theory for automatic white balance in digital photography," in Proceedings of the Ninth International Symposium on Consumer Electronics, pp. 134–139, IEEE, Melbourne, Australia, July 2005.
[25] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, "A retinex-based enhancing approach for single underwater image," in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 4572–4576, IEEE, Paris, France, October 2014.
[26] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, "Enhancing underwater images and videos by fusion," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 81–88, IEEE, Providence, RI, USA, June 2012.
[27] B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68–75, 1999.
[28] E. H. Land, "The retinex theory of color vision," Scientific American, vol. 237, no. 6, pp. 108–128, 1977.
[29] D. Hasler and S. E. Süsstrunk, "Measuring colorfulness in natural images," in Human Vision and Electronic Imaging VIII, vol. 5007, pp. 87–95, International Society for Optics and Photonics, Bellingham, WA, USA, 2003.
object in the image cannot be obtained, so it is necessary to roughly estimate the possible range of the distance z. For the foreground, the distance z from the camera is approximately the same; for the background, the distance z tends to infinity. We assume that the distance z is the same for each part of the foreground and that there may be white objects. Therefore, any distance z that keeps the J_c of the foreground pixels in the three RGB channels not greater than 255 and not less than 0 is considered a possible distance. To simplify the calculation, the attenuation coefficient β^D_c of white in the three color channels c ∈ {R, G, B} is adopted for all colors (using ρ(λ) from the Macbeth ColorChecker).
Finally, the color fidelity defined by (10) is calculated:

C_f = \left[ 1 - \frac{\mathrm{Sum}_{oor}/255}{\mathrm{Num}_{oor} \times 3} \right]^2. \quad (10)
Num_oor represents the number of pixels in the ROI block, and Sum_oor represents the total of the pixel intensity deviations that fall outside the reasonable range.
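A minimal numeric sketch of (10). The per-pixel reasonable range (lo, hi) is a hypothetical stand-in for the range estimated from the water type; here it is supplied directly:

```python
import numpy as np

def color_fidelity(roi, lo, hi):
    """C_f of (10) for one ROI block.
    roi: H x W x 3 RGB intensities (0-255); lo/hi: same-shape arrays
    giving the reasonable intensity range per pixel and channel."""
    roi = roi.astype(float)
    # Out-of-range deviation per pixel/channel (0 when inside the range)
    dev = np.maximum(lo - roi, 0) + np.maximum(roi - hi, 0)
    num_oor = roi.shape[0] * roi.shape[1]  # number of pixels in the block
    sum_oor = dev.sum()                    # total out-of-range deviation
    return (1.0 - (sum_oor / 255.0) / (num_oor * 3)) ** 2

roi = np.full((4, 4, 3), 128.0)
lo, hi = np.zeros_like(roi), np.full_like(roi, 255.0)
print(color_fidelity(roi, lo, hi))  # 1.0 when every pixel is in range
```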
We make some qualitative analysis of the influence of this simplification on J'_cmin and J'_cmax during the calculation. As shown in Figure 7(b), (8) is used to calculate the broadband (RGB) attenuation coefficient β^D_c (using ρ(λ) of the color blocks in the Macbeth ColorChecker, depth d = 1 m, distance z = 1 m) of seven common colors (red, orange, yellow, green, cyan, blue, and purple; Figure 7(a)) under all Jerlov water types. It can be seen that the β^D_c difference between the colors is not large in the same scene. Figure 7(c) shows the influence of different camera types on β^D_c in three types of water bodies; the influence of camera parameters on the attenuation coefficient β^D_c is not significant. The experimental results in [1] also support this view.
3.4. NIPQ Metric. Sections 3.1, 3.2, and 3.3 above introduced, respectively, the ROI extraction method, color richness in a statistical sense, and color fidelity in a pixel sense. In this paper, the color richness of the ROI and the color fidelity of the ROI are combined by a multiplication model to obtain our NIPQ. The common multiparameter underwater image evaluation models UIQM [11], UCIQE [12], and CCF [13] use linear weighting to measure the comprehensive quality of an image. We consider instead that if one submetric takes a very low value (indicating low quality), the subjective impression of the whole image will be very poor regardless of the other metrics. Therefore, this paper uses a multiplication model to generate the overall underwater image quality assessment as follows:
C_R = \overline{C_d(\mathrm{ROI})} \times C_f(\mathrm{ROI}), \quad (11)

where the overline represents normalization. C_R represents the color quality of the ROI block, C_R ∈ (0, 1), and the larger the value, the higher the image quality.
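A toy illustration of why the multiplication model in (11) is preferred over linear weighting (the values are hypothetical, and C_d is assumed already normalized to (0, 1)):

```python
def nipq_score(cd_norm, cf):
    """Multiplication model of (11): C_R = normalized C_d(ROI) * C_f(ROI)."""
    return cd_norm * cf

# One very poor submetric should drag the overall score down:
cd, cf = 0.9, 0.05          # rich colors, but almost no color fidelity
print(nipq_score(cd, cf))   # ~0.045: the overall quality is poor
print(0.5 * cd + 0.5 * cf)  # ~0.475: linear weighting hides the defect
```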
The overall process of NIPQ is shown in Figure 8 and is divided into four steps. First, the ROI of the original (unenhanced) image is extracted based on background/foreground separation. Then the color richness of the ROI of the enhanced underwater image is estimated. Next, the ocean background information is extracted from the original image, from which the water body type and the reasonable range of the pixel intensity distribution are estimated; according to this range, the ROI color fidelity of the enhanced underwater image is estimated. Finally, the two metrics, color richness and color fidelity, are integrated to obtain the comprehensive NIPQ metric for the whole underwater image.
4. UOQ Database

In order to better evaluate the performance of the NIPQ metric, we built an underwater optical image quality database, UOQ.
Image Selection. In order to fully cover various underwater scenes, we selected 36 typical underwater optical images with a size of 512 × 512. These images include blue water, green water, yellow water, dark light, single object, multiobject, simple and complex textures, and both serious and slight color distortion. Considering that there is no general ROI-related dataset in the field of underwater imaging, we label their foreground regions (ROI) pixel by pixel to prove the reliability of the ROI used in this paper. We then use five popular image enhancement algorithms (the white balance algorithm [24], Fu's algorithm [25], the multifusion algorithm [26], histogram equalization [27], and Retinex [28]) to process these 36 natural images, obtaining 180 enhanced images. Some images and their versions processed by the white balance algorithm [24] are shown in Figure 9.

Figure 6: The estimation process of color fidelity. [Flow diagram: from input image 1, (1) veiling-light estimation and (2) determination of the water body type (blue/green/yellow); (3) estimation of the reasonable range of ROI pixel intensities after underwater image enhancement; then, for input image 2, (4) calculation of the deviation beyond the reasonable range, yielding the color fidelity.]
Evaluation Methods and Evaluation Protocols. In this database, the single-stimulus evaluation method is used: volunteers watch only one image to be evaluated at a time, and each image appears only once in a round of evaluation. After each image was displayed, the volunteers gave a subjective quality score to it. Underwater optical images usually serve practical applications, so the volunteers were not to be influenced by aesthetic factors in the process of subjective quality assessment. The evaluation protocols are shown in Table 1.
Choosing Volunteers. In order to avoid evaluation bias caused by prior knowledge, none of the volunteers had experience in image quality assessment. Considering the strong application background of underwater images, all selected volunteers are graduate students with relevant work experience in underwater acoustic communication, underwater detection, and related fields.
All the obtained subjective scores are used to calculate the mean opinion scores (MOS). Denote by S_ij the subjective score given to image j by the i-th volunteer and by N_j the number of subjective scores obtained by image j. MOS is calculated as follows:
\mathrm{MOS}_j = \frac{1}{N_j} \sum_i S_{ij}. \quad (12)
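A small sketch of (12); the image ids and scores below are made up:

```python
def mean_opinion_score(scores_by_image):
    """MOS_j = (1/N_j) * sum_i S_ij, from a dict mapping each image j
    to the list of subjective scores S_ij given by the volunteers."""
    return {img: sum(s) / len(s) for img, s in scores_by_image.items()}

votes = {"img_01": [4, 5, 4, 3], "img_02": [2, 1, 2]}
print(mean_opinion_score(votes))  # img_01 -> 4.0, img_02 -> ~1.67
```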
We draw a histogram of the MOS of all images in the database, as shown in Figure 10. It can be seen that our images cover a wide range of quality, which is conducive to the design of IQA. Many images have scores in the middle segment, because volunteers tend to avoid giving extreme scores. It can also be seen that the lower-quality images slightly outnumber the higher-quality underwater images. This is because most underwater images have a blue-green cast and poor contrast, and sometimes the quality of the enhanced image is still not ideal. In practical applications, more robust enhancement algorithms will be built into the underwater image enhancement algorithm database of the display module mentioned in Section 2.

Figure 8: Overall process of NIPQ. [Flow diagram: input images 1 and 2 pass through the image enhancement algorithm and ROI extraction; from the image information of the ROI, color richness and color fidelity are computed and combined into the quality score Q.]

Figure 7: Qualitative analysis of the influence of the simplification on β^D_c. (a) Seven common colors (red, orange, yellow, green, cyan, blue, and purple). (b) β^D_c of the different colors in different water bodies (Jerlov types I, IA, IB, II, III, 1C, 3C, 5C, 7C, and 9C), plotted in (β^D_R, β^D_G, β^D_B) space. (c) β^D_c of the different colors under different cameras. We simplify the types of water into three. It can be seen from (b) that the calculation error caused by the simplification is larger for yellow water than for blue and green water bodies; since yellow water is not common in underwater images, the simplification of water body types is applicable to most occasions.
5. Experiment

In combination with the UOQ database, we mainly evaluate the performance of IQA through five criteria. The prediction monotonicity of an IQA metric is measured by the Spearman rank-order correlation coefficient (SROCC) and Kendall's rank-order correlation coefficient (KROCC). The prediction accuracy is measured by the Pearson linear correlation coefficient (PLCC). The root mean square error (RMSE) is used to measure the prediction consistency, and the mean absolute error (MAE) is also used to evaluate performance. High values (close to 1) of SROCC, PLCC, and KROCC and low values (close to 0) of RMSE and MAE indicate that an IQA metric correlates well with the subjective scores.

Table 1: Evaluation protocols.

Score  Comprehensive feelings
5      The subjective feeling is excellent: foreground information is recognizable, and no color distortion is felt.
4      The subjective feeling is good: the foreground information is visible and recognizable; there is a small amount of perceptible distortion, but it does not affect the extraction of important information.
3      The subjective feeling is average: part of the information in the foreground is damaged, and a small amount of important information is lost due to distortion.
2      The subjective feeling is poor: only the general outline of the foreground content can be distinguished; the distortion leads to the loss of some important information.
1      The subjective feeling is very poor: it is difficult to recognize the foreground content, and it is almost impossible to extract any effective information from the image.

Figure 10: (a) Frequency histogram of MOS; (b) MOS of all images.

Figure 9: Underwater images processed by the white balance algorithm [24]. (a)-(f) are the original images; their MOS are 2.40, 1.70, 3.00, 1.30, 2.55, and 4.05, respectively. (g)-(l) are the enhanced images; their MOS are 1.05, 1.15, 2.05, 2.80, 4.55, and 1.15, respectively.
[10] X Liu J Van De Weijer and A D Bagdanov ldquoRankiqalearning from rankings for no-reference image quality as-sessmentrdquo in Proceedings of the IEEE International Conferenceon Computer Vision pp 1040ndash1049 Venice Italy October2017
[11] K Panetta C Gao and S Agaian ldquoHuman-visual-system-inspired underwater image quality measuresrdquo IEEE Journal ofOceanic Engineering vol 41 no 3 pp 541ndash551 2016
[12] M Yang and A Sowmya ldquoAn underwater color image qualityevaluation metricrdquo IEEE Transactions on Image Processingvol 24 no 12 pp 6062ndash6071 2015
[13] YWang N Li Z Li et al ldquoAn imaging-inspired no-referenceunderwater color image quality assessment metricrdquo Com-puters amp Electrical Engineering vol 70 pp 904ndash913 2018
[14] S Kastner and L G Ungerleider ldquoMechanisms of visualattention in the human cortexrdquo Annual Review of Neuro-science vol 23 no 1 pp 315ndash341 2000
[15] L Zhang J Chen and B Qiu ldquoRegion-of-interest codingbased on saliency detection and directional wavelet for remotesensing imagesrdquo IEEE Geoscience and Remote Sensing Lettersvol 14 no 1 pp 23ndash27 2016
[16] C Zhu K Huang and G Li ldquoAn innovative saliency guidedroi selection model for panoramic images compressionrdquo inProceedings of the 2018 Data Compression Conference p 436IEEE Snowbird UT USA March 2018
[17] Z Cui J Wu H Yu Y Zhou and L Liang ldquoUnderwaterimage saliency detection based on improved histogramequalizationrdquo in Proceedings of the International Conference ofPioneering Computer Scientists Engineers and EducatorsEngineers and Educators pp 157ndash165 Springer Singapore2019
[18] L Xiu H Jing S Min and Z Yang ldquoSaliency segmentationand foreground extraction of underwater image based onlocalizationrdquo in Proceedings of the OCEANS 2016 ShanghaiChina 2016
[19] W Zhu S Liang Y Wei and J Sun ldquoSaliency optimizationfrom robust background detectionrdquo in Proceedings of the IEEE
14 Scientific Programming
Conference on Computer Vision and Pattern Recognitionpp 2814ndash2821 Columbus OH USA June 2014
[20] D Akkaynak and T Treibitz ldquoSea-thru a method for re-moving water from underwater imagesrdquo in Proceedings of theIEEE Conference on Computer Vision and Pattern Recognitionpp 1682ndash1691 Long Beach CA USA April 2019
[21] D Akkaynak T Treibitz T Shlesinger Y Loya R Tamir andD Iluz ldquoWhat is the space of attenuation coeffificients inunderwater computer visionrdquo in Proceedings of the IEEEConference on Computer Vision and Pattern Recognitionpp 4931ndash4940 Honolulu HI USA 2017
[22] D Berman T Treibitz and S Avidan ldquoDiving into haze-linescolor restoration of underwater imagesrdquo in Proceedings of theBritish Machine Vision Conference (BMVC) vol 1 LondonUK September 2017
[23] M G Solonenko and C D Mobley ldquoInherent opticalproperties of jerlov water typesrdquo Applied Optics vol 54no 17 pp 5392ndash5401 2015
[24] E Y Lam ldquoCombining gray world and retinex theory forautomatic white balance in digital photographyrdquo in Pro-ceedings of the Ninth International Symposium on ConsumerElectronics 2005 IEEE Melbourne Australia pp 134ndash139July 2005
[25] X Fu P Zhuang Y Huang Y Liao X-P Zhang andX Ding ldquoA retinex-based enhancing approach for singleunderwater imagerdquo in Proceedings of the 2014 IEEE Inter-national Conference on Image Processing (ICIP) pp 4572ndash4576 IEEE Paris France October 2014
[26] C Ancuti C O Ancuti T Haber and P Bekaert ldquoEnhancingunderwater images and videos by fusionrdquo in Proceedings of the2012 IEEE Conference on Computer Vision and Pattern Rec-ognition IEEE Providence RI USA pp 81ndash88 June 2012
[27] B Zhang ldquoImage enhancement based on equal area dualisticsub-image histogram equalization methodrdquo IEEE Transac-tions on Consumer Electronics vol 45 no 1 75 pages
[28] E H Land ldquoampe retinex theory of color visionrdquo ScientificAmerican vol 237 no 6 pp 108ndash128 1977
[29] D Hasler and S E Suesstrunk ldquoMeasuring colorfulness innatural imagesrdquo in Human Vision and Electronic ImagingVIII vol 5007 pp 87ndash95 International Society for Optics andPhotonics Bellingham WA USA 2003
Scientific Programming 15
color distortion and a little color distortion. Considering that there is no general ROI-related dataset in the field of underwater imaging, we label the foreground region (ROI) of each image pixel by pixel to support the ROI analysis in this paper. We then use five popular image enhancement algorithms (white balance [24], Fu's algorithm [25], the multifusion algorithm [26], histogram equalization [27], and Retinex [28]) to process these 36 natural images, yielding 180 enhanced images. Some original images and their versions enhanced by the white balance algorithm [24] are shown in Figure 9.
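Of the five enhancement algorithms, the gray-world step underlying the white balance method [24] is simple enough to sketch. The snippet below is a minimal illustration of that idea only, not the authors' exact implementation (which also incorporates Retinex theory): each channel is rescaled so that its mean matches the image's global mean intensity.

```python
import numpy as np

def gray_world(img):
    """Minimal gray-world white balance: scale each RGB channel so its
    mean equals the image's overall mean intensity."""
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means              # per-channel gains
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# A flat blue-green cast, as is typical of raw underwater frames:
cast = np.full((4, 4, 3), (40, 120, 160), dtype=np.uint8)
balanced = gray_world(cast)
print(balanced[0, 0])  # all three channel means pulled onto the gray axis
```

On a constant-color image this maps every pixel to the same gray value; on real images it removes the dominant color cast while preserving relative channel variation.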
Evaluation Methods and Evaluation Protocols. For this database, the single-stimulus evaluation method is used: volunteers view only one image to be evaluated at a time, and each image appears only once in a round of evaluation. After each image is displayed, the volunteers give it a subjective quality score. Underwater optical images usually serve practical applications, so volunteers are not influenced by aesthetic factors during subjective quality assessment; the evaluation protocols are shown in Table 1.
Choosing Volunteers. In order to avoid evaluation bias caused by prior knowledge, none of the volunteers had previous experience of image quality assessment. Considering the strong application background of underwater images, all selected volunteers are graduate students with relevant work experience in underwater acoustic communication, underwater detection, and related fields.
All the obtained subjective scores are used to calculate the mean opinion scores (MOS). Denote by S_ij the subjective score given to image j by the i-th volunteer and by N_j the number of subjective scores obtained by image j. The MOS is calculated as follows:
MOS_j = (1/N_j) Σ_i S_ij.  (12)
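As a quick illustration of (12): with a full score matrix (every volunteer rates every image, so N_j is constant), the MOS is simply a per-image column mean. The scores below are made up for the example.

```python
import numpy as np

# Rows = volunteers i, columns = images j; S[i, j] = S_ij (hypothetical).
S = np.array([
    [4.0, 2.0, 5.0],
    [3.0, 2.5, 4.5],
    [4.5, 1.5, 5.0],
])

N_j = S.shape[0]            # number of scores per image
mos = S.sum(axis=0) / N_j   # Eq. (12): MOS_j = (1/N_j) * sum_i S_ij
print(mos)                  # one MOS value per image
```

With missing votes, N_j would differ per image and the sum would run over the valid entries only.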
We draw a histogram of the MOS of all images in the database, as shown in Figure 10. It can be seen that our images cover a wide range of quality, which is conducive to the design of IQA metrics. Many images have scores in the middle segment because volunteers tend to avoid giving extreme scores. It also can be
Figure 8: Overall process of NIPQ (flowchart: input images, image enhancement algorithm, ROI extraction, image information of the ROI, color richness and color fidelity, and the final quality score Q).
Figure 7: Qualitative analysis of the influence of simplification on β^D_c. (a) Seven common colors (red, orange, yellow, green, cyan, blue, purple). (b) β^D_c of different colors in different water bodies (Jerlov types I, IA, IB, II, III, 1C, 3C, 5C, 7C, 9C). (c) β^D_c of different colors under different cameras. We simplify the types of water into three. It can be seen from (b) that the calculation error for yellow water caused by the simplification is larger than that for blue and green water bodies. Yellow water bodies are not common in underwater images, so the simplification of water body types is applicable to most occasions.
Scientific Programming 9
seen that lower-quality images slightly outnumber higher-quality underwater images. This is because most underwater images are bluish-green with poor contrast, and sometimes the quality of the enhanced image is still not ideal. In practical applications, more robust enhancement algorithms will be built into the underwater image enhancement algorithm database of the display module mentioned in Section 2.
5 Experiment
In combination with the UOQ database, we mainly evaluate the performance of an IQA metric through five criteria. The prediction monotonicity of IQA is measured by the Spearman rank-order correlation coefficient (SROCC) and Kendall's rank-order correlation coefficient (KROCC). The prediction accuracy of IQA is measured by the Pearson linear correlation
Table 1: Evaluation protocols.

Score  Comprehensive feelings
5      The subjective feeling is excellent: foreground information is recognizable, and no color distortion is felt.
4      The subjective feeling is good: the foreground information is visible and recognizable; there is a small amount of perceptible distortion, but it does not affect the extraction of important information.
3      The subjective feeling is average: part of the information in the foreground is damaged, and a small amount of important information is lost due to distortion.
2      The subjective perception is poor: only the general outline of the foreground content can be distinguished; the distortion leads to the loss of some important information.
1      The subjective feeling is very poor: it is difficult to recognize the foreground content, and it is almost impossible to extract any effective information from the image.
Figure 10: (a) Frequency histogram of MOS; (b) MOS of all images.
Figure 9: Underwater images processed by the white balance algorithm [24]. (a)–(f) are the original images; their MOS are 2.40, 1.70, 3.00, 1.30, 2.55, and 4.05, respectively. (g)–(l) are the enhanced images; their MOS are 1.05, 1.15, 2.05, 2.80, 4.55, and 1.15, respectively.
coefficient (PLCC). Root-mean-square error (RMSE) is used to measure the prediction consistency of IQA, and the mean absolute error (MAE) is also used to evaluate performance. High values (close to 1) of SROCC, PLCC, and KROCC and low values (close to 0) of RMSE and MAE indicate that an IQA metric correlates better with the subjective scores.
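These five criteria can be computed with their standard formulas; the sketch below uses plain NumPy (with a tie-free ranking routine, adequate for illustration) and made-up prediction scores.

```python
import numpy as np

def ranks(x):
    # Rank positions 1..n (no tie handling; fine for distinct values).
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(1, len(x) + 1)
    return r

def plcc(a, b):  return float(np.corrcoef(a, b)[0, 1])
def srocc(a, b): return plcc(ranks(a), ranks(b))   # Spearman = PLCC of ranks
def krocc(a, b):                                   # Kendall tau: pair signs
    n = len(a)
    s = sum(np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
            for i in range(n) for j in range(i))
    return 2.0 * s / (n * (n - 1))
def rmse(a, b):  return float(np.sqrt(np.mean((a - b) ** 2)))
def mae(a, b):   return float(np.mean(np.abs(a - b)))

mos  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pred = np.array([1.2, 1.8, 3.3, 3.9, 4.8])  # hypothetical metric output
print(srocc(mos, pred), krocc(mos, pred), plcc(mos, pred))
```

A perfectly monotone predictor, as here, scores SROCC = KROCC = 1 even when its PLCC, RMSE, and MAE are imperfect, which is why the paper reports all five.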
The selected IQA metrics for performance comparison include the following:

(1) The popular underwater no-reference metrics: UIQM [11], UCIQE [12], and CCF [13]
(2) The popular no-reference metrics for images in the air: BRISQUE [6] and LPSI [7]
(3) Common color metrics for underwater images: UICM [11] and variance of chromaticity (Var_Chr) [29]

For BRISQUE, a low score means high quality; for the other metrics, the higher the score, the better the quality.
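The chromatic statistics behind Var_Chr and the colorfulness terms in these metrics descend from the Hasler–Suesstrunk colorfulness measure [29]. The following is our own sketch of that published formula (not code from [29]), based on the opponent components rg = R − G and yb = (R + G)/2 − B.

```python
import numpy as np

def colorfulness(img):
    """Hasler-Suesstrunk colorfulness: spread plus offset of the
    opponent components rg = R - G and yb = (R + G)/2 - B."""
    r, g, b = (img[..., i].astype(np.float64) for i in range(3))
    rg = r - g
    yb = 0.5 * (r + g) - b
    sigma = np.hypot(rg.std(), yb.std())   # combined standard deviation
    mu = np.hypot(rg.mean(), yb.mean())    # combined mean offset
    return sigma + 0.3 * mu

gray = np.full((8, 8, 3), 128, dtype=np.uint8)   # achromatic image
print(colorfulness(gray))   # no chromatic content at all
```

This is the weakness noted later in Section 5.2: a statistics-based score rises whenever the color distribution widens, whether or not the colors are faithful.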
5.1 Effect Analysis of Introducing ROI into IQA

In order to observe the influence of introducing ROI into the quality evaluation of underwater images, we combine ROI with the popular underwater no-reference IQA metrics. The block strategy mentioned in Section 3.1 is necessary because it helps combine ROI with IQA. According to the block fusion strategy represented by (13), we combine the image blocks with an IQA metric and obtain a comprehensive quality score. We can then observe the change in correlation between the objective metrics and MOS before and after combining with ROI:
ROI_Q = [Σ_{m×n} W_R(i) × Q(i)] / [Σ_{m×n} W_R(i)].  (13)
W_R(i) represents the weight of the i-th image block, and Q(i) represents its objective quality score under the metric; W_R(i) takes the value 0 or 1. That is, the only difference between a metric before and after being combined with ROI is that the original metric calculates the quality of the whole image, while the metric combined with ROI calculates only the image quality of the ROI. The results are shown in the first six lines of Table 2. They show that the correlation between a metric combined with ROI and MOS is higher than that of the original metric, which indicates that combining ROI with IQA is helpful.
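A direct reading of (13) with binary block weights can be sketched as follows; the block scores here are made up, and in practice Q(i) would come from UIQM, UCIQE, CCF, etc. evaluated per block.

```python
import numpy as np

def roi_quality(Q, W):
    """Eq. (13): average the per-block objective scores Q(i) over the
    blocks flagged as ROI by the binary weights W_R(i)."""
    W = W.astype(np.float64).ravel()
    Q = np.asarray(Q, dtype=np.float64).ravel()
    return float((W * Q).sum() / W.sum())

# An m x n = 2 x 3 grid of block scores; only the middle column is ROI.
Q = np.array([[0.2, 0.9, 0.3],
              [0.1, 0.7, 0.2]])
W = np.array([[0, 1, 0],
              [0, 1, 0]])
print(roi_quality(Q, W))   # mean of the two ROI block scores, 0.9 and 0.7
```

With 0/1 weights, (13) reduces to the mean block score inside the ROI; non-binary weights would give a saliency-weighted average instead.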
5.2 Performance Analysis of the Proposed NIPQ

We calculated the correlation between the various metrics and MOS over the database; the results are shown in Table 2. It can be seen that the correlation between the NIPQ metric and the subjective scores is significantly higher than that of the other metrics.
In order to compare the various NR IQA metrics intuitively, scatter diagrams between MOS and the estimated objective scores are drawn for six selected NR IQA metrics and the NIPQ proposed in this paper, as shown in Figure 11. On this basis, the experimental data were regressed by the least-squares method, and the fitted lines are also drawn. The better the scatter points fit the line, the better the correlation between the metric and MOS. The regression lines show that the correlation between NIPQ and MOS is clearly better than that of the other metrics, which validates the results of Table 2. It can be seen that LPSI and BRISQUE, metrics designed for images in the air, are not applicable to underwater images. As a whole, UIQM, UCIQE, and CCF are specially designed for underwater images, and their performance is better than that of the in-air metrics. The performance of UICM, the submetric indicating chromaticity in UIQM, is slightly worse than that of UIQM. Compared with the scatter plots of the other NR IQA metrics, our NIPQ shows the best correlation with MOS. Although there are still some aberrant data points, generally speaking, the proposed NIPQ is robust to the variety of typical, representative underwater images contained in the database. Further analysis shows that some of these aberrant points are caused by the fact that the submetric C_f of the original image (without enhancement) is directly taken as 1 in our experiment.
As shown in Figures 12 and 13, there are two natural underwater images and their enhanced versions in the UOQ database. Table 3 shows the corresponding MOS and objective scores of these images, and Figure 14 shows the color distributions of their ROIs. The ROI of the original image in Figure 12 is dark, and that in Figure 13 is blue. The images enhanced by the histogram equalization algorithm are reddish; the color distribution of their ROIs is wider, but the ROI color is obviously oversaturated (pseudo-bright). There is no significant difference between the image processed by the Retinex algorithm and the original image. The color of the image processed by Fu's algorithm is not vibrant. For Figure 12, the overall difference between the white balance and multifusion algorithms is small. The local graph (Figure 15) shows that the brightness distribution of the image processed by the multifusion algorithm is uneven and slightly oversaturated, and the image enhanced by the white balance algorithm has a better visual effect. For Figure 13, the image processed by the white balance algorithm is too dark and has
Table 2: Correlation between MOS and the quality scores of the objective evaluation metrics before and after integration with ROI.

Metric       PLCC     SROCC    KROCC    MAE     RMSE
UIQM        -0.173   -0.199   -0.132   0.751   0.903
ROI_UIQM     0.277    0.280    0.196   0.739   0.897
UCIQE        0.294    0.207    0.145   0.707   0.868
ROI_UCIQE    0.374    0.274    0.192   0.683   0.840
CCF          0.069    0.075    0.050   0.791   0.946
ROI_CCF      0.393    0.358    0.254   0.722   0.872
Var_Chr      0.158    0.180    0.125   0.674   0.841
UICM        -0.283   -0.338   -0.225   0.714   0.854
BRISQUE     -0.309   -0.265   -0.185   0.747   0.902
LPSI         0.323    0.245    0.169   0.734   0.898
C_d          0.481    0.465    0.335   0.635   0.789
C_f          0.478    0.432    0.303   0.658   0.806
Proposed     0.641    0.623    0.452   0.576   0.713
Figure 11: Scatter diagrams between MOS and the estimated objective scores: (a) LPSI, (b) BRISQUE, (c) UICM, (d) UIQM, (e) UCIQE, (f) CCF, (g) C_f, (h) C_d, and (i) proposed.
Figure 12: (a) Original, (b) multifusion, (c) Fu, (d) white balance, (e) histogram equalization, and (f) Retinex.
Figure 13: (a) Original, (b) multifusion, (c) Fu, (d) white balance, (e) histogram equalization, and (f) Retinex.
Table 3: The corresponding MOS and objective scores of Figures 12 and 13.

Figure 12:
Metric      Ori      Multifusion  Fu       White balance  Histogram eq.  Retinex
MOS         2.550    4.500        4.100    4.550          3.050          2.500
CCF         13.265   22.292       23.887   16.437         30.794         13.379
ROI_CCF     20.821   33.389       31.274   29.108         26.409         21.604
UCIQE       0.554    0.664        0.591    0.652          0.684          0.569
ROI_UCIQE   0.560    0.647        0.573    0.627          0.580          0.575
UIQM        3.983    4.543        4.850    3.969          4.780          4.085
ROI_UIQM    5.585    5.589        5.495    5.672          5.055          5.620
BRISQUE     16.303   26.934       31.708   17.824         36.762         16.744
LPSI        0.926    0.901        0.910    0.923          0.912          0.926
C_d         0.243    0.715        0.578    0.698          0.846          0.324
C_f         1.000    0.802        0.637    0.827          0.464          0.994
Proposed    0.243    0.574        0.368    0.577          0.392          0.322

Figure 13:
Metric      Ori      Multifusion  Fu       White balance  Histogram eq.  Retinex
MOS         3.200    3.800        1.550    2.150          2.700          3.250
CCF         31.443   31.465       37.069   18.688         36.928         29.029
ROI_CCF     22.582   35.995       32.468   13.265         38.366         23.097
UCIQE       0.519    0.628        0.623    0.476          0.693          0.541
ROI_UCIQE   0.541    0.620        0.588    0.447          0.676          0.564
UIQM        1.504    3.337        4.325    3.840          4.100          2.182
ROI_UIQM    6.658    5.235        5.249    5.349          4.789          5.160
BRISQUE     4.330    14.749       17.319   4.153          20.596         4.441
LPSI        0.923    0.887        0.911    0.904          0.906          0.926
C_d         0.475    0.730        0.317    0.029          0.640          0.401
C_f         1.000    0.847        0.632    0.581          0.601          0.972
Proposed    0.475    0.619        0.200    0.017          0.384          0.390
Figure 14: ROI color distributions (CIE XYZ chromaticity plots) of the images in Figures 12 and 13.
Figure 15: Local graphs of Figures 12(b) and 12(d).
a single color. The image processed by the multifusion algorithm has a better visual effect.
Table 3 shows that the selected IQA metrics do not perform well on the images in the UOQ database. They generally give higher objective scores to images enhanced by the histogram equalization algorithm because the color distribution of those images is wider. This is a disadvantage of statistics-based quality evaluation: color fidelity is not taken into account. It can also be seen that if the performance of the original metric is not ideal, the metric combined with ROI will not necessarily improve the situation, because this is a limitation of the original metric itself.
6 Conclusion

Because of the characteristics of the water medium, color has become one of the important concerns in underwater image quality assessment. Color carries important information: severe selective color attenuation or pseudo-vividness can make it difficult to identify foreground content and to extract key, effective information from images. In this paper, a new underwater image evaluation metric, NIPQ, is proposed based on underwater environment characteristics and the HVS. The NIPQ is designed in a four-stage framework. The first stage focuses on the attention mechanism of the HVS. The second stage considers the influence of color richness in a statistical sense. The third stage is inspired by underwater image formation models and considers color fidelity from a pixel perspective. Finally, in the fourth stage, color richness and color fidelity are systematically integrated for real-time quality monitoring. At the same time, the underwater image database UOQ with MOS is built to measure IQA performance. Experimental results show that, compared with other commonly used underwater metrics, the proposed NIPQ has a better correlation with MOS, that is, better performance.
Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61571377, 61771412, and 61871336) and the Fundamental Research Funds for the Central Universities (20720180068).
References
[1] D. Akkaynak and T. Treibitz, "A revised underwater image formation model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6723–6732, Salt Lake City, UT, USA, 2018.
[2] S. Bazeille, I. Quidu, L. Jaulin, and J.-P. Malkasse, "Automatic underwater image pre-processing," CMM'06, Brest, France, 2006.
[3] I. Avcibas, B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," Journal of Electronic Imaging, vol. 11, no. 2, pp. 206–223, 2002.
[4] D.-Y. Tsai, Y. Lee, and E. Matsuyama, "Information entropy measure for evaluation of image quality," Journal of Digital Imaging, vol. 21, no. 3, pp. 338–347, 2008.
[5] Y. Y. Fu, "Color Image Quality Measures and Retrieval," New Jersey Institute of Technology, Newark, NJ, USA, 2006.
[6] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012.
[7] Q. Wu, Z. Wang, and H. Li, "A highly efficient method for blind image quality assessment," in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 339–343, IEEE, Quebec City, Canada, September 2015.
[8] J. Kim, A.-D. Nguyen, and S. Lee, "Deep CNN-based blind image quality predictor," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 11–24, 2018.
[9] S. Bosse, D. Maniry, K.-R. Müller, T. Wiegand, and W. Samek, "Deep neural networks for no-reference and full-reference image quality assessment," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206–219, 2017.
[10] X. Liu, J. Van De Weijer, and A. D. Bagdanov, "RankIQA: learning from rankings for no-reference image quality assessment," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1040–1049, Venice, Italy, October 2017.
[11] K. Panetta, C. Gao, and S. Agaian, "Human-visual-system-inspired underwater image quality measures," IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541–551, 2016.
[12] M. Yang and A. Sowmya, "An underwater color image quality evaluation metric," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 6062–6071, 2015.
[13] Y. Wang, N. Li, Z. Li et al., "An imaging-inspired no-reference underwater color image quality assessment metric," Computers & Electrical Engineering, vol. 70, pp. 904–913, 2018.
[14] S. Kastner and L. G. Ungerleider, "Mechanisms of visual attention in the human cortex," Annual Review of Neuroscience, vol. 23, no. 1, pp. 315–341, 2000.
[15] L. Zhang, J. Chen, and B. Qiu, "Region-of-interest coding based on saliency detection and directional wavelet for remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 1, pp. 23–27, 2016.
[16] C. Zhu, K. Huang, and G. Li, "An innovative saliency guided ROI selection model for panoramic images compression," in Proceedings of the 2018 Data Compression Conference, p. 436, IEEE, Snowbird, UT, USA, March 2018.
[17] Z. Cui, J. Wu, H. Yu, Y. Zhou, and L. Liang, "Underwater image saliency detection based on improved histogram equalization," in Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, pp. 157–165, Springer, Singapore, 2019.
[18] L. Xiu, H. Jing, S. Min, and Z. Yang, "Saliency segmentation and foreground extraction of underwater image based on localization," in Proceedings of OCEANS 2016, Shanghai, China, 2016.
[19] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2814–2821, Columbus, OH, USA, June 2014.
[20] D. Akkaynak and T. Treibitz, "Sea-thru: a method for removing water from underwater images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1682–1691, Long Beach, CA, USA, 2019.
[21] D. Akkaynak, T. Treibitz, T. Shlesinger, Y. Loya, R. Tamir, and D. Iluz, "What is the space of attenuation coefficients in underwater computer vision?," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4931–4940, Honolulu, HI, USA, 2017.
[22] D. Berman, T. Treibitz, and S. Avidan, "Diving into haze-lines: color restoration of underwater images," in Proceedings of the British Machine Vision Conference (BMVC), vol. 1, London, UK, September 2017.
[23] M. G. Solonenko and C. D. Mobley, "Inherent optical properties of Jerlov water types," Applied Optics, vol. 54, no. 17, pp. 5392–5401, 2015.
[24] E. Y. Lam, "Combining gray world and retinex theory for automatic white balance in digital photography," in Proceedings of the Ninth International Symposium on Consumer Electronics, pp. 134–139, IEEE, Melbourne, Australia, July 2005.
[25] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, "A retinex-based enhancing approach for single underwater image," in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 4572–4576, IEEE, Paris, France, October 2014.
[26] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, "Enhancing underwater images and videos by fusion," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 81–88, IEEE, Providence, RI, USA, June 2012.
[27] B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68–75, 1999.
[28] E. H. Land, "The retinex theory of color vision," Scientific American, vol. 237, no. 6, pp. 108–128, 1977.
[29] D. Hasler and S. E. Suesstrunk, "Measuring colorfulness in natural images," in Human Vision and Electronic Imaging VIII, vol. 5007, pp. 87–95, International Society for Optics and Photonics, Bellingham, WA, USA, 2003.
seen that the lower quality image is slightly more than thehigher quality underwater image ampis is because mostunderwater images have the characteristics of blue-greenand poor contrast and sometimes the quality of the en-hanced image is still not ideal In the practical applicationsmore robust enhancement algorithms will be built into theunderwater image enhancement algorithm database of thedisplay module mentioned in Section 2
5 Experiment
In combination with the UOQ database we mainly evaluatethe performance of IQA through five criteria ampe predictionmonotonicity of IQA is measured by the Spearman rankorder correlation coefficient (SROCC) and Kendallrsquos rankorder correlation coefficient (KROCC) ampe prediction ac-curacy of IQA is measured by the Pearson linear correlation
Table 1 Evaluation protocols
Score Comprehensive feelings5 ampe subjective feeling is excellent foreground information is recognizable and no color distortion is felt
4 ampe subjective feeling is good the foreground information is visible and recognizable there is a small amount of perceptualdistortion but it does not affect the extraction of important information
3 ampe subjective feeling is general part of the information in the foreground is damaged and a small amount of importantinformation is lost due to distortion
2 ampe subjective perception is poor and only the general outline of the foreground content can be distinguished the distortion leadsto the loss of some important information
1 ampe subjective feeling is very poor it is difficult to recognize the foreground content and it is almost impossible to extract anyeffective information from the image
454035302520N
um
1510
05
1 15 2 25 3MOS
35 4 45 5
(a)
200
150
100
50
0
Imag
e
1 150 05 2 25 3MOS
35 4 45 5
(b)
Figure 10 (a) Frequency histogram about MOS and (b) MOS of all images
(a) (b) (c) (d) (e) (f )
(g) (h) (i) (j) (k) (l)
Figure 9 Underwater image processed by white balance algorithm [24] (a)ndash(f) are the original imagesampeir MOS are 240 170 300 130255 and 405 respectively (g)ndash(l) are enhanced images ampeir MOS are 105 115 205 280 455 and 115 respectively
10 Scientific Programming
coefficient (PLCC) Root mean square error (RMSE) is usedto measure the prediction consistency of IQA ampe meanabsolute error (MAE) is also used to evaluate the perfor-mance of IQAampe high values (close to 1) of SROCC PLCCand KROCC and the low values (close to 0) of RMSE andMAE indicate that IQA has a better correlation with sub-jective scores
ampe selected IQA metrics for performance comparisoninclude the following
(1) ampe popular no-reference metrics underwaterUIQM [11] UCIQE [12] and CCF [13]
(2) ampe popular no-reference metrics in the air BRIS-QUE [6] and LPSI [7]
(3) Common color metrics for underwater imagesUICM [11] and variance of chromaticity (Var Chr)[29]
For the BRISQUE a low score means high quality andother metrics are that the higher the score the better thequality
51 Effect Analysis of Introducing ROI into IQA In order toobserve the influence of the introduction of ROI on thequality evaluation of underwater images we need tocombine ROI with the popular underwater no-referenceIQA ampe block strategy mentioned in Section 31 is nec-essary because it helps us combine ROI with IQA betterAccording to the block fusion strategy represented by (13)we combine image block with IQA and get comprehensivequality score We can observe the change of correlationbetween objective metrics and MOS before and aftercombining with ROI
ROIQ
1113944mtimesn
WR(i) times Q(i)
1113944mtimesn
WR(i) (13)
WR(i) represents the weight of the i-th image block andQ(i) represents the objective quality score under the metricWR(i) belongs to 0 or 1 ampat is to say the difference be-tween before and after IQA combined with ROI is that theoriginal metric calculates the quality of the whole imagewhile the metric combined with ROI only calculates theimage quality of ROI ampe results are shown in the first sixlines of Table 2 ampe results show that the correlation be-tween the metric combined with ROI and MOS is higherthan the original metric ampis shows that the combination ofROI and IQA is helpful for IQA
52 Performance Analysis of Proposed NIPQ We calculatedthe correlation between various metrics and MOS in thedatabase and the results are shown in Table 2 It can be seenthat the correlation between NIPQ metric and the subjectiveis significantly higher than other metrics
In order to compare the various NR IQA metrics intuitively, scatter diagrams between MOS and the estimated objective scores are drawn for the six selected NR IQA metrics and the NIPQ proposed in this paper, as shown in Figure 11. On this basis, a straight line is fitted to the experimental data by the least-squares method and drawn in each panel. The more tightly the scatter points cluster around the line, the better the correlation between the metric and MOS. The regression lines show that the correlation between NIPQ and MOS is clearly better than that of the other metrics, which validates the results of Table 2. It can be seen that LPSI and BRISQUE, metrics designed for images in the air, are not applicable to underwater images. As a whole, UIQM, UCIQE, and CCF are specially designed for underwater images, and their performance is better than that of the in-air metrics. The performance of UICM, the submetric indicating chromaticity in UIQM, is slightly worse than that of UIQM itself. Compared with the scatter plots of the other NR IQA metrics, our NIPQ shows the best correlation with MOS. Although there are still some aberrant data points, generally speaking, the proposed NIPQ is robust to the variety of typical, representative underwater images contained in the database. Further analysis shows that some of these aberrant points arise because the submetric C_f of the original (unenhanced) image is directly set to 1 in our experiment.
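The least-squares regression line of Figure 11 can be reproduced as follows; this is a generic sketch (our own function name), not code from the paper.

```python
import numpy as np

def fit_regression_line(objective, mos):
    """Fit MOS ~ a * score + b by least squares, as drawn in Figure 11.
    Returns the slope, intercept, and the RMSE of the residuals; a small
    residual RMSE means the metric tracks MOS well."""
    x = np.asarray(objective, dtype=np.float64)
    y = np.asarray(mos, dtype=np.float64)
    a, b = np.polyfit(x, y, deg=1)           # slope, intercept
    rmse = float(np.sqrt(np.mean((y - (a * x + b)) ** 2)))
    return float(a), float(b), rmse
```

Plotting each metric's scores against MOS together with this line gives exactly the visual comparison used in Figure 11.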
As shown in Figures 12 and 13, the UOQ database contains two natural underwater images together with their enhanced versions. Table 3 lists the corresponding MOS and objective scores of these images, and Figure 14 shows the color distributions of their ROIs. The ROI of the original image in Figure 12 is dark, and that in Figure 13 is blue. The images enhanced by the histogram equalization algorithm are reddish; the color distribution of their ROIs is wider, but the ROI colors are obviously oversaturated/pseudo-bright. There is no significant difference between the images processed by the Retinex algorithm and the originals, and the colors produced by Fu's algorithm are not vibrant. For Figure 12, the overall difference between the white balance and multifusion results is small; the local regions (Figure 15) show that the brightness distribution of the multifusion result is uneven and slightly oversaturated, while the image enhanced by the white balance algorithm has a better visual effect. For Figure 13, the image processed by the white balance algorithm is too dark and has
Table 2: Correlation between MOS and the quality scores of the objective evaluation metrics before and after integration with ROI.

Metric      PLCC     SROCC    KROCC    MAE     RMSE
UIQM       -0.173   -0.199   -0.132   0.751   0.903
ROI_UIQM    0.277    0.280    0.196   0.739   0.897
UCIQE       0.294    0.207    0.145   0.707   0.868
ROI_UCIQE   0.374    0.274    0.192   0.683   0.840
CCF         0.069    0.075    0.050   0.791   0.946
ROI_CCF     0.393    0.358    0.254   0.722   0.872
Var_Chr     0.158    0.180    0.125   0.674   0.841
UICM       -0.283   -0.338   -0.225   0.714   0.854
BRISQUE    -0.309   -0.265   -0.185   0.747   0.902
LPSI        0.323    0.245    0.169   0.734   0.898
C_d         0.481    0.465    0.335   0.635   0.789
C_f         0.478    0.432    0.303   0.658   0.806
Proposed    0.641    0.623    0.452   0.576   0.713
Scientific Programming 11
Figure 11: Scatter diagrams between MOS and the estimated objective scores: (a) LPSI, (b) BRISQUE, (c) UICM, (d) UIQM, (e) UCIQE, (f) CCF, (g) C_f, (h) C_d, and (i) proposed. (Each panel plots MOS, on a 1-5 scale, against the metric score.)
Figure 12: (a) Ori, (b) multifusion, (c) Fu, (d) white balance, (e) histogram equalization, and (f) Retinex.
Figure 13: (a) Ori, (b) multifusion, (c) Fu, (d) white balance, (e) histogram equalization, and (f) Retinex.
Table 3: The corresponding MOS and objective scores of Figures 12 and 13.

Figure 12:
Metric      Ori      Multifusion  Fu       White balance  Histogram equalization  Retinex
MOS         2.550    4.500        4.100    4.550          3.050                   2.500
CCF         13.265   22.292       23.887   16.437         30.794                  13.379
ROI_CCF     20.821   33.389       31.274   29.108         26.409                  21.604
UCIQE       0.554    0.664        0.591    0.652          0.684                   0.569
ROI_UCIQE   0.560    0.647        0.573    0.627          0.580                   0.575
UIQM        3.983    4.543        4.850    3.969          4.780                   4.085
ROI_UIQM    5.585    5.589        5.495    5.672          5.055                   5.620
BRISQUE     16.303   26.934       31.708   17.824         36.762                  16.744
LPSI        0.926    0.901        0.910    0.923          0.912                   0.926
C_d         0.243    0.715        0.578    0.698          0.846                   0.324
C_f         1.000    0.802        0.637    0.827          0.464                   0.994
Proposed    0.243    0.574        0.368    0.577          0.392                   0.322

Figure 13:
Metric      Ori      Multifusion  Fu       White balance  Histogram equalization  Retinex
MOS         3.200    3.800        1.550    2.150          2.700                   3.250
CCF         31.443   31.465       37.069   18.688         36.928                  29.029
ROI_CCF     22.582   35.995       32.468   13.265         38.366                  23.097
UCIQE       0.519    0.628        0.623    0.476          0.693                   0.541
ROI_UCIQE   0.541    0.620        0.588    0.447          0.676                   0.564
UIQM        1.504    3.337        4.325    3.840          4.100                   2.182
ROI_UIQM    6.658    5.235        5.249    5.349          4.789                   5.160
BRISQUE     4.330    14.749       17.319   4.153          20.596                  4.441
LPSI        0.923    0.887        0.911    0.904          0.906                   0.926
C_d         0.475    0.730        0.317    0.029          0.640                   0.401
C_f         1.000    0.847        0.632    0.581          0.601                   0.972
Proposed    0.475    0.619        0.200    0.017          0.384                   0.390
Figure 14: ROI color distribution of Figures 12 and 13 (CIE XYZ chromaticity plots, one panel per image).
Figure 15: Local regions of Figures 12(b) and 12(d).
a single color. The image processed by the multifusion algorithm has a better visual effect.
Table 3 shows that the selected IQA metrics do not perform well in assessing the quality of images in the UOQ database. They generally assign higher objective scores to images enhanced by the histogram equalization algorithm because the color distribution of those images is wider. This is a disadvantage of statistics-based quality evaluation: color fidelity is not taken into account. It can also be seen that if the performance of the original metric is not ideal, combining it with the ROI will not necessarily improve the situation, because the limitation lies in the original metric itself.
6. Conclusion

Because of the characteristics of the water medium, color has become one of the most important concerns in underwater image quality assessment. Color carries important information: severe selective color attenuation or pseudo-vividness can make it difficult to identify foreground content and to extract key, effective information from images. In this paper, a new underwater image evaluation metric, NIPQ, is proposed based on the characteristics of the underwater environment and the HVS. The NIPQ is designed in a four-stage framework. The first stage focuses on the attention mechanism of the HVS. The second stage considers the influence of color richness in a statistical sense. The third stage is inspired by underwater image formation models and considers color fidelity from a pixel perspective. Finally, in the fourth stage, color richness and color fidelity are systematically integrated for real-time quality monitoring. In addition, the underwater image database UOQ, annotated with MOS, is built to measure IQA performance. Experimental results show that, compared with other commonly used underwater metrics, the proposed NIPQ has a better correlation with MOS, that is, better performance.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61571377, 61771412, and 61871336) and the Fundamental Research Funds for the Central Universities (20720180068).
References
[1] D. Akkaynak and T. Treibitz, "A revised underwater image formation model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6723-6732, Salt Lake City, UT, USA, March 2018.
[2] S. Bazeille, I. Quidu, L. Jaulin, and J.-P. Malkasse, "Automatic underwater image pre-processing," CMM'06, Brest, France, 2006.
[3] I. Avcibas, B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," Journal of Electronic Imaging, vol. 11, no. 2, pp. 206-223, 2002.
[4] D.-Y. Tsai, Y. Lee, and E. Matsuyama, "Information entropy measure for evaluation of image quality," Journal of Digital Imaging, vol. 21, no. 3, pp. 338-347, 2008.
[5] Y. Y. Fu, "Color image quality measures and retrieval," New Jersey Institute of Technology, Newark, NJ, USA, 2006.
[6] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, 2012.
[7] Q. Wu, Z. Wang, and H. Li, "A highly efficient method for blind image quality assessment," in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 339-343, IEEE, Quebec City, Canada, September 2015.
[8] J. Kim, A.-D. Nguyen, and S. Lee, "Deep CNN-based blind image quality predictor," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 11-24, 2018.
[9] S. Bosse, D. Maniry, K.-R. Muller, T. Wiegand, and W. Samek, "Deep neural networks for no-reference and full-reference image quality assessment," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206-219, 2017.
[10] X. Liu, J. Van De Weijer, and A. D. Bagdanov, "RankIQA: learning from rankings for no-reference image quality assessment," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1040-1049, Venice, Italy, October 2017.
[11] K. Panetta, C. Gao, and S. Agaian, "Human-visual-system-inspired underwater image quality measures," IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541-551, 2016.
[12] M. Yang and A. Sowmya, "An underwater color image quality evaluation metric," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 6062-6071, 2015.
[13] Y. Wang, N. Li, Z. Li et al., "An imaging-inspired no-reference underwater color image quality assessment metric," Computers & Electrical Engineering, vol. 70, pp. 904-913, 2018.
[14] S. Kastner and L. G. Ungerleider, "Mechanisms of visual attention in the human cortex," Annual Review of Neuroscience, vol. 23, no. 1, pp. 315-341, 2000.
[15] L. Zhang, J. Chen, and B. Qiu, "Region-of-interest coding based on saliency detection and directional wavelet for remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 1, pp. 23-27, 2016.
[16] C. Zhu, K. Huang, and G. Li, "An innovative saliency guided ROI selection model for panoramic images compression," in Proceedings of the 2018 Data Compression Conference, p. 436, IEEE, Snowbird, UT, USA, March 2018.
[17] Z. Cui, J. Wu, H. Yu, Y. Zhou, and L. Liang, "Underwater image saliency detection based on improved histogram equalization," in Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, pp. 157-165, Springer, Singapore, 2019.
[18] L. Xiu, H. Jing, S. Min, and Z. Yang, "Saliency segmentation and foreground extraction of underwater image based on localization," in Proceedings of OCEANS 2016, Shanghai, China, 2016.
[19] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2814-2821, Columbus, OH, USA, June 2014.
[20] D. Akkaynak and T. Treibitz, "Sea-thru: a method for removing water from underwater images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1682-1691, Long Beach, CA, USA, April 2019.
[21] D. Akkaynak, T. Treibitz, T. Shlesinger, Y. Loya, R. Tamir, and D. Iluz, "What is the space of attenuation coefficients in underwater computer vision?" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4931-4940, Honolulu, HI, USA, 2017.
[22] D. Berman, T. Treibitz, and S. Avidan, "Diving into haze-lines: color restoration of underwater images," in Proceedings of the British Machine Vision Conference (BMVC), vol. 1, London, UK, September 2017.
[23] M. G. Solonenko and C. D. Mobley, "Inherent optical properties of Jerlov water types," Applied Optics, vol. 54, no. 17, pp. 5392-5401, 2015.
[24] E. Y. Lam, "Combining gray world and retinex theory for automatic white balance in digital photography," in Proceedings of the Ninth International Symposium on Consumer Electronics, pp. 134-139, IEEE, Melbourne, Australia, July 2005.
[25] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, "A retinex-based enhancing approach for single underwater image," in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 4572-4576, IEEE, Paris, France, October 2014.
[26] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, "Enhancing underwater images and videos by fusion," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 81-88, IEEE, Providence, RI, USA, June 2012.
[27] B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68-75, 1999.
[28] E. H. Land, "The retinex theory of color vision," Scientific American, vol. 237, no. 6, pp. 108-128, 1977.
[29] D. Hasler and S. E. Suesstrunk, "Measuring colorfulness in natural images," in Human Vision and Electronic Imaging VIII, vol. 5007, pp. 87-95, International Society for Optics and Photonics, Bellingham, WA, USA, 2003.
Scientific Programming 15
coefficient (PLCC) Root mean square error (RMSE) is usedto measure the prediction consistency of IQA ampe meanabsolute error (MAE) is also used to evaluate the perfor-mance of IQAampe high values (close to 1) of SROCC PLCCand KROCC and the low values (close to 0) of RMSE andMAE indicate that IQA has a better correlation with sub-jective scores
ampe selected IQA metrics for performance comparisoninclude the following
(1) ampe popular no-reference metrics underwaterUIQM [11] UCIQE [12] and CCF [13]
(2) ampe popular no-reference metrics in the air BRIS-QUE [6] and LPSI [7]
(3) Common color metrics for underwater imagesUICM [11] and variance of chromaticity (Var Chr)[29]
For the BRISQUE a low score means high quality andother metrics are that the higher the score the better thequality
51 Effect Analysis of Introducing ROI into IQA In order toobserve the influence of the introduction of ROI on thequality evaluation of underwater images we need tocombine ROI with the popular underwater no-referenceIQA ampe block strategy mentioned in Section 31 is nec-essary because it helps us combine ROI with IQA betterAccording to the block fusion strategy represented by (13)we combine image block with IQA and get comprehensivequality score We can observe the change of correlationbetween objective metrics and MOS before and aftercombining with ROI
ROIQ
1113944mtimesn
WR(i) times Q(i)
1113944mtimesn
WR(i) (13)
WR(i) represents the weight of the i-th image block andQ(i) represents the objective quality score under the metricWR(i) belongs to 0 or 1 ampat is to say the difference be-tween before and after IQA combined with ROI is that theoriginal metric calculates the quality of the whole imagewhile the metric combined with ROI only calculates theimage quality of ROI ampe results are shown in the first sixlines of Table 2 ampe results show that the correlation be-tween the metric combined with ROI and MOS is higherthan the original metric ampis shows that the combination ofROI and IQA is helpful for IQA
52 Performance Analysis of Proposed NIPQ We calculatedthe correlation between various metrics and MOS in thedatabase and the results are shown in Table 2 It can be seenthat the correlation between NIPQ metric and the subjectiveis significantly higher than other metrics
In order to compare various NR IQAs intuitively thescatter diagram between MOS and the estimated objectivescore is drawn including six selected NR IQA and the NIPQ
proposed in this paper as shown in Figure 11 On this basisthe experimental data were regressed by the least squaremethod and the straight line is also drawn ampe better thefitting effect of scatter point is the better the correlationbetween the metric and MOS is ampe regression line showsthat the correlation between NIPQ and MOS is obviouslybetter than other metrics It validates the results of Table 2 Itcan be seen that LPSI and BRISQUE are themetrics designedfor images in the air which are not applicable to underwaterimages As a whole UIQM UCIQE and CCF are speciallydesigned for underwater images and their performance isbetter than that for images in the air Performance of UICMas a submetric indicating chromaticity in UIQM is slightlyworse than that of UIQM Compared with the scatter plots ofother NR IQAmetrics it can be seen that the performance ofour NIPQ shows the best correlation with MOS Althoughthere are still some aberrant data points generally speakingthe proposed NIPQ has better robustness to a variety oftypical representative underwater images contained in thedatabase Further analysis shows that some of these aberrantpoints are caused by the fact that the submetric C f of theoriginal image (without enhancement) is directly taken as 1in our experiment
As shown in Figures 12 and 13 there are two naturalunderwater images and their enhanced images in the UOQdatabase Table 3 shows the corresponding MOS and ob-jective scores of these images Figure 14 shows the colordistribution of their ROI From these images the ROI of theoriginal image of (1) is dark and that of (2) is blueampe imageenhanced by the histogram algorithm is reddish and thecolor distribution of ROI is wider but the color of ROI isobviously oversaturatedpseudobright ampere is no signifi-cant difference between the image processed by the Retinexalgorithm and the original image ampe color of the imageprocessed by Fursquos algorithm is not vibrant For Figure 12 theoverall difference between the white balance and the mul-tifusion algorithm is small ampe local graph (Figure 15)shows that the brightness distribution of the image pro-cessed by the multifusion algorithm is uneven slightlyoversaturated and the image enhanced by the white balancealgorithm has a better visual effect For Figure 13 the imageprocessed by the white balance algorithm is too dark and has
Table 2 Correlation between MOS and quality scores of objectiveevaluation metric before and after integration with ROI
PLCC SROCC KROCC MAE RMSEUIQM minus0173 minus0199 minus0132 0751 0903ROI_UIQM 0277 0280 0196 0739 0897UCIQE 0294 0207 0145 0707 0868ROI_UCIQE 0374 0274 0192 0683 0840CCF 0069 0075 0050 0791 0946ROI_CCF 0393 0358 0254 0722 0872Var_Chr 0158 0180 0125 0674 0841UICM minus0283 minus0338 minus0225 0714 0854BRISQUE minus0309 minus0265 minus0185 0747 0902LPSI 0323 0245 0169 0734 0898C d 0481 0465 0335 0635 0789C f 0478 0432 0303 0658 0806Proposed 0641 0623 0452 0576 0713
Scientific Programming 11
0 02 04 06LPSI
08 1
545
435
325
215
1
MO
S
(a)
0 20 40 60Brisque
80 100 120 140
545
435
325
215
1
MO
S
(b)
ndash200 ndash150 ndash100 ndash50UICM
0 50
545
435
325
215
1
MO
S
(c)
ndash2 ndash1 0 1 2 3UIQM
4 5 6
545
435
325
215
1
MO
S
(d)
02 03 04 05UCIQE
06 07 08
545
435
325
215
1
MO
S
(e)
0 10 20 30CCF
40 50 60 70
545
435
325
215
1
MO
S
(f )
01 02 03 04 05 06Cf
07 08 09 1
545
435
325
215
1
MO
S
(g)
0 02 04Cd
06 108
545
435
325
215
1
MO
S
(h)
0 01 02 03Proposed
04 05 06 07 08 09
545
435
325
215
1
MO
S
(i)
Figure 11 Scatter diagram betweenMOS and estimated objective score (a) LPSI (b) BRISQUE (c) UICM (d) UIQM (e) UCIQE (f ) CCF(g) C f (h) C d and (i) proposed
(a) (b) (c) (d) (e) (f )
Figure 12 (a) Ori (b) multifusion (c) Fu (d) white balance (e) histogram equalization and (f) Retinex
(a) (b) (c) (d) (e) (f )
Figure 13 (a) Ori (b) multifusion (c) Fu (d) white balance (e) histogram equalization and (f) Retinex
12 Scientific Programming
Table 3 ampe corresponding MOS and objective scores of Figures 12 and 13
Ori Multifusion Fu White balance Histogram equalization Retinex
Figure 12
MOS 2550 4500 4100 4550 3050 2500CCF 13265 22292 23887 16437 30794 13379
ROICCF 20821 33389 31274 29108 26409 21604UCIQE 0554 0664 0591 0652 0684 0569
ROIUCIQE 0560 0647 0573 0627 0580 0575UIQM 3983 4543 4850 3969 4780 4085
ROIUIQM 5585 5589 5495 5672 5055 5620BRISQUE 16303 26934 31708 17824 36762 16744
LPSI 0926 0901 0910 0923 0912 0926C d 0243 0715 0578 0698 0846 0324C f 1000 0802 0637 0827 0464 0994
Proposed 0243 0574 0368 0577 0392 0322
Figure 13
MOS 3200 3800 1550 2150 2700 3250CCF 31443 31465 37069 18688 36928 29029
ROICCF 22582 35995 32468 13265 38366 23097UCIQE 0519 0628 0623 0476 0693 0541
ROIUCIQE 0541 0620 0588 0447 0676 0564UIQM 1504 3337 4325 3840 4100 2182
ROIUIQM 6658 5235 5249 5349 4789 5160Brisque 4330 14749 17319 4153 20596 4441LPSI 0923 0887 0911 0904 0906 0926C d 0475 0730 0317 0029 0640 0401C f 1000 0847 0632 0581 0601 0972
Proposed 0475 0619 0200 0017 0384 0390
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
Figure 14 ROI color distribution of Figures 12 and 13
(a) (b)
Figure 15 Local graph of Figures 12(b) and 12(d)
Scientific Programming 13
a single color ampe image processed by the multifusion al-gorithm has a better visual effect
Tables 3 shows that the selected IQAs do not perform wellin the quality assessment of images in the UOQ databaseampey generally have higher objective scores for images en-hanced by the histogram equalization algorithm because thecolor distribution of the images is wider ampis is a disad-vantage of quality evaluation based on statistics color fidelityis not taken into account It can be seen that if the perfor-mance of the original metric is not ideal the metric combinedwith ROI will not necessarily improve this situation becausethis is the limitation of the original metric itself
6 Conclusion
Because of the characteristics of water medium color hasbecome one of the important concerns in underwater imagequality assessment Color contains important informationSevere color selective attenuationpseudo-vividness canmake it difficult to identify foreground content and extractkey and effective information from images In this paper anew underwater image evaluation metric NIPQ is proposedbased on the underwater environment characteristics andHVS ampe NIPQ is designed in a three-stage framework ampefirst stage focuses on the attention mechanism of HVS ampesecond stage considers the influence of color richness in astatistical sense ampe third stage is inspired by underwaterimage formation models and considers color fidelity from apixel perspective Finally in the fourth phase color richnessand color fidelity are systematically integrated for real-timequality monitoring At the same time the relevant under-water image database UOQ with MOS is built to measureIQA performance Experimental results show that com-pared with other commonly used underwater metrics NIPQin this paper has better correlation with MOS which showsbetter performance
Data Availability
ampe data used to support the findings of this study areavailable from the corresponding author upon request
Conflicts of Interest
ampe authors declare that there are no conflicts of interestregarding the publication of this paper
Acknowledgments
ampis work was supported by the National Natural ScienceFoundation of China (61571377 61771412 and 61871336)and the Fundamental Research Funds for the CentralUniversities (20720180068)
References
[1] D Akkaynak and T Treibitz ldquoA revised underwater imageformation modelrdquo in Proceedings of the IEEE Conference onComputer Vision and Pattern Recognition pp 6723ndash6732 SaltLake City UT USA March 2018
[2] S Bazeille I Quidu L Jaulin and J-P Malkasse ldquoAutomaticUnderwater image Pre-processingrdquo CMMrsquo06 Brest France2006
[3] I Avcibas B Sankur and K Sayood ldquoStatistical evaluation ofimage quality measuresrdquo Journal of Electronic Imagingvol 11 no 2 pp 206ndash223 2002
[4] D-Y Tsai Y Lee and E Matsuyama ldquoInformation entropymeasure for evaluation of image qualityrdquo Journal of DigitalImaging vol 21 no 3 pp 338ndash347 2008
[5] Y Y Fu ldquoColor image Quality Measures and Retrievalrdquo NewJersey Institute of Technology Newark NJ USA 2006
[6] A Mittal A K Moorthy and A C Bovik ldquoNo-referenceimage quality assessment in the spatial domainrdquo IEEETransactions on Image Processing vol 21 no 12 pp 4695ndash4708 2012
[7] Q Wu Z Wang and H Li ldquoA highly effificient method forblind image quality assessmentrdquo in Proceedings of the 2015IEEE International Conference on Image Processing (ICIP)pp 339ndash343 IEEE Quebec City Canada September 2015
[8] J Kim A-D Nguyen and S Lee ldquoDeep cnn-based blindimage quality predictorrdquo IEEE Transactions on Neural Net-works and Learning Systems vol 30 no 1 pp 11ndash24 2018
[9] S Bosse D Maniry K-R Muller T Wiegand andW SamekldquoDeep neural networks for no-reference and full-referenceimage quality assessmentrdquo IEEE Transactions on ImageProcessing vol 27 no 1 pp 206ndash219 2017
[10] X Liu J Van De Weijer and A D Bagdanov ldquoRankiqalearning from rankings for no-reference image quality as-sessmentrdquo in Proceedings of the IEEE International Conferenceon Computer Vision pp 1040ndash1049 Venice Italy October2017
[11] K Panetta C Gao and S Agaian ldquoHuman-visual-system-inspired underwater image quality measuresrdquo IEEE Journal ofOceanic Engineering vol 41 no 3 pp 541ndash551 2016
[12] M Yang and A Sowmya ldquoAn underwater color image qualityevaluation metricrdquo IEEE Transactions on Image Processingvol 24 no 12 pp 6062ndash6071 2015
[13] YWang N Li Z Li et al ldquoAn imaging-inspired no-referenceunderwater color image quality assessment metricrdquo Com-puters amp Electrical Engineering vol 70 pp 904ndash913 2018
[14] S Kastner and L G Ungerleider ldquoMechanisms of visualattention in the human cortexrdquo Annual Review of Neuro-science vol 23 no 1 pp 315ndash341 2000
[15] L Zhang J Chen and B Qiu ldquoRegion-of-interest codingbased on saliency detection and directional wavelet for remotesensing imagesrdquo IEEE Geoscience and Remote Sensing Lettersvol 14 no 1 pp 23ndash27 2016
[16] C Zhu K Huang and G Li ldquoAn innovative saliency guidedroi selection model for panoramic images compressionrdquo inProceedings of the 2018 Data Compression Conference p 436IEEE Snowbird UT USA March 2018
[17] Z Cui J Wu H Yu Y Zhou and L Liang ldquoUnderwaterimage saliency detection based on improved histogramequalizationrdquo in Proceedings of the International Conference ofPioneering Computer Scientists Engineers and EducatorsEngineers and Educators pp 157ndash165 Springer Singapore2019
[18] L Xiu H Jing S Min and Z Yang ldquoSaliency segmentationand foreground extraction of underwater image based onlocalizationrdquo in Proceedings of the OCEANS 2016 ShanghaiChina 2016
[19] W Zhu S Liang Y Wei and J Sun ldquoSaliency optimizationfrom robust background detectionrdquo in Proceedings of the IEEE
14 Scientific Programming
Conference on Computer Vision and Pattern Recognitionpp 2814ndash2821 Columbus OH USA June 2014
[20] D Akkaynak and T Treibitz ldquoSea-thru a method for re-moving water from underwater imagesrdquo in Proceedings of theIEEE Conference on Computer Vision and Pattern Recognitionpp 1682ndash1691 Long Beach CA USA April 2019
[21] D Akkaynak T Treibitz T Shlesinger Y Loya R Tamir andD Iluz ldquoWhat is the space of attenuation coeffificients inunderwater computer visionrdquo in Proceedings of the IEEEConference on Computer Vision and Pattern Recognitionpp 4931ndash4940 Honolulu HI USA 2017
[22] D Berman T Treibitz and S Avidan ldquoDiving into haze-linescolor restoration of underwater imagesrdquo in Proceedings of theBritish Machine Vision Conference (BMVC) vol 1 LondonUK September 2017
[23] M G Solonenko and C D Mobley ldquoInherent opticalproperties of jerlov water typesrdquo Applied Optics vol 54no 17 pp 5392ndash5401 2015
[24] E Y Lam ldquoCombining gray world and retinex theory forautomatic white balance in digital photographyrdquo in Pro-ceedings of the Ninth International Symposium on ConsumerElectronics 2005 IEEE Melbourne Australia pp 134ndash139July 2005
[25] X Fu P Zhuang Y Huang Y Liao X-P Zhang andX Ding ldquoA retinex-based enhancing approach for singleunderwater imagerdquo in Proceedings of the 2014 IEEE Inter-national Conference on Image Processing (ICIP) pp 4572ndash4576 IEEE Paris France October 2014
[26] C Ancuti C O Ancuti T Haber and P Bekaert ldquoEnhancingunderwater images and videos by fusionrdquo in Proceedings of the2012 IEEE Conference on Computer Vision and Pattern Rec-ognition IEEE Providence RI USA pp 81ndash88 June 2012
[27] B Zhang ldquoImage enhancement based on equal area dualisticsub-image histogram equalization methodrdquo IEEE Transac-tions on Consumer Electronics vol 45 no 1 75 pages
[28] E H Land ldquoampe retinex theory of color visionrdquo ScientificAmerican vol 237 no 6 pp 108ndash128 1977
[29] D Hasler and S E Suesstrunk ldquoMeasuring colorfulness innatural imagesrdquo in Human Vision and Electronic ImagingVIII vol 5007 pp 87ndash95 International Society for Optics andPhotonics Bellingham WA USA 2003
Scientific Programming 15
0 02 04 06LPSI
08 1
545
435
325
215
1
MO
S
(a)
0 20 40 60Brisque
80 100 120 140
545
435
325
215
1
MO
S
(b)
ndash200 ndash150 ndash100 ndash50UICM
0 50
545
435
325
215
1
MO
S
(c)
ndash2 ndash1 0 1 2 3UIQM
4 5 6
545
435
325
215
1
MO
S
(d)
02 03 04 05UCIQE
06 07 08
545
435
325
215
1
MO
S
(e)
0 10 20 30CCF
40 50 60 70
545
435
325
215
1
MO
S
(f )
01 02 03 04 05 06Cf
07 08 09 1
545
435
325
215
1
MO
S
(g)
0 02 04Cd
06 108
545
435
325
215
1
MO
S
(h)
0 01 02 03Proposed
04 05 06 07 08 09
545
435
325
215
1
MO
S
(i)
Figure 11 Scatter diagram betweenMOS and estimated objective score (a) LPSI (b) BRISQUE (c) UICM (d) UIQM (e) UCIQE (f ) CCF(g) C f (h) C d and (i) proposed
(a) (b) (c) (d) (e) (f )
Figure 12 (a) Ori (b) multifusion (c) Fu (d) white balance (e) histogram equalization and (f) Retinex
(a) (b) (c) (d) (e) (f )
Figure 13 (a) Ori (b) multifusion (c) Fu (d) white balance (e) histogram equalization and (f) Retinex
12 Scientific Programming
Table 3 ampe corresponding MOS and objective scores of Figures 12 and 13
Ori Multifusion Fu White balance Histogram equalization Retinex
Figure 12
MOS 2550 4500 4100 4550 3050 2500CCF 13265 22292 23887 16437 30794 13379
ROICCF 20821 33389 31274 29108 26409 21604UCIQE 0554 0664 0591 0652 0684 0569
ROIUCIQE 0560 0647 0573 0627 0580 0575UIQM 3983 4543 4850 3969 4780 4085
ROIUIQM 5585 5589 5495 5672 5055 5620BRISQUE 16303 26934 31708 17824 36762 16744
LPSI 0926 0901 0910 0923 0912 0926C d 0243 0715 0578 0698 0846 0324C f 1000 0802 0637 0827 0464 0994
Proposed 0243 0574 0368 0577 0392 0322
Figure 13
MOS 3200 3800 1550 2150 2700 3250CCF 31443 31465 37069 18688 36928 29029
ROICCF 22582 35995 32468 13265 38366 23097UCIQE 0519 0628 0623 0476 0693 0541
ROIUCIQE 0541 0620 0588 0447 0676 0564UIQM 1504 3337 4325 3840 4100 2182
ROIUIQM 6658 5235 5249 5349 4789 5160Brisque 4330 14749 17319 4153 20596 4441LPSI 0923 0887 0911 0904 0906 0926C d 0475 0730 0317 0029 0640 0401C f 1000 0847 0632 0581 0601 0972
Proposed 0475 0619 0200 0017 0384 0390
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
1CIE XYZ
08060402
00
005YX
Z
105
1
Figure 14 ROI color distribution of Figures 12 and 13
Figure 15: Local graphs of Figures 12(b) and 12(d).
a single color. The image processed by the multifusion algorithm has a better visual effect.
Table 3 shows that the selected IQAs do not perform well on the images in the UOQ database. They generally assign higher objective scores to images enhanced by the histogram equalization algorithm because the color distribution of those images is wider. This is a disadvantage of statistics-based quality evaluation: color fidelity is not taken into account. It can also be seen that if the original metric itself does not perform well, combining it with ROI will not necessarily improve the situation, because the limitation lies in the original metric.
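The bias toward widely spread color distributions is visible in statistics-based colorfulness measures such as Hasler and Süsstrunk's [29], on which several of the compared metrics build. A minimal sketch (numpy only; the weights follow the published formula) makes the point: the score rewards opponent-channel spread regardless of whether the colors are faithful to the scene.

```python
import numpy as np

def colorfulness(img):
    """Hasler–Süsstrunk colorfulness of an RGB image (H, W, 3) with values in
    [0, 255]. Purely statistical: it rewards spread in the opponent channels
    and is blind to color fidelity."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    rg = r - g                   # red-green opponent channel
    yb = 0.5 * (r + g) - b       # yellow-blue opponent channel
    sigma = np.hypot(rg.std(), yb.std())   # joint opponent-channel spread
    mu = np.hypot(rg.mean(), yb.mean())    # joint opponent-channel offset
    return sigma + 0.3 * mu

# A flat gray image scores 0; stretching the histogram (here: random colors)
# raises the score even though no fidelity to the scene is gained.
gray = np.full((8, 8, 3), 128, dtype=np.uint8)
noisy = np.random.default_rng(0).integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
print(colorfulness(gray))                        # 0.0
print(colorfulness(noisy) > colorfulness(gray))  # True
```

This is exactly the failure mode the paragraph above describes: histogram equalization widens the distribution, inflating such scores, while the pixel-level fidelity term of the proposed metric penalizes the introduced color cast.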
6 Conclusion
Because of the characteristics of the water medium, color has become one of the central concerns in underwater image quality assessment. Color carries important information: severe selective color attenuation or pseudo-vividness can make it difficult to identify foreground content and to extract key information from images. In this paper, a new underwater image quality metric, NIPQ, is proposed based on the characteristics of the underwater environment and the HVS. NIPQ is designed as a four-stage framework. The first stage focuses on the attention mechanism of the HVS. The second stage considers the influence of color richness in a statistical sense. The third stage is inspired by underwater image formation models and considers color fidelity from a pixel perspective. Finally, the fourth stage systematically integrates color richness and color fidelity for real-time quality monitoring. In addition, the underwater optical image quality database UOQ, annotated with MOS, is built to measure IQA performance. Experimental results show that, compared with other commonly used underwater metrics, NIPQ correlates better with MOS, indicating better performance.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61571377, 61771412, and 61871336) and the Fundamental Research Funds for the Central Universities (20720180068).
References
[1] D. Akkaynak and T. Treibitz, "A revised underwater image formation model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6723–6732, Salt Lake City, UT, USA, March 2018.
[2] S. Bazeille, I. Quidu, L. Jaulin, and J.-P. Malkasse, "Automatic underwater image pre-processing," CMM'06, Brest, France, 2006.
[3] I. Avcibas, B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," Journal of Electronic Imaging, vol. 11, no. 2, pp. 206–223, 2002.
[4] D.-Y. Tsai, Y. Lee, and E. Matsuyama, "Information entropy measure for evaluation of image quality," Journal of Digital Imaging, vol. 21, no. 3, pp. 338–347, 2008.
[5] Y. Y. Fu, "Color Image Quality Measures and Retrieval," New Jersey Institute of Technology, Newark, NJ, USA, 2006.
[6] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012.
[7] Q. Wu, Z. Wang, and H. Li, "A highly efficient method for blind image quality assessment," in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 339–343, IEEE, Quebec City, Canada, September 2015.
[8] J. Kim, A.-D. Nguyen, and S. Lee, "Deep CNN-based blind image quality predictor," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 11–24, 2018.
[9] S. Bosse, D. Maniry, K.-R. Müller, T. Wiegand, and W. Samek, "Deep neural networks for no-reference and full-reference image quality assessment," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206–219, 2017.
[10] X. Liu, J. van de Weijer, and A. D. Bagdanov, "RankIQA: learning from rankings for no-reference image quality assessment," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1040–1049, Venice, Italy, October 2017.
[11] K. Panetta, C. Gao, and S. Agaian, "Human-visual-system-inspired underwater image quality measures," IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541–551, 2016.
[12] M. Yang and A. Sowmya, "An underwater color image quality evaluation metric," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 6062–6071, 2015.
[13] Y. Wang, N. Li, Z. Li et al., "An imaging-inspired no-reference underwater color image quality assessment metric," Computers & Electrical Engineering, vol. 70, pp. 904–913, 2018.
[14] S. Kastner and L. G. Ungerleider, "Mechanisms of visual attention in the human cortex," Annual Review of Neuroscience, vol. 23, no. 1, pp. 315–341, 2000.
[15] L. Zhang, J. Chen, and B. Qiu, "Region-of-interest coding based on saliency detection and directional wavelet for remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 1, pp. 23–27, 2016.
[16] C. Zhu, K. Huang, and G. Li, "An innovative saliency guided ROI selection model for panoramic images compression," in Proceedings of the 2018 Data Compression Conference, p. 436, IEEE, Snowbird, UT, USA, March 2018.
[17] Z. Cui, J. Wu, H. Yu, Y. Zhou, and L. Liang, "Underwater image saliency detection based on improved histogram equalization," in Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, pp. 157–165, Springer, Singapore, 2019.
[18] L. Xiu, H. Jing, S. Min, and Z. Yang, "Saliency segmentation and foreground extraction of underwater image based on localization," in Proceedings of OCEANS 2016, Shanghai, China, 2016.
[19] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2814–2821, Columbus, OH, USA, June 2014.
[20] D. Akkaynak and T. Treibitz, "Sea-thru: a method for removing water from underwater images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1682–1691, Long Beach, CA, USA, April 2019.
[21] D. Akkaynak, T. Treibitz, T. Shlesinger, Y. Loya, R. Tamir, and D. Iluz, "What is the space of attenuation coefficients in underwater computer vision?" in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4931–4940, Honolulu, HI, USA, 2017.
[22] D. Berman, T. Treibitz, and S. Avidan, "Diving into haze-lines: color restoration of underwater images," in Proceedings of the British Machine Vision Conference (BMVC), vol. 1, London, UK, September 2017.
[23] M. G. Solonenko and C. D. Mobley, "Inherent optical properties of Jerlov water types," Applied Optics, vol. 54, no. 17, pp. 5392–5401, 2015.
[24] E. Y. Lam, "Combining gray world and retinex theory for automatic white balance in digital photography," in Proceedings of the Ninth International Symposium on Consumer Electronics, pp. 134–139, IEEE, Melbourne, Australia, July 2005.
[25] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, "A retinex-based enhancing approach for single underwater image," in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 4572–4576, IEEE, Paris, France, October 2014.
[26] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, "Enhancing underwater images and videos by fusion," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 81–88, IEEE, Providence, RI, USA, June 2012.
[27] Y. Wang, Q. Chen, and B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68–75, 1999.
[28] E. H. Land, "The retinex theory of color vision," Scientific American, vol. 237, no. 6, pp. 108–128, 1977.
[29] D. Hasler and S. E. Süsstrunk, "Measuring colorfulness in natural images," in Human Vision and Electronic Imaging VIII, vol. 5007, pp. 87–95, International Society for Optics and Photonics, Bellingham, WA, USA, 2003.