Open Access

A fusion approach based on infrared finger vein transmitting model by using multi-light-intensity imaging

Human-centric Computing and Information Sciences 2017, 7:35

Received: 18 February 2017

Accepted: 14 August 2017

Published: 16 October 2017


This paper proposes an infrared transmitting model of the finger, recovered from vein images observed under multi-light-intensity imaging. The model is estimated from the values of many pixels captured in the same scene under different light intensities. Because the fusion method can be applied in a biometric system, the finger vein images captured by the system are normalized so that the vein patterns in the biometric data extracted from the tested subject remain intact. From the pixels observed under multiple light intensities, the curve of the transmitting model is recovered by sliding the sampled curve segments and applying curve fitting. A pixel-level weighted fusion method based on the proposed transmitting-model curve combines block quality estimation with spatial smoothing. The results show that our approach is a convenient and practicable method for infrared image fusion and subsequent processing in biometric applications.


Keywords: Vein image, Multi-light-intensity, Transmitting model, Image fusion


Finger vein authentication is highly accurate and convenient because it exploits an individual's unique biological characteristics. Vascular patterns are unique to each individual; even identical twins have different patterns. Finger vein authentication relies on the vein patterns in the superficial subcutaneous layer of the finger, which are unique [1–3]. Vein authentication has three main advantages: (1) because the finger veins are hidden inside the body, there is little risk of forgery or theft in daily activities, and the surface condition of the finger skin, e.g. dryness or wetness, does not affect authentication; (2) finger vein imaging is non-invasive and contactless, which is convenient and hygienic for users; (3) finger vein patterns are more stable and complex than many other biometric features of the human body, offering a higher security level for personal identification [4].

Physiological information extracted from the human body, including features of the face, palm print, fingerprint, hand shape, skin, temperature, and arterial pulse, is used both for personal identification and for diagnosing some diseases. This information, together with the subcutaneous superficial vein pattern, can be extracted and digitized as biometric data, and further represented as a typical pattern for identifying individuals [5–9]. It is convenient to use the identified biometric as an access credential; typical applications focus on remote access control for websites, e.g. finance or banking sites. However, biometric image data are sensitive to physiological conditions and to the capture environment. For example, when features are captured from a human face, the illumination distribution and direction should be corrected or normalized before storage: the captured image may contain many shadows or much noise, which would strongly influence the extracted features [10]. Likewise, non-uniform illumination increases interference and redundant information, or submerges some patterns, leading to deformation of the feature space. It is therefore very important to normalize captured biometric information before committing it to the storage of a biometric system [11, 12]. Similar problems also appear in the finger vein image capturing process [13–18]. The width of a vein in the captured image changes under near-infrared light of different intensities. Because every finger has a different thickness, under- or over-exposure may appear in the thick or thin areas of the finger when a single fixed light intensity is used, and the vein pattern in those areas is washed out. Since the integrity of the vein pattern is essential for a biometric system, it is necessary to normalize the illumination during vein image capture before storing the images in the biometric database.

The main task of finger vein authentication is to collect the finger vein images; image quality directly affects recognition accuracy and speed. This paper presents a detailed analysis of infrared finger vein images, builds the transmitting model from observed data, i.e. multi-light-intensity vein images, and proposes a pixel-level fusion method based on the transmitting model together with spatial smoothing.

The remainder of this paper is organized as follows: in “The infrared light transmission model of the finger” section, we introduce the infrared light transmission model of the finger. In “Multi-light-intensity finger vein images’ fusion based on the transmitting model” section, we formalize the fusion of multi-light-intensity finger vein images based on the transmitting model. Next, we present examinations and discussions in “Examinations and discussions” section. Finally, we draw our conclusions and outline further work in “Conclusions and further works” section.

The infrared light transmission model of the finger

This model extends and modifies the one in Ref. [3]. The basic steps from bioinformation to biometric data in this model are described in “Basic works from bioinformation to the biometric data” section, and the single infrared transmitting model is described in “A single infrared transmitting model” section.

Basic works from bioinformation to the biometric data

Applications of biometric data include personal identification and disease diagnosis. Figure 1 illustrates the system architecture from bioinformation to biometric data for a single infrared transmitting model in a biometric system. Clearly, the capture, digitization, and normalization of bioinformation should be efficient, so that complete pattern or texture information, a uniform gray distribution, and adequate contrast are recorded before any application. This paper presents a single transmitting model of finger vein imaging in a biometric system and uses it to fuse the multi-light-intensity finger vein images into one image, which integrates the vein pattern information of each source image and keeps the complete vein pattern.
Fig. 1

The system architecture

A single infrared transmitting model

This subsection describes the single infrared transmitting model. It is common to use near-infrared (NIR) light transmitted through the finger for angiogram imaging. Because the hemoglobin in venous blood absorbs the transmitted light much more strongly than arterial blood and other tissues, such as fat and muscle, the absorption along the veins is relatively high. Thus the 760–1100 nm band is suitable for angiogram imaging, as shown by the absorption curves of water, oxyhemoglobin (HbO), and deoxyhemoglobin (Hb) in Fig. 2. This higher absorption makes the vein pattern regions appear darker than the surrounding regions after the NIR light has passed through the finger. The technology is also widely used in vascular imaging of the breast and brain.
Fig. 2

The absorptivity of water, Hb, HbO in the finger’s vein [19]

The optical properties of tissue have been modeled using photon diffusion theory. The epidermis (the outermost layer of skin) accounts for only 6% of scattering and can be regarded as a primarily absorptive medium. Therefore, a simplified model of the reflectance of blood and tissue considers only the reflectance from the scattering tissue beneath the epidermis [12]. The skin is assumed to be a semi-infinite homogeneous medium under uniform, diffusive illumination. A photon has a relatively long residence time, which allows it to undergo a random walk within the medium. Photon diffusion depends on the absorption and scattering properties of the skin, whose penetration depth for different wavelengths is shown in Fig. 3.
Fig. 3

The penetration depth of different wavelengths [12]

Considering all these factors, namely tissue (water, Hb, and HbO) absorption in the vein and the depth of penetration, the infrared waveband used for finger vein imaging is about 850 nm in practice.

Because the thickness of the finger varies nonlinearly, it is difficult to image the veins at 850 nm with a single fixed light intensity; overexposure and underexposure therefore often appear in infrared finger vein images. Such over- or under-exposed areas cannot be enhanced, which causes vein pattern loss during biometric data extraction. Infrared multi-light-intensity finger vein imaging was used in [13] to solve this problem by extending the dynamic range of infrared vein imaging [14]. The complementary vein information must then be fused in a subsequent step. This paper presents a calculation method for the infrared finger vein transmitting model based on multi-light-intensity imaging. The model captures the monotonically increasing nonlinear relationship between light intensity and pixel gray value; it can be built by a genetic algorithm and used for imaging-quality estimation in the pixel-level fusion of infrared multi-light-intensity finger vein images.

The infrared finger vein transmitting model [3] is defined as:
$$B = f(X)$$
\(X\) is the irradiance of the infrared light transmitted through the finger, and \(B\) is the resulting pixel gray value. Generally, the gray level of a pixel is 8 bits. The infrared finger vein transmitting model function [3] is written explicitly as
$$\left\{ \begin{array}{ll} B = B_{\min} = 0, & \text{if } X \le X_{\min} \\ B = f(X), & \text{if } X_{\min} < X < X_{\max} \\ B = B_{\max} = 255, & \text{if } X_{\max} \le X \end{array} \right.$$
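As a toy illustration of the clipped monotone response above (not the paper's fitted curve), the model can be sketched in Python; the logistic shape and the thresholds `X_min`, `X_max` are assumed parameters chosen only to show the under/over-exposure clipping:

```python
import numpy as np

def transmit_response(X, X_min=10.0, X_max=200.0):
    """Hypothetical transmitting model B = f(X): a monotone increasing
    response clipped to the 8-bit range outside [X_min, X_max].
    The logistic shape is an illustrative assumption, not the fitted
    curve from the paper."""
    X = np.asarray(X, dtype=float)
    mid = 0.5 * (X_min + X_max)
    scale = (X_max - X_min) / 10.0
    B = 255.0 / (1.0 + np.exp(-(X - mid) / scale))  # monotone increasing part
    B[X <= X_min] = 0.0      # underexposed: clipped to B_min
    B[X >= X_max] = 255.0    # overexposed: clipped to B_max
    return B
```

Pixels whose irradiance falls in the clipped regions carry no vein information, which is exactly why multiple light intensities are needed.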

Assume there are \(N\) vein images captured under increasing light intensities \(X_{p}, p = 1, \ldots, N\). The size of each image is \(m \times n\); write \(K = m \times n\). The qth pixel of the pth light-intensity image is denoted \(B_{pq}\); the set \(\left\{ B_{pq} \right\}, p = 1, \ldots, N, q \in \{1, \ldots, K\}\), represents the known observations. The goal is to determine the underlying light values or irradiances, denoted \(X_{q}\), that gave rise to the observations \(B_{pq}\). The \(N\) vein images have been properly registered at the pixel level, so that for a particular \(q\), the light value \(X_{q}\) contributes to \(B_{pq}, p = 1, \ldots, N\). For this work, a normalized cross-correlation function is used as the matching criterion to register images to 1/2-pixel resolution [15].

The model can be rewritten as:
$$B_{pq} = f_{q}(X_{pq}), \quad p = 1, \ldots, N, \quad q \in \{1, \ldots, K\}.$$

This means the transmitting model differs at each position q. Nevertheless, since the shape of each model is similar, this gives an easy way to estimate the transmitting model for each pixel in the application.

Since f is monotonic and invertible, its inverse function can be represented as \(g\).
$$X_{pq} = g_{q}(B_{pq}), \quad p = 1, \ldots, N, \quad q \in \{1, \ldots, K\}.$$
It is necessary to recover the function \(g\) and the irradiances \(X_{p}, p = 1, \ldots, N\), that satisfy the set of equations arising from Eq. (4) in a least-squared-error sense. Recovering \(g\) only requires recovering the finite number of values that \(g(B)\) can take, since the domain of \(g\), the pixel brightness values, is finite. Letting \(B_{\min}\) and \(B_{\max}\) be the least and greatest pixel values (integers), \(q\) the number of pixel locations, and \(N\) the number of photographs, we formulate the problem as one of finding the \(B_{\max} - B_{\min} + 1\) values of \(g(B)\) and the \(q\) values of \(X\) that minimize the following quadratic objective function [3]:
$$\xi = \sum_{i = 1}^{N} \sum_{j = 1}^{q} \left[ g(B_{ij}) - X_{i} \right]^{2} + \lambda \sum_{b = B_{\min} + 1}^{B_{\max} - 1} \left( g''(b) \right)^{2}$$
The first term ensures that the solution satisfies the set of equations arising from Eq. (4) in a least-squares sense. The second term is a smoothness penalty on the sum of squared values of the second derivative of \(g\); in this discrete setting, it can be calculated by formula (6).
$$g'' = g(b + 1) + g(b - 1) - 2g(b)$$

This smoothness term is essential to the formulation in that it couples the values \(g(b)\) in the minimization. The scalar \(\lambda\) weights the smoothness term relative to the data-fitting term and should be chosen according to the amount of noise expected in the \(B_{ij}\) measurements.

Because \(\xi\) is quadratic in the \(X_{p}\) and the \(g(b)\) values, minimizing it is a straightforward linear least-squares problem. The overdetermined system of linear equations is robustly solved using the singular value decomposition (SVD). An intuitive explanation of the procedure may be found in the corresponding section and Fig. 2 of reference [15].
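A minimal numpy sketch of this linear least-squares recovery, in the style of Debevec and Malik's gsolve [15]: here we assume the relative light intensities (e.g. the PWM duties of the capture platform) are known, which stands in for the paper's genetic-algorithm curve fitting; the weight `lam` and the scale constraint g(128) = 0 are illustrative choices:

```python
import numpy as np

def gsolve(B, log_intensity, lam=100.0):
    """B: N x K integer pixel values (0..255) of K registered pixel sites
    under N known relative light intensities (log_intensity, length N).
    Solves for the inverse response g (256 values) and per-pixel
    irradiances X_q in a least-squares sense, with a lam-weighted
    second-derivative smoothness term on g."""
    N, K = B.shape
    n = 256
    rows = N * K + 1 + (n - 2)
    A = np.zeros((rows, n + K))
    b = np.zeros(rows)
    r = 0
    for p in range(N):               # data term: g(B_pq) - X_q = log I_p
        for q in range(K):
            A[r, B[p, q]] = 1.0
            A[r, n + q] = -1.0
            b[r] = log_intensity[p]
            r += 1
    A[r, n // 2] = 1.0               # fix the scale: g(128) = 0
    r += 1
    for z in range(1, n - 1):        # smoothness: g(z-1) - 2 g(z) + g(z+1) = 0
        A[r, z - 1], A[r, z], A[r, z + 1] = lam, -2.0 * lam, lam
        r += 1
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n], x[n:]              # g over 0..255, irradiances X_1..X_K
```

Because the smoothness term penalizes only curvature, a near-linear response survives the regularization; in the paper's setting, where the intensities themselves are unknown, the extra curve-sliding step resolves the resulting ambiguity.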

In reference [16], the noise in the \(X_{p}\) is modeled as an independent Gaussian random variable with variance \(\sigma^{2}\), and the joint probability density function can be written as:
$$P(X_{B}) \propto \exp \left\{ -\frac{1}{2} \sum_{p,q} w_{pq} \left( I_{B_{pq}} - X_{pq} \right)^{2} \right\}$$
A maximum-likelihood (ML) approach is taken to find the high dynamic range image values. The maximum likelihood solution finds the values \(X_{q}\) that maximize the probability in Eq. (7). Maximizing Eq. (7) is equivalent to minimizing the negative of its natural logarithm, which leads to the following objective function to be minimized:
$$\xi (X) = \sum\limits_{p,q} {w_{pq} (I_{Bpq} - X_{pq} )^{2} }$$

Even with the simplifying Gaussian approximation, the noise variances \(\sigma_{pq}^{2}\) are difficult to characterize accurately: detailed knowledge of the image capture process would be required, and the noise characterization would have to be repeated whenever a different device captures the image.

Equation (8) can be minimized by setting the gradient of \(\xi(X)\) to zero. But if the \(X_{p}\) were unknown at each pixel, one could jointly estimate \(X_{p}\) and \(X_{q}\) by arbitrarily fixing one of the q positions and then iteratively optimizing Eq. (8) with respect to both. Without an analytic expression for the transmitting model, these estimates are difficult to solve for.

From the observed pixels, this paper estimates the transmitting model curve by sliding the sampled curve segments and blending them into a single monotonically increasing curve using a genetic algorithm. Once the blended curve is built or fitted, the curves for other pixels can be redrawn from several sample points. The recovery of the blended curve is shown in Fig. 4. The complete blended curve \(g\) can then be used to obtain the transmitting model function \(f\), shown in Fig. 5.
Fig. 4

Sliding and blending the sampled curves into one complete curve [15]. a Three curves from the observed three points under five different irradiation conditions. b Sliding the curve and blending them to one curve

Fig. 5

A transmitting model curve of a finger

Multi-light-intensity finger vein images’ fusion based on the transmitting model

This section presents a fusion algorithm for multi-light-intensity finger vein images based on the transmitting model. In pixel-level fusion, estimating the imaging quality of each pixel is very important. In the previous section, the transmitting model was established from the observed data; its derivative curve is shown in Fig. 6. The value of \(\Delta B\) is clearly about zero in the underexposed and overexposed ranges, which means the infrared light intensity in those ranges is not suitable for finger vein imaging. Conversely, the value of \(\Delta B\) can be used to evaluate how well the irradiance of the infrared light suits a pixel.
Fig. 6

The derivatives function curve to Fig. 5

The fusion method in this paper operates at the pixel level. First, each infrared multi-light-intensity finger vein image is divided by column into R independent blocks.

Denote each block by \(T_{rp}, r = 1,2, \ldots ,R{\text{ and }}p = 1,2, \ldots ,N\), where \(r\) is the block index and \(p\) is the image number. To estimate the quality of each \(T_{rp}\), the average gray value of the block is computed as \(\overline{g}_{rp} = mean2(T_{rp})\). Then \(\overline{g}_{rp}\) is looked up on the derivative curve (Fig. 6) to obtain the \(\Delta B_{{\overline{g}_{rp} }}\) value used in the fusion. The fusion weight of block \(T_{rp}\) is defined [3] as:
$$S_{{rp}} = \exp \left[ {\alpha \cdot\Delta B_{{\bar{g}_{{rp}} }} } \right]$$
The constant parameter \(\alpha\) is a smoothing coefficient. To avoid a checkerboard edge between two adjacent blocks, an additional spatial smoothing weight \(G_{rp}\) is defined:
$$G_{rp} (x,y) = \exp\left [ - \frac{{(y - y_{c} )^{2} }}{{2\sigma^{2} }}\right]$$

The constant parameter \(\sigma\) is the standard deviation of the Gaussian; \(x\) is the row number and \(y\) the column number in the finger vein image, and \(y_{c}\) is the center column of the block.

The final weight combines the gray-information coefficient \(S_{rp}\) and the spatial smoothing coefficient \(G_{rp}\). The joint weight is defined as:
$$\omega_{rp} = G_{rp} *S_{rp}$$
Its normalized value is defined as:
$$\varpi_{rp} = \omega_{rp} \Big/ \sum_{p = 1}^{N} \omega_{rp}$$
In the fusion, each fused block \(I_{r}, r = 1,2, \ldots ,R\) is calculated with the normalized weights by Eq. (13) [3]:
$$I_{r} = \sum_{p = 1}^{N} I_{rp} \, \varpi_{rp}, \quad r = 1,2, \ldots ,R$$
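The block weighting and spatial smoothing of Eqs. (9)–(13) can be sketched end to end as follows; the `delta_B` lookup table stands in for the fitted derivative curve, and `alpha`, `sigma`, and `R` are illustrative parameter choices, not the paper's tuned values:

```python
import numpy as np

def fuse_blocks(images, delta_B, alpha=1.0, sigma=8.0, R=10):
    """Pixel-level fusion of N registered multi-intensity vein images
    (list of m x n float arrays). Each image is split into R column
    blocks; each block gets a quality weight S = exp(alpha * dB) from
    the derivative lookup table delta_B (256 entries), smoothed by a
    Gaussian centred on the block's middle column to avoid
    checkerboard edges, then the images are blended with the
    normalized joint weights."""
    m, n = images[0].shape
    edges = np.linspace(0, n, R + 1).astype(int)
    y = np.arange(n)
    num = np.zeros((m, n))
    den = np.zeros((m, n))
    for img in images:
        w = np.zeros(n)
        for r in range(R):
            lo, hi = edges[r], edges[r + 1]
            mean_gray = int(round(img[:, lo:hi].mean()))
            S = np.exp(alpha * delta_B[np.clip(mean_gray, 0, 255)])
            yc = 0.5 * (lo + hi - 1)
            G = np.exp(-((y - yc) ** 2) / (2.0 * sigma ** 2))
            w += S * G                      # joint weight omega = G * S
        num += img * w[None, :]             # weighted sum over images
        den += w[None, :]
    return num / np.maximum(den, 1e-12)     # normalization as in Eq. (12)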

Examinations and discussions

A sample of infrared multi-light-intensity finger vein images is shown in Fig. 7; it was captured with a self-developed platform, shown in Fig. 8. The infrared light intensity depends on the duty cycle of the PWM signal that drives the infrared LED. The resulting transmitting model is shown in Fig. 9 and its differential curve in Fig. 10.
Fig. 7

The finger vein images captured through ten different light intensities. a Duty of PWM is 10%. b Duty of PWM is 20%. c Duty of PWM is 30%. d Duty of PWM is 40%. e Duty of PWM is 50%. f Duty of PWM is 60%. g Duty of PWM is 70%. h Duty of PWM is 80%. i Duty of PWM is 90%. j Duty of PWM is 100%

Fig. 8

The infrared multi-light-intensity finger’s vein image capturing platform

Fig. 9

The transmitting model curve of the samples

Fig. 10

The differential curve of Fig. 9

In the fusion step, three finger vein images, Fig. 7c–e, are selected for the weighted fusion [17, 18]. Each of them is divided into ten blocks by column, as shown in Fig. 11. According to the transmitting model curve, the most suitable blocks are blended into one finger vein image, shown in Fig. 12. The weight \(S_{rp}\) is calculated by Eq. (9) and shown in Fig. 13; the weight \(G_{rp}\) is calculated by Eq. (10) and shown in Fig. 14; and the joint weight \(\omega_{rp}\) is calculated by Eq. (11) and shown in Fig. 15. The fused finger vein image, blended by Eq. (13), is shown in Fig. 16.
Fig. 11

The three finger vein images are selected from Fig. 7, and each of them has been divided into 10 blocks

Fig. 12

The blending of one finger’s vein image from the most suitable blocks from Fig. 11a–c

Fig. 13

The weighting value of \(S_{rp}\) to Fig. 11

Fig. 14

The weighting value of \(G_{rp}\) to Fig. 11

Fig. 15

The weighting value of \(\omega_{rp}\) to Fig. 11

Fig. 16

The fused finger vein image of Fig. 11

Two other fusion methods are tested for performance comparison: discrete wavelet transform (DWT) fusion and contrast pyramid fusion; their flow charts are shown in Fig. 17. In DWT fusion, the source images are decomposed by the discrete wavelet transform and the coefficient with maximum magnitude is chosen at each position before the image is rebuilt. In contrast pyramid fusion, the source images are pyramid-decomposed by downsampling and the contrast is calculated at each pixel; the pyramid layer with the maximum contrast value is chosen before the pyramid image is rebuilt.
Fig. 17

The flow charts of DWT and contrast pyramid fusion. a Discrete wavelet transform fusion flow chart. b Contrast pyramid fusion flow chart
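For reference, a minimal single-level Haar version of the DWT max-coefficient fusion described above might look like the sketch below; the actual comparison presumably used a multi-level transform, and this sketch only illustrates the choose-max rule on the detail bands:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform of an even-sized array."""
    a = (x[0::2] + x[1::2]) / 2.0       # row averages
    d = (x[0::2] - x[1::2]) / 2.0       # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Inverse of haar2 (perfect reconstruction)."""
    a = np.empty((LL.shape[0], 2 * LL.shape[1]))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def dwt_max_fuse(img1, img2):
    """Fuse two images: average the approximation band, and keep,
    per coefficient, the larger-magnitude detail coefficient."""
    c1, c2 = haar2(img1), haar2(img2)
    LL = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(c1[1:], c2[1:])]
    return ihaar2(LL, *details)
```

The choose-max rule favors whichever source image has the stronger local structure, which is why DWT fusion tends to preserve edges but can mix exposure levels less smoothly than the block-weighted method.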

The fusion performance is evaluated with the following statistical measures [3]. The standard deviation of an image is defined in formula (14), where \(\mu\) is the mean value of the image \(I\) of size \(m \times n\) and \(\sigma\) is the standard deviation.
$$\begin{aligned} \mu &= \frac{1}{mn} \sum_{x = 1}^{m} \sum_{y = 1}^{n} I(x,y) \\ \sigma &= \sqrt{ \frac{1}{mn} \sum_{x = 1}^{m} \sum_{y = 1}^{n} \left( I(x,y) - \mu \right)^{2} } \end{aligned}$$
The Shannon information entropy of the image is defined in formula (15), where \(P(gray)\) is the probability of gray level \(gray\) in the image \(I\):
$$H(I) = - \sum_{gray = 0}^{255} P(gray) \log_{2} [P(gray)]$$
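Formulas (14) and (15) translate directly to numpy; the helper below assumes an 8-bit grayscale image and skips the zero-probability bins, whose contribution to the entropy is zero:

```python
import numpy as np

def image_stats(I):
    """Standard deviation (formula 14) and Shannon entropy (formula 15)
    of an 8-bit grayscale image I."""
    I = np.asarray(I, dtype=float)
    mu = I.mean()
    sigma = np.sqrt(((I - mu) ** 2).mean())
    hist = np.bincount(np.asarray(I, dtype=int).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                        # log2(0) terms contribute zero
    H = -np.sum(p * np.log2(p))
    return sigma, H
```

A constant image yields zero for both measures, while a high-contrast, evenly populated histogram drives both upward.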
The standard deviation and information entropy of the multi-light-intensity finger vein images, together with those of the image fused by the proposed method, are shown in Table 1 [3]. Although the standard deviation and information entropy of the fused image are lower than those of the source images of Fig. 11a–c, this indicates that the gray uniformity and consistency of the fused image are better. The gray contrast of the image in Fig. 11a is quite low, as it is nearly underexposed.
Table 1

The pixel level statistics of the multi-light-intensity images and proposed fused image


Standard deviation

Information entropy

Figure 11a



Figure 11b



Figure 11c



Proposed fuse image



DWT fuse image



Contrast pyramid image



The degree of dependence between the source images and the fused image can be measured by the fusion mutual information (FMI), calculated by formula (16):
$$FMI = \sum\limits_{i = 1}^{4} {MI(I_{i} ,I_{f} )}$$
In formula (16), \(MI(I_{i}, I_{f})\) is defined by formula (17), where \(h(I_{i}, I_{f})\) is the joint histogram of the source image \(I_{i}\) and the fused image \(I_{f}\).
$$MI(I_{i} ,I_{f} ) = \sum\limits_{x = 1}^{m} {\sum\limits_{y = 1}^{n} {h(I_{i} (x,y),I_{f} (x,y))} \cdot } \log_{2}\left (\frac{{h(I_{i} (x,y),I_{f} (x,y))}}{{h(I_{i} (x,y)) \cdot h(I_{f} (x,y))}}\right)$$
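A standard joint-histogram form of formula (17), summing over gray-level pairs rather than pixel positions, can be sketched as follows; the histograms are normalized to probabilities, which rescales the argument of the logarithm consistently:

```python
import numpy as np

def mutual_information(A, B, bins=256):
    """Mutual information between two 8-bit images from their joint
    histogram: MI = sum p(a,b) log2( p(a,b) / (p(a) p(b)) )."""
    jh, _, _ = np.histogram2d(A.ravel(), B.ravel(),
                              bins=bins, range=[[0, 256], [0, 256]])
    pxy = jh / jh.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of A
    py = pxy.sum(axis=0, keepdims=True)     # marginal of B
    nz = pxy > 0                            # zero cells contribute nothing
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

The FMI of formula (16) is then simply the sum of `mutual_information(src, fused)` over the source images.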
The fusion mutual information (MI) between each source image and the fused image is shown in Table 2 [3]. The MI between the three source images and the fused image is the sum of the MI of each source image with the fused image.
Table 2

The fusion mutual information between the source image and the fused images


Figure 11a

Figure 11b

Figure 11c

Figure 11a–c

Proposed fuse image





DWT fuse image





Contrast pyramid image





The information fused from the source images could be calculated as the fusion quality index (FQI), which could be calculated by Eq. (18).
$$FQI = \sum_{w \in W} c(w) \left( \sum_{i = 1}^{4} \lambda_{i} \, QI(I_{i}, I_{f} \mid w) \right),$$
where \(\lambda_{i}\) is computed over a window \(w\), which can be calculated by the formula (19):
$$\lambda_{i} = \sigma_{I_{i}}^{2} \Big/ \sum_{i = 1}^{4} \sigma_{I_{i}}^{2}$$
\(c\left( w \right)\) is a normalized version of \(C\left( w \right)\), which can be calculated by the formula (20):
$$C(w) = \max \left( \sigma_{I_{1}}^{2}, \sigma_{I_{2}}^{2}, \ldots, \sigma_{I_{4}}^{2} \right)$$

\(QI\left( {I_{i} ,I_{f} \left| w \right.} \right)\) is the quality index over a window for a given source image and fused image.
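Formulas (18)–(20) can be sketched as follows; the Wang–Bovik universal quality index is assumed for \(QI(\cdot \mid w)\), and the `eps` guards against zero-variance or zero-mean windows are implementation assumptions:

```python
import numpy as np

def uqi(x, y, eps=1e-12):
    """Universal quality index (Wang & Bovik) between two windows."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4.0 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2) + eps)

def fqi(sources, fused, win=8):
    """Fusion quality index: per window, weight each source's UQI with
    the fused image by its relative variance (lambda_i, formula 19),
    then average the windows with weights c(w) proportional to the
    maximum source variance (formula 20)."""
    m, n = fused.shape
    total, csum = 0.0, 0.0
    for i0 in range(0, m - win + 1, win):
        for j0 in range(0, n - win + 1, win):
            fw = fused[i0:i0 + win, j0:j0 + win]
            sws = [s[i0:i0 + win, j0:j0 + win] for s in sources]
            var = np.array([sw.var() for sw in sws])
            lam = var / max(var.sum(), 1e-12)     # lambda_i per window
            C = var.max()                         # C(w); c(w) = C / sum(C)
            total += C * sum(l * uqi(sw, fw) for l, sw in zip(lam, sws))
            csum += C
    return total / max(csum, 1e-12)
```

When the fused image equals a single source, every window's UQI is 1 and the index reaches its maximum, which makes the sanity check straightforward.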

In the test, the window size is 8 × 8. The FQI values are shown in Table 3 [3].
Table 3

The fusion quality index (FQI) values of the fused images

The fused mutual information

Proposed fused image

DWT fused image

Contrast pyramid image





To compare the fusion performance further, the structural similarity index measure (SSIM) is applied in this test. The results are shown in Table 4 [3].
Table 4

The SSIM values between the source images and the fused images


Figure 11a

Figure 11b

Figure 11c

Proposed fused image




DWT fuse image




Contrast pyramid image




The results in Tables 1, 2, 3 and 4 show that the proposed fusion method, based on column blocking of the image, is effective when applied to infrared multi-light-intensity finger vein images.

Conclusions and further works

This paper proposes an infrared finger-transmitting model that can be easily built from the observed data of multiple light-intensity images. The model provides a better approach to obtaining intact vein patterns from the vein biometric data captured as bioinformation. The features of the captured images are estimated and fused using the model's differential curve. The experiments show that this is an efficient and practical fusion method for finger vein images via the infrared transmitting model, and that it is suitable for infrared image fusion in biometric systems. Detailed applications and analyses of transmitting-model-based multi-light-intensity finger vein image fusion in big data environments will be addressed in future work.


Authors’ contributions

The authors’ contributions are summarized below. LC made substantial contributions to conception and design and was involved in drafting the manuscript. ZL and YW performed the acquisition, analysis, and interpretation of data. The critically important intellectual content of this manuscript was revised by HCC. All authors read and approved the final manuscript.


This study was funded in part by the Natural Science Foundation Project of CQ CSTC (cstc2011jjA40012), Foundation and Frontier Project of CQ CSTC (cstc2014jcyjA40006), and Campus Research Foundation of Chongqing University of Science and Technology (CK2011B09, CK2011B05). This work was also supported in part by Asia University, Taiwan, and China Medical University Hospital, China Medical University, Taiwan, under Grant ASIA-105-CMUH-04.

Competing interests

The authors declare that they have no competing interests.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

Chongqing University of Science and Technology
Dept. of Computer Science and Information Engineering, Asia University
Dept. of Medical Research, China Medical University Hospital, China Medical University


  1. Shin KY, Park YH, Nguyen DT (2014) Finger-vein image enhancement using a fuzzy-based fusion method with Gabor and Retinex filtering. Sensors 14(2):3095–3129
  2. Tistarelli M, Schouten B (2011) Biometrics in ambient intelligence. J Ambient Intell Human Comput 2(2):113–126
  3. Liukui C, Zuojin L, Ying W, Lixiao F (2014) A principal component analysis fusion method on infrared multi-light-intensity finger vein images. BWCCA, pp 281–286
  4. Kikuchi H, Nagai K, Ogata W, Nishigaki M (2010) Privacy-preserving similarity evaluation and application to remote biometrics authentication. Soft Comput 14(5):529–536
  5. Greene CS, Tan J, Ung M, Moore JH, Cheng C (2014) Big data bioinformatics. J Cell Physiol 229(12):1896–1900
  6. Ogiela MR, Ogiela L, Ogiela U (2015) Biometric methods for advanced strategic data sharing protocols. In: Barolli L, Palmieri F, Silva HDD, et al. (eds) 9th international conference on innovative mobile and internet services in ubiquitous computing (IMIS), Blumenau, pp 179–183
  7. Ogiela MR, Ogiela U, Ogiela L (2012) Secure information sharing using personal biometric characteristics. In: Kim TH, Kang JJ, Grosky WI, et al. (eds) 4th international mega-conference on future generation information technology (FGIT 2012), Korea Woman Train Ctr, Kangwondo, South Korea, Dec 16–19, 2012. Communications in computer and information science, vol 353, pp 369–373
  8. Ogiela L, Ogiela MR (2016) Bio-inspired cryptographic techniques in information management applications. In: Barolli L, Takizawa M, Enokido T, et al. (eds) IEEE 30th international conference on advanced information networking and applications (IEEE AINA), Switzerland, Mar 23–25, 2016, pp 1059–1063
  9. Chen HC, Kuo SS, Sun SC, Chang CH (2016) A distinguishing arterial pulse waves approach by using image processing and feature extraction technique. J Med Syst 40:215. doi:10.1007/s10916-016-0568-4
  10. Chen W, Er MJ, Wu S (2006) Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain. IEEE Trans Syst Man Cybern B (Cybernetics) 36(2):458–466
  11. Wu X, Zhu X, Wu GQ, Ding W (2014) Data mining with big data. IEEE Trans Knowl Data Eng 26(1):97–107
  12. Urbach R (1969) The biologic effects of ultraviolet radiation. Pergamon Press, New York
  13. Chen LK, Li ZJ, Wu Y, Xiang Y (2013) Dynamic range extension of finger vein images based on infrared multi-light-intensity vascular imaging. MEIMEI2013, ChongQing, vol 427–429, pp 1832–1835
  14. Jacobs K, Loscos C, Ward G (2008) Automatic high-dynamic range image generation for dynamic scenes. IEEE Comput Gr Appl 28(2):84–93
  15. Debevec PE, Malik J (1997) Recovering high dynamic range radiance maps from photographs. In: Whitted T, Mones-Hattal B, Owen SG (eds) Proc. of the ACM SIGGRAPH. ACM Press, New York, pp 369–378
  16. Rovid A, Hashimoto T, Varlaki P (2007) Improved high dynamic range image reproduction method. In: Fodor J, Prostean O (eds) Proc. of the 4th Int’l Symp. on applied computational intelligence and informatics. IEEE Computer Society, Washington, pp 203–207
  17. Yang J, Shi Y (2014) Towards finger-vein image restoration and enhancement for finger-vein recognition. Inf Sci 268:33–52
  18. Zhang J, Dai X, Sun QD, Wang BP (2011) Directly fusion method for combining variable exposure value images (in Chinese). J Software 22(4):813–825
  19. Delpy DT, Cope M (1997) Quantification in tissue near-infrared spectroscopy. Philos Trans R Soc B Biol Sci 352:649–659


© The Author(s) 2017