This model is extended and modified from Ref. [3]. The basic steps from bioinformation to biometric data are described in “Basic works from bioinformation to the biometric data” section, and the single infrared transmitting model is described in “A single infrared transmitting model” section.

### Basic works from bioinformation to the biometric data

Applications of biometric data include personal identification and disease diagnosis. Figure 1 illustrates the system architecture from bioinformation to biometric data for a single infrared transmitting model in a biometric system. Clearly, the capturing, digitizing and normalizing of bioinformation must be effective so that the complete pattern or texture information is recorded with uniform gray distribution and adequate contrast before use. This paper presents a single transmitting model of finger vein imaging in a biometric system and uses it to fuse multi-light-intensity finger vein images into a single image, which integrates the vein pattern information of each source image and preserves the complete vein pattern.

### A single infrared transmitting model

This subsection describes the single infrared transmitting model. It is common to transmit near-infrared (NIR) light through the finger to achieve angiogram imaging. Because the oxyhemoglobin (HbO) content of venous blood is far higher than that of arterial blood and of other tissue such as fat and muscle, a wavelength at which the transmitted light is strongly absorbed should be chosen. Thus the 760–1100 nm band is suitable for angiogram imaging, judging from the absorption spectra of water, oxyhemoglobin (HbO) and deoxyhemoglobin (Hb) shown in Fig. 2. Owing to this higher absorption by HbO, the vein-pattern regions appear darker than the surrounding regions after the NIR light transmits the finger. This technology is widely used in vascular vein imaging of the breast and brain.

The optical properties of tissue have been modeled with photon diffusion theory. The epidermis (the outermost layer of skin) accounts for only 6% of the scattering and can be regarded as a primarily absorptive medium. Therefore, a simplified model of the reflectance of blood and tissue considers only the reflectance from the scattering tissue beneath the epidermis [12]. The skin is assumed to be a semi-infinite homogeneous medium under uniform, diffusive illumination. A photon has a relatively long residence time, which allows it to perform a random walk within the medium. The photon diffusion depends on the absorption and scattering properties of the skin, whose penetration depth at different wavelengths is shown in Fig. 3.

Considering all of these factors, namely the tissue absorption (water, Hb and HbO) in the vein and the depth of penetration, an infrared wavelength of about 850 nm is used for finger vein imaging in practice.

Because finger thickness is a nonlinear variable, a single fixed light intensity is rarely adequate for vein imaging at infrared 850 nm, so overexposure and underexposure often appear in infrared finger vein images. These over/under-exposed areas cannot be enhanced, which causes loss of vein pattern in the biometric data extraction. An infrared multi-light-intensity finger vein imaging technology was used in [13] to solve this problem by extending the dynamic range of infrared vein imaging [14]. The complementary vein information must then be fused in the next processing step. This paper presents a calculation method for the infrared finger vein transmitting model based on multi-light-intensity imaging. The model expresses the monotonically increasing nonlinear relationship between light intensity and pixel gray value; it can be built by a genetic algorithm and used to estimate imaging quality in the pixel-level fusion of infrared multi-light-intensity finger vein images.

The infrared finger vein transmitting model [3] is defined as

$$B = f(X)$$

(1)

where \(X\) is the irradiance of the infrared light transmitting the finger and \(B\) is the pixel gray value; generally, the gray level of a pixel is 8 bits. The transmitting model function [3] is written explicitly as

$$\left\{ \begin{array}{lll} {B_{\min} = 0,} & {\text{if}} & {X \le X_{\min}} \\ {B = f(X),} & {\text{if}} & {X_{\min} < X < X_{\max}} \\ {B_{\max} = 255,} & {\text{if}} & {X_{\max} \le X} \\ \end{array} \right.$$

(2)
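The piecewise behavior of Eq. (2) can be sketched in a few lines. The response `f` used below is a hypothetical gamma-like stand-in for the true, unknown monotone response, chosen purely for illustration:

```python
import numpy as np

def transmit_model(X, f, X_min, X_max):
    """Piecewise transmitting model of Eq. (2): the gray value saturates
    at 0 below X_min and at 255 above X_max, and follows the monotone
    response f in between."""
    X = np.asarray(X, dtype=float)
    return np.where(X <= X_min, 0.0,
                    np.where(X >= X_max, 255.0, f(X)))

# Assumed gamma-like response on (X_min, X_max), for illustration only
X_min, X_max = 1.0, 100.0
f = lambda X: 255.0 * np.clip((X - X_min) / (X_max - X_min), 0.0, 1.0) ** 0.5
B = transmit_model([0.5, 50.0, 200.0], f, X_min, X_max)  # [0., ~179.4, 255.]
```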

Assume there are \(N\) vein images captured under increasing light intensities \(X_{p}, p = 1, \ldots ,N\). The size of each image is \(m \times n\); let \(K = m \times n\). The *q*th pixel of the *p*th light-intensity image is denoted \(B_{pq}\), and the set \(\left\{ {B_{pq} } \right\},p = 1, \ldots ,N{\text{ and }}q \in \{ 1, \ldots ,K\}\), represents the known observations. The goal is to determine the underlying light values, or irradiances, denoted \(X_{q}\), that gave rise to these observations. The \(N\) vein images must be properly registered at the pixel level, so that for a particular \(q\) the light value \(X_{q}\) contributes to \(B_{pq} ,p = 1, \ldots ,N\). In this work, a normalized cross-correlation function is used as the matching criterion to register the images to 1/2-pixel resolution [15].
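The registration step can be sketched as a brute-force search for the integer translation maximizing the normalized cross-correlation; the 1/2-pixel refinement of [15] is omitted here, and `max_shift` is an assumed search radius:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def register_shift(ref, img, max_shift=3):
    """Find the integer (dy, dx) shift of img that best matches ref,
    scoring NCC on the interior crop to avoid wrap-around borders."""
    best, best_score = (0, 0), -np.inf
    h, w = ref.shape
    s = max_shift
    for dy in range(-s, s + 1):
        for dx in range(-s, s + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            score = ncc(ref[s:h - s, s:w - s], shifted[s:h - s, s:w - s])
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```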

The model can be rewritten as:

$$B_{pq} = f_{q} (X_{pq} ), \quad p = 1, \ldots ,N,\; q \in \{ 1, \ldots ,K\}.$$

(3)

This means the transmitting model differs at each position *q*. Nevertheless, the shapes of the models are similar, which suggests an easy way to estimate the transmitting model of each pixel in the application.

Since \(f\) is a monotonic and invertible function, its inverse function can be denoted \(g\):

$$X_{pq} = g_{q} (B_{pq} ), \quad p = 1, \ldots ,N,\; q \in \{ 1, \ldots ,K\}.$$

(4)

It is necessary to recover the function \(g\) and the irradiances \(X_{p} ,p = 1, \ldots ,N\), which satisfy the set of equations arising from Eq. (4) in a least-squared-error sense. Recovering \(g\) only requires recovering the finite number of values that \(g\left( B \right)\) can take, since the domain of \(g\), the pixel brightness values, is finite. Letting \(B_{\min}\) and \(B_{\max}\) be the least and greatest pixel values (integers), \(q\) the number of pixel locations and \(N\) the number of photographs, we formulate the problem as finding the \(B_{\max} - B_{\min} + 1\) values of \(g\left( B \right)\) and the \(N\) values of \(X\) that minimize the following quadratic objective function [3]:

$$\xi = \sum\limits_{i = 1}^{N} {\sum\limits_{j = 1}^{q} {[g(B_{ij} ) - X_{i} ]^{2} } } + \lambda \sum\limits_{b = B_{\min} + 1}^{B_{\max} - 1} {[g''(b)]^{2} }$$

(5)

The first term ensures that the solution satisfies the set of equations arising from Eq. (4) in a least-squares sense. The second term is a smoothness term on the sum of squared values of the second derivative of \(g\), ensuring that \(g\) is smooth; in this discrete setting, the second derivative can be calculated by formula (6).

$$g''(b) = g(b + 1) + g(b - 1) - 2g(b)$$

(6)

This smoothness term is essential to the formulation in that it provides coupling between the values \(g\left( b \right)\) in the minimization. The scalar \(\lambda\) weights the smoothness term relative to the data-fitting term and should be chosen according to the amount of noise expected in the \(B_{ij}\) measurements.

Because \(\xi\) is quadratic in the \(X_{p}\) and the \(g\left( b \right)\) values, minimizing it is a straightforward linear least-squares problem. The overdetermined system of linear equations is robustly solved using the singular value decomposition (SVD). An intuitive explanation of the procedure may be found in “The infrared light transmission model of the finger” section and Fig. 2 of reference paper [15].
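The least-squares recovery of \(g\) can be sketched as follows. To keep this toy system well posed, the per-image intensities \(X_i\) are assumed to be known drive levels here, unlike the full formulation, which also recovers them; `np.linalg.lstsq` solves the overdetermined system via SVD:

```python
import numpy as np

def recover_g(B, X, lam=1.0):
    """Minimize the Eq. (5)-style objective for the inverse response g:
    data rows enforce g(B_ij) ~ X_i, smoothness rows enforce
    lam * g''(b) ~ 0 for b = 1..254.  B: (N, K) integer array of pixel
    values; X: (N,) known per-image light intensities (an assumption)."""
    N, K = B.shape
    n_g = 256                                  # unknowns g(0) .. g(255)
    A = np.zeros((N * K + 254, n_g))
    y = np.zeros(A.shape[0])
    r = 0
    for i in range(N):                         # data-fitting term
        for j in range(K):
            A[r, B[i, j]] = 1.0                # g(B_ij)
            y[r] = X[i]
            r += 1
    for b in range(1, 255):                    # smoothness term lam * g''(b)
        A[r, b - 1], A[r, b], A[r, b + 1] = lam, -2.0 * lam, lam
        r += 1
    g, *_ = np.linalg.lstsq(A, y, rcond=None)  # SVD-based solve
    return g
```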

In reference [16], the noise in the \(X_{p}\) is modeled as independent Gaussian random variables with variance \(\sigma^{2}\), and the joint probability density function can be written as:

$$P(X_{B} ) \propto \exp \left\{ { - \frac{1}{2}\sum\limits_{{p,q}} {w_{{pq}} (I_{{B_{{pq}} }} - X_{{pq}} )^{2} } } \right\}$$

(7)

A maximum-likelihood (ML) approach is taken to find the high-dynamic-range image values: the ML solution finds the values \(X_{q}\) that maximize the probability in Eq. (7). Maximizing Eq. (7) is equivalent to minimizing the negative of its natural logarithm, which leads to the following objective function to be minimized:

$$\xi (X) = \sum\limits_{p,q} {w_{pq} (I_{B_{pq}} - X_{pq} )^{2} }$$

(8)
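When all exposures at a pixel share one irradiance \(X_q\), the weighted quadratic of Eq. (8) is minimized in closed form by the weighted mean \(X_q = \sum_p w_{pq}\, g(B_{pq}) / \sum_p w_{pq}\). The sketch below uses a hat-shaped weight, an assumption borrowed from HDR imaging practice, to down-weight over- and under-exposed pixels:

```python
import numpy as np

def fuse_irradiance(B, g):
    """Fuse N multi-light-intensity images into one irradiance map as
    the per-pixel weighted mean of g(B_pq), the closed-form minimizer
    of the Eq. (8) quadratic for a common X_q.  B: (N, K) integer
    array; g: length-256 inverse-response lookup table."""
    B = np.asarray(B)
    w = np.minimum(B, 255 - B).astype(float) + 1e-6  # hat weights,
    gB = g[B]                                        # tiny at 0 and 255
    return (w * gB).sum(axis=0) / w.sum(axis=0)
```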

Even with the Gaussian simplifying approximation, the noise variances \(\sigma_{pq}^{2}\) are difficult to characterize accurately: detailed knowledge of the image capture process would be required, and the noise characterization would have to be repeated each time an image is captured on a different device.

Equation (8) can be minimized by setting the gradient of \(\xi \left( X \right)\) equal to zero. But if the \(X_{p}\) were unknown at each pixel, one could jointly estimate \(X_{p}\) and \(X_{q}\) by arbitrarily fixing one of the \(q\) positions and then iteratively optimizing Eq. (8) with respect to both \(X_{p}\) and \(X_{q}\). Without an analytic expression of the transmitting model, however, these estimates are difficult to solve for.

From the observed pixels, this paper estimates the transmitting model curve by sliding the sampled curve segments and blending them into one monotone increasing curve with a genetic algorithm. Once the blended curve is built or fitted, the curve for any other pixel can be redrawn from a few sample points. The recovered blended curve is shown in Fig. 4, and the complete mixed curve \(g\) can be used to obtain the transmitting model function \(f\), as shown in Fig. 5.
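The curve-blending step can be illustrated with a toy genetic algorithm. Encoding the curve as non-negative increments makes every candidate monotone increasing by construction; the operators and rates below are illustrative assumptions, not the authors' exact GA:

```python
import numpy as np

def fit_monotone_curve(ys, n_gen=200, pop_size=40, seed=42):
    """Toy GA fitting a monotone increasing curve to sampled points ys.
    An individual stores non-negative increments, so the decoded curve
    (their cumulative sum) is monotone by construction."""
    rng = np.random.default_rng(seed)
    n = len(ys)
    pop = rng.random((pop_size, n)) * (ys.max() / n)
    fitness = lambda ind: ((np.cumsum(ind) - ys) ** 2).sum()
    for _ in range(n_gen):
        order = np.argsort([fitness(ind) for ind in pop])
        elite = pop[order[: pop_size // 2]]             # truncation selection
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        cut = rng.integers(1, n, pop_size)[:, None]
        mask = np.arange(n)[None, :] < cut              # one-point crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        mut = rng.random(pop.shape) < 0.1               # mutation mask
        pop = np.abs(pop + mut * rng.normal(0.0, 0.05 * ys.max() / n, pop.shape))
        pop[0] = elite[0]                               # elitism
    return np.cumsum(min(pop, key=fitness))
```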