This section describes the proposed method for contouring objects. The term ‘context-aware’ [42] refers to context as the locations and identities of nearby people and objects, and changes to those objects.
Most previous definitions of context in the literature [43] state that context-awareness considers the who, where, when, and what of entities, and uses this information to determine why a situation is occurring. Here, our definition of context is:
“Context is any information that can be used to characterize the situation of an image, such as pixels, noise, and strong and weak edges in a medical image, that is considered relevant to the interactions between pixels, including the noise and the weak and strong edges themselves.”
In image processing, if a piece of information can be used to characterize the situation of a participant in an interaction, then that information is context. Contextual information can be stored in feature maps and is collected over a large part of the image. These maps can encode high-level semantic features or low-level image features; the low-level features include image gradients, texture descriptors, and shape descriptors [42, 44].
The proposed algorithm consists of three steps: preprocessing of the images, the Daubechies complex wavelet filter bank, and context-aware closed contours with boundary information. The goal of the second step is to detect the dominant edge points so that the resulting image is composed of textures separated by edges. We use the Daubechies complex wavelet transform for edge detection, since it can act as a local edge detector. The imaginary components of the complex wavelet coefficients represent strong edges, and a threshold parameter wipes out the weak edges; this also works as a structure-preserving noise-removal process. Since we need the coordinates of the edges after this process, we use contour lines, as they provide closed edge curves, which eases the computation in the wavelet domain. Here, we use B-Spline contour lines. The steps of the proposed method, shown in Fig. 1, are as follows:
Firstly, preprocessing of the images. The collected images are scale-normalized to 256 × 256 or 512 × 512 pixel dimensions in order to reduce complexity.
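The scale-normalization step above can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the helper name `normalize_scale` is hypothetical:

```python
import numpy as np
from scipy.ndimage import zoom

def normalize_scale(image, size=256):
    """Resize a 2-D image array to size x size pixels (hypothetical helper).

    Uses bilinear interpolation (order=1); the paper does not specify
    which resampling scheme is used.
    """
    zy = size / image.shape[0]
    zx = size / image.shape[1]
    return zoom(image, (zy, zx), order=1)

img = np.random.rand(300, 400)   # stand-in for a collected medical image
out = normalize_scale(img, 256)  # out has shape (256, 256)
```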
Secondly, the Daubechies complex wavelet filter bank. In the proposed method, the Daubechies decomposition proceeds through two main stages: reconstruction of the signal from the coefficients, and an energy formulation that defines the strong points.
Finally, context-aware closed contours with boundary information. Here, we use B-Spline contour lines, which cover the object.
Reconstruction of the signal from the coefficients
According to multiresolution analysis with tensor-product bases, an image f(x, y) is projected onto “approximation” spaces generated by the dyadic translations of the scaling functions φ(x) and φ(y) (at the resolution scale jmax of the original image). If we denote the complex projection coefficients by
$$ {c}_{x,y}^{j_{\max }}={h}_{x,y}^{j_{\max }}+i{g}_{x,y}^{j_{\max }} $$
(3.1)
then we can estimate \( {h}_{x,y}^{j_{\max }} \) and \( {g}_{x,y}^{j_{\max }} \) with the following steps of the iterative procedure:

1. Start from the usual approximation:
$$ {h}_{x,y}^{j_{\max }}=I\left(x,y\right) $$
(3.2)

2. Evaluate \( {h}_{x,y}^{j_{\max }+1} \) using a one-level synthesis operation with the real part of the inverse symmetric Daubechies wavelet kernel only.

3. Apply a one-level complex wavelet transform. The result is quite an accurate estimate of the real and imaginary parts of the projection coefficient \( {c}_{x,y}^{j_{\max }} \). In the first approximation,
$$ {h}_{x,y}^{j_{\max }}\cong I\left(x,y\right) $$
(3.3)
and \( {g}_{x,y}^{j_{\max }} \) is proportional to the Laplacian of f(x, y).
An N-level wavelet transform W can be represented as
$$ \left\{{c}_{x,y}^{j_{\max }}\right\}\overset{W}{\to}\left\{\begin{array}{cccc}{c}_{x,y}^{j_{\max }-N}, & {d}_{x,y}^{j_{\max }-N}, & \dots & {d}_{x,y}^{j_{\max }-1}\end{array}\right\} $$
(3.4)
where the quantities \( {d}_{x,y}^{j_{\max }-k} \) represent the sets of coefficients for the three wavelet sectors. The complex scaling wavelet coefficients \( {c}_{x,y}^{j_{\max }-N} \) result from the nested action of the complex low-pass filter.
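The structure of Eq. (3.4) — one coarse approximation plus three detail sectors per level — can be illustrated with a separable multilevel decomposition. The sketch below uses the real Haar (db1) filter pair as a stand-in, since the complex symmetric Daubechies filters of the paper are not a standard library component:

```python
import numpy as np

def haar_level(a):
    """One separable analysis level with the Haar (db1) filter pair.

    Returns the approximation plus the three detail sectors of Eq. (3.4):
    horizontal (cH), vertical (cV), and diagonal (cD) coefficients.
    """
    lo = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass filter
    hi = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass filter

    def filt_down(x, h):
        # Filter each row, then downsample by 2.
        return np.apply_along_axis(np.convolve, -1, x, h)[..., 1::2]

    L = filt_down(a, lo)
    H = filt_down(a, hi)
    cA = filt_down(L.T, lo).T  # low-low: approximation
    cH = filt_down(L.T, hi).T  # low-high
    cV = filt_down(H.T, lo).T  # high-low
    cD = filt_down(H.T, hi).T  # high-high
    return cA, (cH, cV, cD)

# N-level transform: nested low-pass actions produce c^{j_max - N},
# and each level contributes three detail sectors d^{j_max - k}.
img = np.random.rand(64, 64)
N = 3
cA, details = img, []
for _ in range(N):
    cA, d = haar_level(cA)
    details.append(d)
```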
To solve the snake problem numerically, we express its cubic Spline solution using the standard B-Spline expansion
$$ {s}^{*}(x)={\displaystyle \sum_{k\in Z}c(k)\,{\beta}^3\left(x-k\right)} $$
(3.5)
where c(k) are the B-Spline coefficients, and the generating function is the cubic B-Spline given by
$$ {\beta}^3(x)=\left\{\begin{array}{ll}2/3+{\left|x\right|}^3/2-{x}^2, & 0\le \left|x\right|<1\\ {\left(2-\left|x\right|\right)}^3/6, & 1\le \left|x\right|<2\\ 0, & 2\le \left|x\right|\end{array}\right. $$
(3.6)
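The piecewise definition of Eq. (3.6) can be evaluated directly; the following sketch also checks the partition-of-unity property of B-Splines (the integer shifts sum to one):

```python
import numpy as np

def beta3(x):
    """Cubic B-spline beta^3(x) of Eq. (3.6), evaluated elementwise."""
    x = np.abs(np.asarray(x, dtype=float))
    return np.where(x < 1, 2.0 / 3.0 + x**3 / 2.0 - x**2,
                    np.where(x < 2, (2.0 - x)**3 / 6.0, 0.0))

# beta3(0) = 2/3, beta3(1) = 1/6, and the support ends at |x| = 2.
vals = beta3(np.array([-1.0, 0.0, 1.0, 2.0]))
```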
Using the basic convolution and differentiation rules for Splines [45], we obtain the explicit formula
$$ \xi (s)={\displaystyle \sum_{k\in Z}V\left(k,\left({b}_1^3*c\right)(k)\right)+\lambda {\displaystyle \sum_{k\in Z}\left({b}_1^3*{d}^{(2)}*c\right)(k)\left({d}^{(2)}*c\right)(k)}} $$
(3.7)
where * denotes the discrete convolution operator, and the kernels \( {b}_1^3 \) (the discrete cubic B-Spline) and \( {d}^{(2)} \) (the second difference) are defined by their z-transforms as follows [45]:
$$ {B}_1^3(z)=\left(z+4+{z}^{-1}\right)/6\kern1em \text{and}\kern1em {D}^{(2)}(z)=z-2+{z}^{-1} $$
(3.8)
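The z-transforms of Eq. (3.8) correspond to short symmetric finite impulse responses; a minimal sketch of the two kernels, with a sanity check that the second difference annihilates a linear ramp:

```python
import numpy as np

# Impulse responses of Eq. (3.8):
# B1^3(z) = (z + 4 + z^{-1}) / 6  and  D^{(2)}(z) = z - 2 + z^{-1}.
b1_3 = np.array([1.0, 4.0, 1.0]) / 6.0   # discrete cubic B-Spline
d2 = np.array([1.0, -2.0, 1.0])          # second-difference kernel

# b1_3 is exactly the cubic B-Spline of Eq. (3.6) sampled at -1, 0, 1,
# and a second difference of a linear ramp is identically zero.
ramp = np.arange(8.0)
curv = np.convolve(ramp, d2, mode='valid')
```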
We have now replaced the integral in the second term by a sum, which is much more computationally tractable. The task is then to minimize Eq. (3.7), which is typically achieved by differentiating with respect to c(k).
The Spline snake of Eq. (3.5) has as many degrees of freedom (B-Spline coefficients) as there are discrete contour points, i.e., one per integer grid point. In Eq. (3.7), if λ is sufficiently small, the Spline interpolates exactly. Conversely, larger values of λ stiffen the Spline and smooth out the discontinuities of the unconstrained contour curve f(x). It is also worth mentioning that λ can eventually be dropped by using a variable knot spacing, which still ensures smoothness.
The argument is essentially the same for more general curves in the plane, which are described using two Splines instead of one. Specifically, we represent a general B-Spline snake as follows:
$$ {s}_h(t)=\left({s}_x(t),{s}_y(t)\right)={\displaystyle \sum_{k\in Z}c(k)\,{\beta}^n\left(\frac{t}{h}-k\right)},\kern2em 0\le t\le {t}_{\max }=hN $$
(3.9)
where sx(t) and sy(t) are the x and y Spline components, respectively; both are parameterized by the curvilinear variable t. The exact value of tmax, which marks the end of the curve, is dictated by the desired resolution of the final discrete curve; by convention, we only render the curve points for integer t. This 2-D Spline snake is characterized by its vector sequence of B-Spline coefficients c(k) = (cx(k), cy(k)). Note that there are only N = tmax/h primary coefficient vectors, each corresponding to a Spline knot on the curve; the other coefficient values are deduced using prescribed boundary conditions. Clearly, specifying N automatically defines the knot spacing h and therefore the smoothness constraint on the curve.
Assuming a curve representation with M = tmax discrete points, we obtain h = M/N. The number of degrees of freedom of the Spline curve is reduced by the same factor, resulting in a smoothing and stiffening of the curve. Increasing the number N of node points reduces the knot spacing, and consequently reduces the smoothing effect on the curve.
Energy formulation
The external potential function is typically given by a smoothed version of the gradient of the input data [45, 46]
$$ g\left(x,y\right)=\sqrt{{\left(\frac{\partial }{\partial x}\varphi *f\right)}^2+{\left(\frac{\partial }{\partial y}\varphi *f\right)}^2} $$
(3.10)
where f denotes the input image and φ is a smoothing kernel, for example a Gaussian. Our cost function is the summation of the gradient (external force) over the path of the curve, sampled at M consecutive points
$$ \xi \left(c(k)\right)=-{\displaystyle \sum_{i=0}^{M-1}g\left(s(i)\right)} $$
(3.11)
For the cost function to be a good approximation of the curvilinear integral, we typically select M large enough that the curve points are connected (i.e., within a distance of one pixel of each other). However, the exact value of M is not critical; a less dense sampling may be used to increase optimization speed. The negative sign in Eq. (3.11) is used because we employ a minimization technique for the optimization.
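The external potential of Eq. (3.10) and the cost of Eq. (3.11) can be sketched as follows. This is an illustration under stated assumptions, not the authors' code: the helper names are hypothetical, `sigma = 2.0` is an arbitrary choice, and the curve is sampled by nearest-pixel lookup:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def external_potential(f, sigma=2.0):
    """Smoothed gradient magnitude of Eq. (3.10); gaussian_filter with a
    derivative order computes d/dx (phi * f) and d/dy (phi * f) directly."""
    gx = gaussian_filter(f, sigma, order=(0, 1))
    gy = gaussian_filter(f, sigma, order=(1, 0))
    return np.sqrt(gx ** 2 + gy ** 2)

def snake_energy(g, points):
    """Cost of Eq. (3.11): negative sum of g over the sampled curve points
    (nearest-pixel lookup; a finer interpolation could be substituted)."""
    cols = np.clip(np.round(points[:, 0]).astype(int), 0, g.shape[1] - 1)
    rows = np.clip(np.round(points[:, 1]).astype(int), 0, g.shape[0] - 1)
    return -float(np.sum(g[rows, cols]))

f = np.zeros((64, 64))
f[:, 32:] = 1.0                      # synthetic vertical step edge
g = external_potential(f)            # large values along the edge only
```

Minimizing this energy pulls the sampled curve points toward the ridge of high gradient, i.e., toward the edge.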
The problem consists in evaluating Eq. (3.9) at M discrete points. Such an evaluation is necessary for computing the energy function of Eq. (3.11) and for displaying the curve (where M may typically be chosen larger). Therefore, the continuous variable t is replaced by a discrete variable i, 0 ≤ i < M. The value of M and the number N of given node points directly determine the knot spacing h. The discrete B-Spline snake with N node points and M curve points is given by
$$ s(i)={\displaystyle \sum_{k\in Z}c(k)\,{\beta}^n\left(\frac{i}{h}-k\right)},\kern2em h=\frac{M}{N} $$
(3.12)
Below, we present two different ways of rendering the curve quickly.
(i) Interpolation: The most straightforward way is direct interpolation. The B-Spline function is evaluated at every position (i/h − k), multiplied by the corresponding B-Spline coefficient, and summed. B-Splines have compact support, and therefore the summation needs only be carried out over a subset of the coefficients. To interpolate the curve at a point i, only the coefficients c(k) with
$$ \left[\frac{i}{h}-\frac{n+1}{2}\right]\le k\le \left[\frac{i}{h}+\frac{n+1}{2}\right] $$
(3.13)
need to be included in the sum ([·] denotes integer truncation).
The main computational drawback of this procedure is that the function of Eq. (3.6) needs to be evaluated for each term in the sum.
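The direct interpolation of Eqs. (3.12) and (3.13) can be sketched as below. This is a minimal illustration, not the authors' code; it assumes a cubic Spline (n = 3) and periodic (closed-curve) coefficient wrapping, and the helper names are hypothetical:

```python
import numpy as np

def beta3(x):
    """Cubic B-spline of Eq. (3.6)."""
    x = np.abs(np.asarray(x, dtype=float))
    return np.where(x < 1, 2.0 / 3.0 + x**3 / 2.0 - x**2,
                    np.where(x < 2, (2.0 - x)**3 / 6.0, 0.0))

def render_curve(c, M, h):
    """Evaluate s(i) of Eq. (3.12) for i = 0..M-1 by direct interpolation.

    For each i, only the coefficients inside the support window of
    Eq. (3.13) contribute; indices wrap periodically (closed curve).
    """
    N = len(c)
    s = np.zeros((M, c.shape[1]))
    for i in range(M):
        t = i / h
        # k ranges over the compact support |t - k| < 2 of the cubic spline
        for k in range(int(np.floor(t)) - 2, int(np.floor(t)) + 3):
            s[i] += c[k % N] * beta3(t - k)
    return s

# Four coefficient vectors describing a small closed contour:
c = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
curve = render_curve(c, M=16, h=4.0)
```

Each rendered point is a convex combination of nearby coefficient vectors, so the curve stays inside the convex hull of the control polygon.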
(ii) Digital Filtering: The algorithm described above works for any combination of values of M and N. If we impose M such that h is an integer, a much more efficient algorithm is available. This requirement is in general easily met, since the exact value of M is not critical and can be chosen loosely. The simplification is based on a convolution property of B-Splines [45]: any Spline of degree n and integer knot spacing h can be represented as the convolution of n + 1 moving-average filters of size h followed by a Spline of unit knot spacing. Hence, the curve points can be obtained in three successive steps:

1. Upsampling of the B-Spline coefficients;

2. Averaging by (n + 1) moving-average filters of size h;

3. Filtering by a unit B-Spline kernel of degree n.
This algorithm can be implemented with as few as two multiplications and two additions per node point, plus 2n additions per computed contour coordinate. It is generally faster, at least by a factor of two, than the Oslo knot-insertion algorithm commonly used in computer graphics.
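The three filtering steps above can be sketched for the cubic case as follows. This is an illustrative sketch, not the authors' implementation: it assumes periodic (closed-curve) boundaries and uses FFT-based circular convolution for brevity rather than the recursive filters of an optimized version; the causal filters only shift the parameterization, which is immaterial for a closed curve:

```python
import numpy as np

def periodic_convolve(x, k):
    """Circular convolution via FFT (hypothetical helper)."""
    kp = np.zeros(len(x))
    kp[:len(k)] = k
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kp)))

def render_by_filtering(c, h):
    """Render one coordinate of a closed cubic B-Spline snake with
    integer knot spacing h by the three steps:
    (1) upsample the coefficients by h,
    (2) apply (n + 1) = 4 moving-average filters of size h,
    (3) filter with the sampled unit cubic B-Spline kernel.
    The factor h at the end restores the correct amplitude after
    using normalized (sum-to-one) moving averages.
    """
    y = np.zeros(len(c) * h)
    y[::h] = c                             # step 1: upsampling
    box = np.ones(h) / h
    for _ in range(4):                     # step 2: moving averages
        y = periodic_convolve(y, box)
    b1 = np.array([1.0, 4.0, 1.0]) / 6.0   # sampled unit cubic B-Spline
    return periodic_convolve(y, b1) * h    # step 3
```

As a sanity check, constant coefficients must render to a constant curve, by the partition-of-unity property.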
Border conditions
Appropriate boundary conditions are necessary for the computation of Eq. (3.9) and Eq. (3.10) [45]. In the following, we distinguish the cases of a closed snake and an open snake.
(i) Closed Snake Curve: For a set of node points n(k), k = 0, 1, …, N − 1, we require that n(N) = n(0) and n(−1) = n(N − 1). The corresponding boundary conditions are periodic. The extended signal \( {n}_s(k) \) of infinite length can be described as
$$ {n}_s(k)=n\left(k \mod N\right) $$
(3.14)
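The periodic extension of Eq. (3.14) amounts to a modulo lookup; a minimal sketch with hypothetical example values:

```python
import numpy as np

# Periodic extension of Eq. (3.14): any index k maps to k mod N.
n = np.array([3.0, 1.0, 4.0, 1.0, 5.0])   # example node points, N = 5
N = len(n)

def n_s(k):
    """Extended signal n_s(k) = n(k mod N)."""
    return n[np.mod(k, N)]

# n_s(-1) wraps to n(4) and n_s(5) wraps to n(0).
```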
(ii) Open Snake Curve: Different choices can be implemented for the open snake, such as mirror or anti-mirror boundary conditions. In this application, the anti-mirror conditions with a pivot at the boundary value are the most suitable choice because they allow us to lock the end points of the curve.
These anti-mirror conditions are such that
$$ \left(n\left({k}_0+k\right)-n\left({k}_0\right)\right)=\left(n\left({k}_0\right)-n\left({k}_0-k\right)\right) $$
(3.15)
where k0 ∈ {0, N − 1}. Since the extended signal has a center of antisymmetry at the boundary value, this value is preserved exactly whenever the applied filter is symmetric, which is the case here. However, a new boundary value cannot be obtained as the lookup of an existing signal value, which makes the implementation slightly more complicated.
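Equation (3.15) rearranges to n(k0 + k) = 2 n(k0) − n(k0 − k), which gives a direct recipe for extending the node sequence past both end pivots. A minimal sketch (helper name and example values are hypothetical):

```python
import numpy as np

def antimirror_extend(n, pad):
    """Anti-mirror extension of Eq. (3.15) at both ends:
    n(k0 + k) = 2 n(k0) - n(k0 - k), with pivots k0 = 0 and k0 = N - 1.
    The pivot values themselves are preserved by symmetric filters,
    which locks the end points of an open snake.
    """
    left = 2 * n[0] - n[pad:0:-1]          # extension for k0 = 0
    right = 2 * n[-1] - n[-2:-pad - 2:-1]  # extension for k0 = N - 1
    return np.concatenate([left, n, right])

n = np.array([0.0, 1.0, 4.0])
ext = antimirror_extend(n, 2)
# ext == [-4., -1., 0., 1., 4., 7., 8.]
```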
From Fig. 2 and many other image tests, we observed that the proposed method accurately detects contours.