
Mapping-Based Image Diffusion

Journal of Mathematical Imaging and Vision

Abstract

In this work, we introduce a novel tensor-based functional for targeted image enhancement and denoising. Via explicit regularization, our formulation incorporates application-dependent and contextual information using first principles. Few works in the literature treat variational models that describe both application-dependent information and contextual knowledge of the denoising problem. We prove the existence of a minimizer and present results on tensor symmetry constraints, convexity, and the geometric interpretation of the proposed functional. We show that our framework excels in applications where nonlinear functions are present, such as gamma correction and targeted value range filtering. We also study general denoising performance, where we show results comparable to dedicated PDE-based state-of-the-art methods.


(Figures 1–16 omitted.)



Acknowledgments

We thank the reviewers for their helpful comments and suggestions, which have improved this work. This research has received funding from the Swedish Foundation for Strategic Research through the grant VPS and from the Swedish Research Council through grants for the projects energy models for computational cameras \((\hbox {EMC}^2)\) and Visualization adaptive Iterative Denoising of Images (VIDI), all within the Linnaeus environment CADICS and the excellence network ELLIIT. Support by the German Science Foundation and the Research Training Group (GRK 1653) is gratefully acknowledged by the first author.

Author information


Corresponding author

Correspondence to Freddie Åström.

Appendices

Appendix 1: Proof of Theorem 1

To prove Theorem 1, we compute the variational derivative of the functional (28). The first variation is given by

$$\begin{aligned} \delta R = \left. \frac{\partial }{\partial \varepsilon } R(u+\varepsilon v) \right| _{\varepsilon = 0}. \end{aligned}$$
(82)
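The definition (82) can be checked numerically on a toy discrete functional. The sketch below is a hypothetical 1-D example with \(m\) the identity and \(W = I\) (not the paper's functional): it compares a central finite difference in \(\varepsilon \) against the analytic first variation.

```python
import numpy as np

# Toy 1-D discretization of R(u) = sum |u'|^2 with m = id and W = I,
# for which the first variation is dR(u; v) = 2 * sum(u' * v').
rng = np.random.default_rng(0)
n = 64
u = rng.standard_normal(n)
v = rng.standard_normal(n)

def R(u):
    du = np.diff(u)            # forward differences approximate u'
    return np.sum(du ** 2)

def first_variation(u, v):
    return 2.0 * np.sum(np.diff(u) * np.diff(v))

# Central finite difference in epsilon approximates d/deps R(u + eps*v)|_{eps=0}
eps = 1e-6
fd = (R(u + eps * v) - R(u - eps * v)) / (2 * eps)
assert abs(fd - first_variation(u, v)) < 1e-4
```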

Now, we let \(u \mapsto u+\varepsilon v\) in R(u) and we get

$$\begin{aligned} R(u+\varepsilon v)&= \nabla m(u+\varepsilon v)^{\top }W(\nabla m(u+\varepsilon v)) \nabla m(u+\varepsilon v) \nonumber \\&= \underbrace{m'(u+\varepsilon v)^2\nabla (u+\varepsilon v)^{\top }}_{=A} \end{aligned}$$
(83a)
$$\begin{aligned}&\qquad \cdot \underbrace{W(\nabla m(u+\varepsilon v)) \nabla (u+\varepsilon v)}_{=B} , \end{aligned}$$
(83b)

then, from the product rule of differentiation, we need to consider the following terms

$$\begin{aligned} \frac{\partial }{\partial \varepsilon } R(u+\varepsilon v)&= \left( \frac{\partial }{\partial \varepsilon } A(u+\varepsilon v) \right) B \nonumber \\&\qquad + A \left( \frac{\partial }{\partial \varepsilon } B(u+\varepsilon v) \right) . \end{aligned}$$
(84)

To simplify the notation define

$$\begin{aligned} z = \begin{pmatrix} z_1 = m'(u+\varepsilon v)(u_1 + \varepsilon v_1) \\ \vdots \\ z_d = m'(u+\varepsilon v)(u_d + \varepsilon v_d) \end{pmatrix}, \end{aligned}$$
(85)

and note the relation

$$\begin{aligned} z|_{\varepsilon = 0} = \nabla m(u) = s. \end{aligned}$$
(86)

We first consider the differentiation of the B-component w.r.t. \(\varepsilon \), for which we apply the chain rule:

$$\begin{aligned} \frac{\partial W(z)}{\partial \varepsilon } = \frac{\partial W(z)}{\partial z_1}\frac{\partial z_1}{\partial \varepsilon } + \cdots + \frac{\partial W(z)}{\partial z_d}\frac{\partial z_d}{\partial \varepsilon }, \end{aligned}$$

then

$$\begin{aligned} \frac{\partial }{\partial \varepsilon } B(z)= & {} \frac{\partial }{\partial \varepsilon } \left( W(z)\nabla (u+\varepsilon v) \right) \nonumber \\= & {} \left( \frac{\partial }{\partial \varepsilon } W(z) \right) \nabla (u+\varepsilon v) + W(z)\nabla v, \end{aligned}$$
(87)

with

$$\begin{aligned} \frac{\partial }{\partial \varepsilon } W(z)&= \Big [W_{z_1}(z) \Big (m''(u+\varepsilon v)v(u_1+\varepsilon v_1) \nonumber \\&\qquad \qquad + m'(u + \varepsilon v) v_1 \Big ) \nonumber \\&\qquad \vdots \nonumber \\&\qquad + W_{z_d}(z) \Big (m''(u+\varepsilon v)v(u_d+\varepsilon v_d) \nonumber \\&\qquad \qquad + m'(u + \varepsilon v) v_d \Big ) \Big ] , \end{aligned}$$
(88)

evaluating the limit \(\varepsilon \rightarrow 0\) in (87) with (88) we get

$$\begin{aligned} \frac{\partial }{\partial \varepsilon } B(z)\bigg |_{\varepsilon = 0}&= \Big [W_{s_1}(s) \Big (m''(u)v u_1 + m'(u) v_1 \Big ) \nonumber \\&\qquad \vdots \nonumber \\&\quad + W_{s_d}(s) \Big (m''(u)v u_d + m'(u) v_d \Big ) \Big ] \nabla u \nonumber \\&\qquad + W(s)\nabla v . \end{aligned}$$
(89)

Next, we focus on the second product in (84). Then, with (89) and after a few rewrites, we obtain the following

$$\begin{aligned}&\left. A \left( \frac{\partial }{\partial \varepsilon } B(u+\varepsilon v) \right) \right| _{\varepsilon = 0}\nonumber \\&\quad = m'(u)^2\nabla u^{\top } \Big [W_{s_1}(s) \Big (m''(u)v u_1 + m'(u) v_1 \Big ) \nonumber \\&\quad \qquad \vdots \nonumber \\&\quad \quad + W_{s_d}(s) \Big (m''(u)v u_d + m'(u) v_d \Big ) \Big ]\nabla u \nonumber \\&\quad \quad + m'(u)^2\nabla u^{\top } W(s)\nabla v \end{aligned}$$
(90a)
$$\begin{aligned}&\quad = m'(u)^2\Big [\nabla u^{\top } W_{s_1}(s) \nabla u \Big (m''(u)v u_1 + m'(u) v_1 \Big ) \nonumber \\&\qquad \quad \vdots \nonumber \\&\quad \quad + \nabla u^{\top } W_{s_d}(s) \nabla u \Big (m''(u)v u_d + m'(u) v_d \Big ) \Big ] \nonumber \\&\quad \quad + m'(u)^2\nabla u^{\top } W(s)\nabla v \end{aligned}$$
(90b)
$$\begin{aligned}&\quad = m'(u)^2\Big \langle Q , \Big (m''(u)v \nabla u + m'(u) \nabla v \Big ) \Big \rangle \nonumber \\&\quad \quad + m'(u)^2 \Big \langle \nabla u, W(s)\nabla v \Big \rangle \end{aligned}$$
(90c)
$$\begin{aligned}&\quad = \underbrace{ \Big \langle m'(u)^2 m''(u)Q, \nabla u\Big \rangle }_{=G} v \nonumber \\&\quad \quad + \Big \langle \nabla v, \Big (\underbrace{ m'(u)^3 Q + m'(u)^2W(s)^{\top }}_{=H} \Big ) \nabla u \Big \rangle , \end{aligned}$$
(90d)

where we set

$$\begin{aligned} Q = \begin{pmatrix} \nabla u^{\top } W_{s_1}(s) \nabla u \\ \vdots \\ \nabla u^{\top } W_{s_d}(s) \nabla u \end{pmatrix} = _{\nabla u}{W_{{\varvec{s}}}({{\varvec{s}}})} \nabla u, \end{aligned}$$
(91)

and \( _{\nabla u}{W_{{\varvec{s}}}({{\varvec{s}}})}\) is defined in (26). The definitions G and H will be used in the application of Green’s formula (17).

Now we focus on the differentiation of the A-component w.r.t. \(\varepsilon \) in (83a), i.e., we set \(u \mapsto u+\varepsilon v\) and get

$$\begin{aligned}&\frac{\partial }{\partial \varepsilon } A(u+\varepsilon v) \nonumber \\&\quad = 2m'(u+\varepsilon v)m''(u+\varepsilon v) v \nabla (u+\varepsilon v)^{\top } \nonumber \\&\qquad + m'(u+\varepsilon v)^2 \nabla v^{\top } . \end{aligned}$$
(92)

With (92) we evaluate the limit \(\varepsilon \rightarrow 0\) of (83a) which results in

$$\begin{aligned}&\left. \left( \frac{\partial }{\partial \varepsilon } A(u+\varepsilon v) \right) B \right| _{\varepsilon = 0} \nonumber \\&\quad = \Big ( 2m'(u)m''(u) v \nabla u^{\top } + m'(u)^2 \nabla v^{\top } \Big ) W(\nabla m(u)) \nabla u \nonumber \\&\quad = \left( \underbrace{2m'(u)m''(u) \nabla u^{\top } W(\nabla m(u)) \nabla u}_{E} \right) v \nonumber \\&\qquad + \nabla v^{\top } \left( \underbrace{m'(u)^2 W(\nabla m(u))}_{F} \right) \nabla u , \end{aligned}$$
(93)

where the definitions E and F will be used in the application of Green’s theorem. Summing (90d) and (93) and making use of G, H, E, and F yields

$$\begin{aligned} \left. \frac{\partial }{\partial \varepsilon } R(u+\varepsilon v) \right| _{\varepsilon = 0}&= \int _\varOmega (G+E)v \; \hbox {d}{\varvec{x}} \nonumber \\&\quad + \int _\varOmega \nabla v^{\top } (H+F)\nabla u \; \hbox {d}{\varvec{x}}. \end{aligned}$$
(94)

The second integral of (94) is now of the form \(\nabla v^\top A(\nabla u)\), i.e., we can integrate it w.r.t. \(\nabla v\) by using Green’s formula (cf. (17)) and get

$$\begin{aligned}&\int _\varOmega \nabla v^{\top } (H+F)\nabla u \; {\hbox {d}{\varvec{x}}} \nonumber \\&\quad = \int _{\partial \varOmega } v ({\varvec{n}}\cdot (H+F)\nabla u) \;\hbox {d}S\nonumber \\&\qquad - \int _{\varOmega } v {\text {div}} \left( (H+F)\nabla u \right) \;{\hbox {d}{\varvec{x}}} , \end{aligned}$$

where the natural Neumann boundary condition is given by \({\varvec{n}}\cdot (H+F) = 0\). Substituting G, H, E, and F defined in (90d) and (93) into the above expression, and after rearranging the terms and dropping the parentheses for improved clarity, we get

$$\begin{aligned} \left\{ \begin{array}{ll} m'm'' \nabla u^{\top } \Big [ 2W + _{\nabla u}{W_{{\varvec{s}}}({{\varvec{s}}})} m' \Big ] \nabla u \\ - {\text {div}} \left( (m')^2 \Big ( W + W^{\top } + m' _{\nabla u}{W_{{\varvec{s}}}({{\varvec{s}}})} \Big )\nabla u \right) = 0 &{}\quad \hbox {in}\,\varOmega \\ n \cdot (F + H)\nabla u = 0 &{} \quad \hbox {on} \,\partial \varOmega \end{array} \right. \end{aligned}$$
(95)

The above E–L equation can be simplified by considering the following expansion

$$\begin{aligned}&{\text {div}} \left( (m')^2[2W+_{\nabla u}{W_{{\varvec{s}}}({{\varvec{s}}})} m']\nabla u \right) \nonumber \\&\qquad = 2m'm''\nabla u^{\top } [2W+_{\nabla u}{W_{{\varvec{s}}}({{\varvec{s}}})} m']\nabla u \nonumber \\&\quad \quad \quad + (m')^2{\text {div}} \left( (2W+_{\nabla u}{W_{{\varvec{s}}}({{\varvec{s}}})} m')\nabla u \right) . \end{aligned}$$
(96)

Substituting the first term of the right-hand side of (96) into the PDE of (95) we get:

$$\begin{aligned}&{\text {div}} \left( (m')^2[2W+_{\nabla u}{W_{{\varvec{s}}}({{\varvec{s}}})} m']\nabla u \right) \nonumber \\&\quad - (m')^2{\text {div}} \left( [2W+_{\nabla u}{W_{{\varvec{s}}}({{\varvec{s}}})} m']\nabla u \right) \nonumber \\&\quad - 2{\text {div}} \left( (m')^2 \Big [ W + W^{\top } + m' _{\nabla u}{W_{{\varvec{s}}}({{\varvec{s}}})} \Big ]\nabla u \right) = 0 . \end{aligned}$$
(97)

The final result is obtained after subtracting the last term from the first term in (97), which concludes the proof. \(\square \)
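For intuition, the divergence-form structure of the resulting E–L equation can be iterated explicitly in time. The sketch below is not the authors' scheme: the full mapping-based tensor is collapsed to a scalar Perona–Malik conductivity, and the step size tau and contrast parameter k are illustrative choices.

```python
import numpy as np

# Minimal explicit divergence-form update u <- u + tau * div(g * grad u),
# with a scalar edge-stopping conductivity g standing in for the tensor W.
def diffusion_step(u, tau=0.1, k=0.5):
    ux = np.gradient(u, axis=1)
    uy = np.gradient(u, axis=0)
    g = np.exp(-(ux**2 + uy**2) / k**2)   # small near edges, ~1 in flat areas
    return u + tau * (np.gradient(g * ux, axis=1) + np.gradient(g * uy, axis=0))

def total_variation(u):
    return np.sum(np.hypot(np.gradient(u, axis=1), np.gradient(u, axis=0)))

rng = np.random.default_rng(1)
u = np.zeros((32, 32))
u[8:24, 8:24] = 1.0                        # piecewise-constant square
u += 0.1 * rng.standard_normal(u.shape)    # additive Gaussian noise
tv_before = total_variation(u)
for _ in range(20):
    u = diffusion_step(u)
tv_after = total_variation(u)              # noise is smoothed away
```

A few explicit iterations reduce the total variation of the noisy image while the conductivity g slows diffusion across the square's edges.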

Appendix 2: Proof of Corollary 4

To derive the gradient energy tensor diffusion scheme, we set W and D as specified in (59). With this relation between W and D, we introduce \(E = 1/|\nabla m(u)|\) so that \(W = DE\). Before proceeding with the proof, we specify the components of GET (54):

$$\begin{aligned} a&= (\partial _x u_x)^2 + (\partial _x u_y)^2 - u_x (\partial _{xx}u_x + \partial _{xy}u_y) \end{aligned}$$
(98a)
$$\begin{aligned} b&= (\partial _{x}u_x)(\partial _x u_y) + (\partial _{y}u_x)(\partial _y u_y) \nonumber \\&\quad - \frac{1}{2}( u_x (\partial _{yx}u_x + \partial _{yy}u_y ) + u_y(\partial _{xx}u_x + \partial _{xy}u_y))\end{aligned}$$
(98b)
$$\begin{aligned} c&= (\partial _y u_y)^2 + (\partial _y u_x)^2 - u_y (\partial _{yx}u_x + \partial _{yy}u_y) . \end{aligned}$$
(98c)
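As a sanity check of (98a)–(98c), the components can be evaluated symbolically on a test function; for \(u = xy\) the tensor reduces to the identity. This is a small illustration, not part of the proof.

```python
import sympy as sp

# Evaluate the GET components (98a-c) for the test function u = x*y.
x, y = sp.symbols('x y')
u = x * y
ux, uy = sp.diff(u, x), sp.diff(u, y)

a = sp.diff(ux, x)**2 + sp.diff(uy, x)**2 \
    - ux * (sp.diff(ux, x, 2) + sp.diff(sp.diff(uy, x), y))
b = sp.diff(ux, x) * sp.diff(uy, x) + sp.diff(ux, y) * sp.diff(uy, y) \
    - sp.Rational(1, 2) * (ux * (sp.diff(sp.diff(ux, y), x) + sp.diff(uy, y, 2))
                           + uy * (sp.diff(ux, x, 2) + sp.diff(sp.diff(uy, x), y)))
c = sp.diff(uy, y)**2 + sp.diff(ux, y)**2 \
    - uy * (sp.diff(sp.diff(ux, y), x) + sp.diff(uy, y, 2))

assert (a, b, c) == (1, 0, 1)   # GET(x*y) is the identity tensor
```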

Now we need to compute \(_{\nabla u}{W_{\nabla u}(\nabla u)}\) in (27) with \({\varvec{s}} = \nabla u\), that is, we have

$$\begin{aligned} _{\nabla u}{W_{\nabla u}(\nabla u)} = \begin{pmatrix} \nabla u^{\top } E_{u_x}D \\ \nabla u^{\top } E_{u_y}D \end{pmatrix} + E \begin{pmatrix} \nabla u^{\top } D_{u_x} \\ \nabla u^{\top } D_{u_y} \end{pmatrix} . \end{aligned}$$
(99)

The derivatives of E read

$$\begin{aligned} E_{u_x}&= \partial _{u_x} \frac{1}{|\nabla u|} = - \frac{1}{|\nabla u|^3} u_x, \end{aligned}$$
(100a)
$$\begin{aligned} E_{u_y}&= \partial _{u_y} \frac{1}{|\nabla u|} = - \frac{1}{|\nabla u|^3} u_y. \end{aligned}$$
(100b)
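These derivatives are easy to verify symbolically; e.g., for (100a):

```python
import sympy as sp

# Check that the partial derivative of E = 1/|grad u| w.r.t. u_x
# equals -u_x / |grad u|^3, as stated in (100a).
ux, uy = sp.symbols('u_x u_y', real=True)
E = 1 / sp.sqrt(ux**2 + uy**2)
expected = -ux / (ux**2 + uy**2)**sp.Rational(3, 2)
assert sp.simplify(sp.diff(E, ux) - expected) == 0
```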

The tensor \(_{\nabla u}{W_{\nabla u}(\nabla u)}\) in the E–L scheme can now be expressed as (62). \(\square \)

Appendix 3: Eigendecomposition

This appendix gives the eigendecomposition of D and computes \(D_{u_x}\) and \(D_{u_y}\). We have

$$\begin{aligned} D(\nabla u) = U \varLambda U^{\top } = \begin{pmatrix} v^2_1 &{} v_1v_2 \\ v_1v_2 &{} v^2_2 \end{pmatrix}\lambda _1 + \begin{pmatrix} w^2_1 &{} w_1w_2 \\ w_1w_2 &{} w^2_2 \end{pmatrix}\lambda _2, \end{aligned}$$

where \(\lambda _{1,2} = \exp (-|\mu _{1,2}|/k^2)\) and \(\mu _1 = \frac{1}{2}( \mathrm {tr} \left( GET \right) + \alpha )\) and \(\mu _2 = \frac{1}{2}(\mathrm {tr} \left( GET \right) - \alpha )\) are the eigenvalues of GET with \(\alpha = \sqrt{(a-c)^2 + 4b^2}\). We obtain the eigenvectors of GET by solving \(GET \tilde{v} = \mu _1 \tilde{v}\), i.e., we have the following equation system

$$\begin{aligned} \left\{ \begin{array}{l} (a - c - \alpha )\tilde{v}_1 + 2b\tilde{v}_2 = 0 \\ 2b\tilde{v}_1 + (c-a-\alpha )\tilde{v}_2 = 0. \end{array} \right. \end{aligned}$$
(101)
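The closed-form eigenvalues and the system (101) can be verified numerically on a generic symmetric \(2\times 2\) tensor (the values of a, b, c below are illustrative):

```python
import numpy as np

# Check mu_{1,2} = (tr +/- alpha)/2 with alpha = sqrt((a-c)^2 + 4 b^2)
# against a standard eigensolver, and check the residual of system (101).
a, b, c = 2.0, -0.7, 0.5
GET = np.array([[a, b], [b, c]])
alpha = np.sqrt((a - c)**2 + 4 * b**2)
mu1 = 0.5 * ((a + c) + alpha)
mu2 = 0.5 * ((a + c) - alpha)
assert np.allclose(np.sort([mu2, mu1]), np.linalg.eigvalsh(GET))

v = np.linalg.eigh(GET)[1][:, 1]   # eigenvector for the largest eigenvalue mu1
r1 = (a - c - alpha) * v[0] + 2 * b * v[1]
r2 = 2 * b * v[0] + (c - a - alpha) * v[1]
assert abs(r1) < 1e-12 and abs(r2) < 1e-12   # v solves (101)
```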

The orthonormal eigenvectors of GET are (67) and (68). Now, we focus on \(\partial _{u_x} D\) in (66). After expanding the derivatives of the eigenvectors we obtain

$$\begin{aligned} \partial _{u_x} \tilde{v}_1&= -(\partial _{yx}u_x + \partial _{yy}u_y) , \end{aligned}$$
(102a)
$$\begin{aligned} \partial _{u_x} \tilde{v}_2&= \partial _{xx}u_x + \partial _{xy}u_y + \partial _{u_x} \alpha , \end{aligned}$$
(102b)
$$\begin{aligned} \partial _{u_x} \tilde{w}_1&= \partial _{u_x} \tilde{v}_1, \end{aligned}$$
(102c)
$$\begin{aligned} \partial _{u_x} \tilde{w}_2&= \partial _{xx}u_x + \partial _{xy}u_y - \partial _{u_x} \alpha , \end{aligned}$$
(102d)

and

$$\begin{aligned} \partial _{u_x} |\tilde{v}|&= |\tilde{v}|^{-1}(\tilde{v}_1(\partial _{u_x}\tilde{v}_1)+\tilde{v}_2(\partial _{u_x}\tilde{v}_2)), \end{aligned}$$
(103a)
$$\begin{aligned} \partial _{u_x} |\tilde{w}|&= |\tilde{w}|^{-1}(\tilde{w}_1(\partial _{u_x}\tilde{w}_1)+\tilde{w}_2(\partial _{u_x}\tilde{w}_2)). \end{aligned}$$
(103b)

The derivatives of the tensor’s eigenvalues are:

$$\begin{aligned} \partial _{u_x} \lambda _{1,2}&= \partial _{u_x} \exp (-|\mu _{1,2}|/k^2) \nonumber \\&= - \frac{\hbox {sgn}(\mu _{1,2})}{k^2}\Big (\partial _{u_x}\mathrm {tr} \left( GET \right) \pm \partial _{u_x}\alpha \Big )\lambda _{1,2}, \end{aligned}$$
(104)

where

$$\begin{aligned} \partial _{u_x}\alpha = \alpha ^{-1} \Big ( \mathrm {tr} \left( GET \right) \partial _{u_x}\mathrm {tr} \left( GET \right) - 2\partial _{u_x} \det (GET) \Big ) \end{aligned}$$
(105)

and

$$\begin{aligned} \partial _{u_x}\mathrm {tr} \left( GET \right)&= -(\partial _{xx}u_x + \partial _{xy}u_y), \end{aligned}$$
(106a)
$$\begin{aligned} \partial _{u_x} \det (GET)&= \partial _{u_x} (ac - b^2) \nonumber \\&= -(\partial _{xx}u_x + \partial _{xy}u_y)c \nonumber \\&\quad + b(\partial _{yx}u_x + \partial _{yy}u_y). \end{aligned}$$
(106b)
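Since \(\alpha ^2 = \mathrm {tr}(GET)^2 - 4\det (GET)\) for a symmetric tensor, the identity (105) follows by differentiation; a symbolic check, with components depending on a stand-in parameter t in place of \(u_x\):

```python
import sympy as sp

# Verify alpha * d(alpha) = tr * d(tr) - 2 * d(det), i.e. (105), using
# alpha^2 = tr^2 - 4*det = (A - C)^2 + 4*B^2 for a symmetric 2x2 tensor.
t = sp.symbols('t')
A, B, C = (sp.Function(n)(t) for n in 'ABC')
tr, det = A + C, A * C - B**2
alpha_sq = (A - C)**2 + 4 * B**2
assert sp.simplify(tr**2 - 4 * det - alpha_sq) == 0

lhs = sp.diff(alpha_sq, t) / 2                  # equals alpha * d(alpha)/dt
rhs = tr * sp.diff(tr, t) - 2 * sp.diff(det, t)
assert sp.simplify(lhs - rhs) == 0
```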

In the case \(b=0\), the difference from the above derivation is that \(\det (GET) = ac\); thus \(\partial _{u_x} \alpha \) and \(\partial _{u_x} {\mathrm {det}}(GET)\) should be modified accordingly. The component \(\nabla u^{\top } D_{u_y}\) follows the same line of calculations and is therefore omitted here.


About this article


Cite this article

Åström, F., Felsberg, M. & Baravdish, G. Mapping-Based Image Diffusion. J Math Imaging Vis 57, 293–323 (2017). https://doi.org/10.1007/s10851-016-0672-6
