US20120106357A1 - Variable step-size least mean square method for estimation in adaptive networks - Google Patents
Variable step-size least mean square method for estimation in adaptive networks
- Publication number
- US20120106357A1 (application US12/913,482)
- Authority
- US
- United States
- Prior art keywords
- node
- calculating
- integer
- size
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W84/00—Network topologies
- H04W84/18—Self-organising networks, e.g. ad-hoc networks or sensor networks
Description
- 1. Field of the Invention
- The present invention relates generally to sensor networks and, particularly, to wireless sensor networks in which a plurality of wireless sensors are spread over a geographical location. More particularly, the invention relates to estimation in such a network utilizing a variable step-size least mean square method for estimation.
- 2. Description of the Related Art
- In reference to wireless sensor networks, the term “diffusion” is used to identify the type of cooperation between sensor nodes in the wireless sensor network. The data that is to be shared by any sensor is diffused into the network in order to be captured by its respective neighbors that are involved in cooperation.
- Wireless sensor networks include a plurality of wireless sensors spread over a geographic area. The sensors take readings of some specific data and, if they have the capability, perform some signal processing tasks before the data is collected from the sensors for more detailed and thorough processing.
- A “fusion-center based” wireless network has sensors transmitting all the data to a fixed center, where all the processing takes place. An “ad hoc” network is devoid of such a center and the processing is performed at the sensors themselves, with some cooperation between nearby neighbors of the respective sensor nodes.
- Recently, several algorithms have been developed to exploit this nature of the sensor nodes and cooperation schemes have been formalized to improve estimation in sensor networks.
- Least mean squares (LMS) algorithms are a class of adaptive filters used to mimic a desired filter by finding the filter coefficients that produce the least mean square of the error signal (i.e., the difference between the desired and the actual signal). The LMS algorithm is a stochastic gradient descent method, in that the filter is adapted based only on the error at the current time.
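As background, the fixed step-size LMS recursion described above can be sketched in a few lines. This is a minimal, self-contained illustration; the filter length, step-size, noise level, and iteration count are arbitrary choices for the sketch, not values from the disclosure:

```python
import random

def lms_identify(w_true, mu=0.05, iters=2000, seed=1):
    """Identify an unknown FIR filter w_true with plain fixed-step LMS.

    At each iteration: d(i) = u_i . w_true + noise, e(i) = d(i) - u_i . w,
    and w <- w + mu * e(i) * u_i (stochastic gradient descent on E[e^2]).
    """
    rng = random.Random(seed)
    M = len(w_true)
    w = [0.0] * M                                # filter estimate, zero init
    for _ in range(iters):
        u = [rng.gauss(0.0, 1.0) for _ in range(M)]           # regressor row
        d = sum(ui * wi for ui, wi in zip(u, w_true)) + rng.gauss(0.0, 0.01)
        e = d - sum(ui * wi for ui, wi in zip(u, w))          # a priori error
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]        # LMS update
    return w

w_hat = lms_identify([0.5, -0.3])
```

With a fixed μ, lowering the step-size reduces steady-state misadjustment but slows convergence; that trade-off is what motivates the variable step-size methods of this disclosure.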
-
FIG. 2 diagrammatically illustrates an exemplary adaptive network having N nodes, where the network has a predefined topology. In the following, boldface letters represent vectors and matrices, and non-bolded letters represent scalar quantities. Matrices are represented by capital letters, and lower-case letters represent vectors. The notation (·)^T stands for transposition of vectors and matrices, and expectation operations are denoted as E[·]. For each node k, the number of neighbors is given by N_k, including the node k itself, as shown in FIG. 2. At each iteration i, the output of the system at each node is given by: -
d_k(i) = u_{k,i} w^0 + ν_k(i), (1) - where u_{k,i} is a known regressor row vector of length M, w^0 is an unknown column vector of length M, and ν_k(i) represents noise. The output and regressor data are used to produce an estimate of the unknown vector, given by ψ_{k,i}.
- The adaptation can be performed using two different techniques. The first technique is the Incremental Least Mean Squares (ILMS) method, in which each node updates its own estimate at every iteration and then passes on its estimate to the next node. The estimate of the last node is taken as the final estimate of that iteration. The second technique is the Diffusion LMS (DLMS), where each node combines its own estimate with the estimates of its neighbors using some combination technique, and then the combined estimate is used for updating the node estimate. This method is referred to as Combine-then-Adapt (CTA) diffusion. It is also possible to first update the estimate using the estimate from the previous iteration and then combine the updates from all neighbors to form the final estimate for the iteration. This method is known as Adapt-then-Combine (ATC) diffusion. Simulation results show that ATC diffusion outperforms CTA diffusion.
- Using LMS, the ATC diffusion algorithm is given by:
-
f_{k,i} = y_{k,i−1} + μ_k u_{k,i}^T (d_k(i) − u_{k,i} y_{k,i−1}),
y_{k,i} = Σ_{l∈N_k} c_{lk} f_{l,i}, (2)
- where {c_{lk}}_{l∈N_k} are the combination weights for node k, which are fixed, {f_{l,i}}_{l∈N_k} are the local estimates of the nodes neighboring node k, and μ_k is the node step-size. - The conventional Diffusion Least Mean Square (DLMS) technique uses a fixed step-size, which is chosen as a trade-off between steady-state misadjustment and speed of convergence. Fast convergence and low steady-state misadjustment cannot both be achieved with this technique.
- Thus, a variable step-size least mean square method for estimation in adaptive networks solving the aforementioned problems is desired.
- The variable step-size least mean square method for estimation in adaptive networks uses a variable step-size to provide estimation for each node in the adaptive network, where the step-size at each node is determined by the error calculated for each node. This is in contrast to conventional least mean square algorithms used in adaptive filters and the like, in which the choice of step-size reflects a tradeoff between misadjustment and the speed of adaptation.
- In a first embodiment, a variable step-size incremental least mean square method for estimation in adaptive networks is given by the following steps: (a) establishing a network having N nodes, where N is an integer greater than one, and establishing a Hamiltonian cycle among the nodes such that each node is connected to two neighboring nodes, one from which it receives data and one to which it transmits data; (b) establishing an integer i and initially setting i=1; (c) calculating an output of the adaptive network at each node k as d_k(i)=u_{k,i}w^0+ν_k(i), where u_{k,i} represents a known regressor row vector of length M, w^0 represents an unknown column vector of length M, and ν_k(i) represents noise in the adaptive network, where M is an integer; (d) calculating an error value e_k(i) at each node k as e_k(i)=d_k(i)−u_{k,i}y_{k−1,i}, where y_{k,i} represents an estimate of an output vector for each node k at iteration i; (e) calculating a node step size μ_k for each node k as μ_k(i)=αμ_k(i−1)+γe²(i), where α and γ are unitless, selectable parameters; (f) calculating an estimate of an output vector y_{k,i} for each node k as y_{k,i}=y_{k−1,i}+μ_k(i)u_{k,i}^T(d_k(i)−u_{k,i}y_{k−1,i}); (g) if e_k(i) is greater than a selected error threshold, then setting i=i+1 and returning to step (c); otherwise, (h) defining a set of output vectors y_k for each node k, where y_k=y_{k,i}; and (i) storing the set of output vectors in computer readable memory.
- In an alternative embodiment, a variable step-size diffusion least mean square method for estimation in adaptive networks is given by the following steps: (a) establishing an adaptive network having N nodes, where N is an integer greater than one, and for each node k, a number of neighbors of node k is given by N_k, including the node k, where k is an integer between one and N; (b) establishing an integer i and initially setting i=1; (c) calculating an output of the adaptive network at each node k as d_k(i)=u_{k,i}w^0+ν_k(i), where u_{k,i} represents a known regressor row vector of length M, w^0 represents an unknown column vector of length M, and ν_k(i) represents noise in the adaptive network, where M is an integer; (d) calculating an error value e_k(i) at each node k as e_k(i)=d_k(i)−u_{k,i}y_{k,i−1}, where y_{k,i} represents an estimate of an output vector for each node k at iteration i; (e) calculating a node step size μ_k for each node k as μ_k(i)=αμ_k(i−1)+γe²(i), where α and γ are unitless, selectable parameters; (f) calculating a local estimate f_{k,i} for each node k as f_{k,i}=y_{k,i−1}+μ_k u_{k,i}^T(d_k(i)−u_{k,i}y_{k,i−1}); (g) calculating an estimate of an output vector y_{k,i} for each node k as
-
y_{k,i} = Σ_{l∈N_k} c_{lk} f_{l,i},
- where l is an integer and c_{lk} represents a fixed combination weight for node k; (h) if e_k(i) is greater than a selected error threshold, then setting i=i+1 and returning to step (c); otherwise, (i) defining a set of output vectors y_k for each node k, where y_k=y_{k,i}; and (j) storing the set of output vectors in computer readable memory.
- These and other features of the present invention will become readily apparent upon further review of the following specification.
-
FIG. 1 is a block diagram of a system for implementing a variable step-size least mean square method for estimation in adaptive networks according to the present invention. -
FIG. 2 is a diagram schematically illustrating an adaptive network with N nodes. -
FIG. 3 is a diagram schematically illustrating network topology for an exemplary simulation of the variable step-size least mean square method for estimation in adaptive networks according to the present invention. -
FIG. 4A is a graph of the signal-to-noise ratio as chosen for simulation at each node in the exemplary simulation of FIG. 3. -
FIG. 4B is a graph of simulated values of noise power at each node in the exemplary simulation of FIG. 3. -
FIGS. 5A and 5B are comparison plots illustrating mean square deviation as a function of time for two exemplary scenarios using the variable step-size least mean square method for estimation in adaptive networks according to the present invention. -
FIGS. 6A and 6B are comparison plots illustrating steady-state mean square deviations at each node for the exemplary scenarios of FIGS. 5A and 5B, respectively. - Similar reference characters denote corresponding features consistently throughout the attached drawings.
- The variable step-size least mean square method for estimation in adaptive networks uses a variable step-size to provide estimation for each node in the adaptive network, where the step-size at each node is determined by the error calculated for each node. This is in contrast to conventional least mean square algorithms used in adaptive filters and the like, in which the choice of step-size reflects a tradeoff between misadjustment and the speed of adaptation. An example of a conventional LMS algorithm is shown in chapter ten of A. H. Sayed, Adaptive Filters, New York: Wiley, 2008, pgs. 163-166. An example of a variable step-size LMS algorithm is shown in R. H. Kwong and E. W. Johnston, “A Variable Step Size LMS Algorithm”, IEEE Transactions on Signal Processing, Vol. 40, No. 7, July 1992, each of which is herein incorporated by reference in its entirety.
- In a first, incremental variable step-size least mean square method, the estimate of the unknown vector yk,i is given by:
-
y_{k,i} = y_{k−1,i} + μ_k(i) u_{k,i}^T (d_k(i) − u_{k,i} y_{k−1,i}). (3) - Every node updates its estimate based on the updated estimate coming from the previous node. The error is given by:
-
e_k(i) = d_k(i) − u_{k,i} y_{k−1,i}. (4) - The node step size μ_k is given by:
-
μ_k(i) = αμ_k(i−1) + γe²(i), (5) - where α and γ are controlling parameters. Updating equation (3) with the step-size given by equation (5) results in the variable step-size incremental least mean square method defined by the following recursion:
-
y_{k,i} = y_{k−1,i} + μ_k(i) u_{k,i}^T (d_k(i) − u_{k,i} y_{k−1,i}). (6) - Thus, the variable step-size incremental least mean square method is given by the following steps: (a) establishing an adaptive network having N nodes, where N is an integer greater than one, and for each node k, a number of neighbors of node k is given by N_k, including the node k, where k is an integer between one and N, and establishing a Hamiltonian cycle among the nodes such that each node is connected to two neighboring nodes, each node receiving data from one of the neighboring nodes and transmitting data to the other neighboring node; (b) establishing an integer i and initially setting i=1; (c) calculating an output of the adaptive network at each node k as d_k(i)=u_{k,i}w^0+ν_k(i), where u_{k,i} represents a known regressor row vector of length M, w^0 represents an unknown column vector of length M, and ν_k(i) represents noise in the adaptive network, where M is an integer; (d) calculating an error value e_k(i) at each node k as e_k(i)=d_k(i)−u_{k,i}y_{k−1,i}; (e) calculating a node step size for each node k as μ_k(i)=αμ_k(i−1)+γe²(i); (f) calculating an estimate of an output vector y_{k,i} for each node k as y_{k,i}=y_{k−1,i}+μ_k(i)u_{k,i}^T(d_k(i)−u_{k,i}y_{k−1,i}); (g) if e_k(i) is greater than a selected error threshold, then setting i=i+1 and returning to step (c); otherwise, (h) defining a set of output vectors y_k for each node k, where y_k=y_{k,i}; and (i) storing the set of output vectors in computer readable memory.
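The steps above can be sketched as follows. This is an illustrative pure-Python version under assumed conditions: Gaussian regressors and noise, a hypothetical unknown vector, and the error-threshold test replaced by a fixed iteration count; α and γ match the simulation values given later in the description.

```python
import random

def vss_incremental_lms(w_true, N=10, alpha=0.997, gamma=2e-4,
                        iters=3000, seed=2):
    """Sketch of the variable step-size incremental LMS method.

    Nodes form a cycle; within iteration i the running estimate y visits
    nodes 1..N in turn, and each node k adapts it using its own
    error-driven step-size mu_k(i) = alpha*mu_k(i-1) + gamma*e_k(i)**2.
    """
    rng = random.Random(seed)
    M = len(w_true)
    y = [0.0] * M                 # shared estimate passed around the cycle
    mu = [0.0] * N                # per-node variable step-sizes
    for _ in range(iters):
        for k in range(N):
            u = [rng.gauss(0.0, 1.0) for _ in range(M)]   # node regressor
            d = sum(a * b for a, b in zip(u, w_true)) + rng.gauss(0.0, 0.01)
            e = d - sum(a * b for a, b in zip(u, y))      # node error, eq. (4)
            mu[k] = alpha * mu[k] + gamma * e * e         # step-size, eq. (5)
            y = [yi + mu[k] * e * ui for yi, ui in zip(y, u)]  # recursion (6)
    return y

y_hat = vss_incremental_lms([1.0, 0.5, -0.25])
```

The step-size grows while the error is large (fast adaptation) and shrinks as the estimate converges (low misadjustment), which is the behavior the fixed-step trade-off cannot provide.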
- In an alternative embodiment, a second, diffusion variable step-size least mean square method of estimation is used. The diffusion variable step-size least mean square method of estimation is based upon equation set (2), and the error is given by:
-
e_k(i) = d_k(i) − u_{k,i} y_{k,i−1}. (7) - Applying this error to equation (5) yields the following update equations:
-
f_{k,i} = y_{k,i−1} + μ_k(i) u_{k,i}^T (d_k(i) − u_{k,i} y_{k,i−1}),
y_{k,i} = Σ_{l∈N_k} c_{lk} f_{l,i}. (8)
- Thus, the variable step-size diffusion least mean square method is given by the following steps: (a) establishing an adaptive network having N nodes, where N is an integer greater than one, and for each node k, a number of neighbors of node k is given by N_k, including the node k, where k is an integer between one and N; (b) establishing an integer i and initially setting i=1; (c) calculating an output of the adaptive network at each node k as d_k(i)=u_{k,i}w^0+ν_k(i), where u_{k,i} represents a known regressor row vector of length M, w^0 represents an unknown column vector of length M, and ν_k(i) represents noise in the adaptive network, where M is an integer; (d) calculating an error value e_k(i) at each node k as e_k(i)=d_k(i)−u_{k,i}y_{k,i−1}; (e) calculating a node step size for each node k as μ_k(i)=αμ_k(i−1)+γe²(i); (f) calculating a local estimate f_{k,i} for each node k as f_{k,i}=y_{k,i−1}+μ_k u_{k,i}^T(d_k(i)−u_{k,i}y_{k,i−1}); (g) calculating an estimate of an output vector y_{k,i} for each node k as
-
y_{k,i} = Σ_{l∈N_k} c_{lk} f_{l,i},
- where l is an integer and c_{lk} represents a fixed combination weight for node k; (h) if e_k(i) is greater than a selected error threshold, then setting i=i+1 and returning to step (c); otherwise, (i) defining a set of output vectors y_k for each node k, where y_k=y_{k,i}; and (j) storing the set of output vectors in computer readable memory.
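These diffusion steps can likewise be sketched in pure Python. The ring topology, the uniform combination weights c_lk = 1/3, and the α, γ values here are assumptions chosen so this small toy example settles quickly (the description's simulation uses a different topology and smaller γ), and the error-threshold test is again replaced by a fixed iteration count:

```python
import random

def vss_diffusion_lms(w_true, N=8, alpha=0.995, gamma=5e-4,
                      iters=3000, seed=3):
    """Sketch of the variable step-size diffusion (ATC) LMS method.

    Ring topology: node k cooperates with itself and its two ring
    neighbors, using uniform combination weights c_lk = 1/3.
    """
    rng = random.Random(seed)
    M = len(w_true)
    y = [[0.0] * M for _ in range(N)]   # per-node estimates y_{k,i-1}
    mu = [0.0] * N                      # per-node variable step-sizes
    for _ in range(iters):
        f = []
        for k in range(N):              # adapt step: local estimates f_{k,i}
            u = [rng.gauss(0.0, 1.0) for _ in range(M)]
            d = sum(a * b for a, b in zip(u, w_true)) + rng.gauss(0.0, 0.01)
            e = d - sum(a * b for a, b in zip(u, y[k]))       # error, eq. (7)
            mu[k] = alpha * mu[k] + gamma * e * e             # step, eq. (5)
            f.append([yi + mu[k] * e * ui for yi, ui in zip(y[k], u)])
        for k in range(N):              # combine step: y_k = sum_l c_lk f_l
            nbrs = [(k - 1) % N, k, (k + 1) % N]
            y[k] = [sum(f[l][j] for l in nbrs) / 3.0 for j in range(M)]
    return y

y_net = vss_diffusion_lms([0.7, -0.2])
```

Each node first adapts with its own data (producing f_{k,i}) and then averages the local estimates of its neighborhood, following the Adapt-then-Combine order described above.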
- In order to perform a performance analysis of the variable step-size incremental least mean square method, the variance relation is given by:
-
E[‖y_k‖²_S] = E[‖y_{k−1}‖²_{S′}] + μ_k² σ²_{ν,k} Tr(L_k S), (9)
S′ = S − μ_k(L_k S + S L_k) + μ_k²(L_k Tr(S L_k) + 2 L_k S L_k), (10)
- where S is a Gaussian normalized weighting matrix, L_k is a diagonal matrix containing the eigenvalues for the regressor vector at node k, and σ²_{ν,k} is the noise variance. Equations (9) and (10) show how the variance of the Gaussian normalized error vector y_k iterates from node to node. When equation (5) is incorporated into the results, an independence assumption is invoked, resulting in the following:
-
E[μ_k(i) u_{k,i}^T e_k(i)] = E[μ_k(i)] E[u_{k,i}^T e_k(i)], (11)
- thus yielding the following new recursions, respectively, from equations (9) and (10):
-
E[‖y_{k,i}‖²_S] = E[‖y_{k−1,i}‖²_{S′}] + E[μ²_{k,i−1}] σ²_{ν,k} Tr(L_k S), (12)
S′ = S − E[μ_{k,i−1}](L_k S + S L_k) + E[μ²_{k,i−1}](L_k Tr(S L_k) + 2 L_k S L_k). (13)
- At steady state, the step-size expectation values in equations (12) and (13), μ̄_k = E[μ_{k,∞}] and μ̄²_k = E[μ²_{k,∞}], respectively, are given by:
-
μ̄_k = γ(M_k + σ²_{ν,k})/(1 − α), (14)
μ̄²_k = [2γα μ̄_k(σ²_{ν,k} + M_k) + 3γ²(σ²_{ν,k} + M_k)²]/(1 − α²), (15)
- where M_k is the steady-state misadjustment for the step-size, given by:
-
M_k = (1 − [1 − 2γ(3 − α)σ²_{ν,k} Tr(L_k)/(1 − α²)])/(1 + [1 − 2γ(3 − α)σ²_{ν,k} Tr(L_k)/(1 − α²)]). (16)
- The above can be directly incorporated into steady-state equations for calculation of the mean square deviation (MSD) and the excess mean square error (EMSE) in order to obtain the values of MSD and EMSE. Similarly, the variance relation for the variable step-size diffusion least mean square method is given by:
-
E[‖y_i‖²_s] = E[‖y_{i−1}‖²_{F s}] + b^T s, (17)
F = (G^T ⊗_b G*)[I_{N²M²} − (I_{NM} ⊗_b L D) − (L D ⊗_b I_{NM}) + (D ⊗_b D)A], (18)
- where
s = bvec{S}, and (19)
b = bvec{R_ν D² L}, (20)
- where R_ν is the noise auto-correlation matrix, bvec{·} is the block vectorization operator, D is the block diagonal step-size matrix for the whole network, G is the block combiner matrix, A is the block vectorized form of the fourth-order weighted moment of the regressor vectors, and ⊗_b is the block Kronecker product. Applying equation (11), equations (18) and (20) respectively yield:
-
F = (G^T ⊗_b G*)[I_{N²M²} − (I_{NM} ⊗_b L E[D]) − (L E[D] ⊗_b I_{NM}) + E[(D ⊗_b D)A]], (21)
b = bvec{R_ν E[D²] L}. (22)
- Since the step-size matrix is block-diagonal, the above operations are relatively straightforward. Steady-state matrices may be formed for the step-sizes using equations (14) and (15). Using these matrices directly in the steady-state equations provides the MSD and EMSE values for the variable step-size diffusion least mean square method.
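Equations (14) and (15) lend themselves to direct numerical evaluation. The sketch below computes the steady-state step-size moments for a single node, treating the misadjustment M_k and the node noise variance as given inputs; the parameter values are illustrative, not prescribed:

```python
def steady_state_step_moments(alpha, gamma, noise_var, misadj):
    """Evaluate equations (14) and (15): the steady-state mean and
    second moment of the variable step-size
    mu_k(i) = alpha*mu_k(i-1) + gamma*e_k(i)**2,
    given the node noise variance sigma_{v,k}^2 and the steady-state
    misadjustment M_k (so that E[e_k^2] = sigma_{v,k}^2 + M_k)."""
    p = noise_var + misadj                        # sigma_{v,k}^2 + M_k
    mu_bar = gamma * p / (1.0 - alpha)            # equation (14)
    mu_sq = (2.0 * gamma * alpha * mu_bar * p
             + 3.0 * gamma ** 2 * p ** 2) / (1.0 - alpha ** 2)  # eq. (15)
    return mu_bar, mu_sq

mu_bar, mu_sq = steady_state_step_moments(0.997, 2e-4, 1e-3, 0.0)
```

These two moments are exactly the quantities substituted for E[μ_{k,i−1}] and E[μ²_{k,i−1}] when forming the steady-state step-size matrices mentioned above.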
- Simulation examples are described below to illustrate the performance of the variable step-size LMS methods over adaptive networks. Results are compared with conventional fixed-step LMS methods. The network topology, with N=15 nodes, is shown in FIG. 3. For the variable step-size incremental LMS method, α is set to 0.997 and γ is set to 2×10−4. For the variable step-size diffusion LMS method, α is set to 0.998 and γ is set to 2×10−5. These exemplary values ensure that the convergence rate is the same for all methods. Two distinct scenarios are presented in the results below. In the first scenario, the signal-to-noise ratio (SNR) and the noise power profile are shown in FIGS. 4A and 4B, respectively. In the second scenario, the noise power for Node 5 is increased from 6×10−4 to 1×10−2, which reduces the SNR to approximately 18 dB. As a result, there is a visible deterioration in performance. -
FIG. 5A illustrates the results obtained for the first scenario. As can be seen in FIG. 5A, there is a great improvement in performance obtained through the usage of both the incremental and diffusion variable step-size LMS methods. There is an approximately 25 dB difference in favor of the variable step-size diffusion LMS method, compared to the conventional fixed step-size method. In the second scenario, the performance of the variable step-size diffusion LMS method deteriorates by only approximately 3 dB, whereas that of the variable step-size incremental LMS method deteriorates by nearly 9 dB, as shown in FIG. 5B. -
FIGS. 6A and 6B illustrate the steady-state MSD values at each node for both the incremental and diffusion methods, with the variable step-size diffusion LMS method being shown to be superior to both the incremental method and the fixed step-size conventional LMS method. -
FIG. 1 illustrates a generalized system 10 for implementing the variable step-size least mean square method for estimation in adaptive networks, although it should be understood that the generalized system 10 may represent a stand-alone computer, computer terminal, portable computing device, networked computer or computer terminal, or networked portable device. Data may be entered into the system 10 by the user via any suitable type of user interface 18, and may be stored in computer readable memory 14, which may be any suitable type of computer readable and programmable memory. Calculations are performed by the processor 12, which may be any suitable type of computer processor, and may be displayed to the user on the display 16, which may be any suitable type of computer display. The system 10 preferably includes a network interface 20, such as a modem or the like, allowing the computer to be networked with either a local area network or a wide area network. - The processor 12 may be associated with, or incorporated into, any suitable type of computing device, for example, a personal computer or a programmable logic controller. The display 16, the processor 12, the memory 14, the user interface 18, the network interface 20, and any associated computer readable media are in communication with one another by any suitable type of data bus, as is well known in the art. Additionally, other standard components, such as a printer or the like, may interface with system 10 via any suitable type of interface. - Examples of computer readable media include a magnetic recording apparatus, an optical disk, a magneto-optical disk, and/or a semiconductor memory (for example, RAM, ROM, etc.). Examples of magnetic recording apparatus that may be used in addition to memory 14, or in place of memory 14, include a hard disk device (HDD), a flexible disk (FD), and a magnetic tape (MT). Examples of the optical disk include a DVD (Digital Versatile Disc), a DVD-RAM, a CD-ROM (Compact Disc-Read Only Memory), and a CD-R (Recordable)/RW. - It is to be understood that the present invention is not limited to the embodiments described above, but encompasses any and all embodiments within the scope of the following claims.
Claims (3)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/913,482 US8547854B2 (en) | 2010-10-27 | 2010-10-27 | Variable step-size least mean square method for estimation in adaptive networks |
US13/286,172 US8903685B2 (en) | 2010-10-27 | 2011-10-31 | Variable step-size least mean square method for estimation in adaptive networks |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/913,482 US8547854B2 (en) | 2010-10-27 | 2010-10-27 | Variable step-size least mean square method for estimation in adaptive networks |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/286,172 Continuation-In-Part US8903685B2 (en) | 2010-10-27 | 2011-10-31 | Variable step-size least mean square method for estimation in adaptive networks |
Publications (2)
Publication Number | Publication Date |
---|---|
US20120106357A1 (en) | 2012-05-03 |
US8547854B2 (en) | 2013-10-01 |
Family
ID=45996676
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/913,482 Expired - Fee Related US8547854B2 (en) | 2010-10-27 | 2010-10-27 | Variable step-size least mean square method for estimation in adaptive networks |
Country Status (1)
Country | Link |
---|---|
US (1) | US8547854B2 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150074161A1 (en) * | 2013-09-09 | 2015-03-12 | King Fahd University Of Petroleum And Minerals | Least mean square method for estimation in sparse adaptive networks |
US10310518B2 (en) * | 2015-09-09 | 2019-06-04 | Apium Inc. | Swarm autopilot |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120109600A1 (en) * | 2010-10-27 | 2012-05-03 | King Fahd University Of Petroleum And Minerals | Variable step-size least mean square method for estimation in adaptive networks |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6026121A (en) | 1997-07-25 | 2000-02-15 | At&T Corp | Adaptive per-survivor processor |
US6377611B1 (en) | 1999-02-01 | 2002-04-23 | Industrial Technology Research Institute | Apparatus and method for a DSSS/CDMA receiver |
US20020041678A1 (en) | 2000-08-18 | 2002-04-11 | Filiz Basburg-Ertem | Method and apparatus for integrated echo cancellation and noise reduction for fixed subscriber terminals |
ATE397300T1 (en) | 2000-08-25 | 2008-06-15 | Sanyo Electric Co | ADAPTIVE GROUP ARRANGEMENT, ADAPTIVE GROUP METHOD AND PROGRAM |
US6741707B2 (en) | 2001-06-22 | 2004-05-25 | Trustees Of Dartmouth College | Method for tuning an adaptive leaky LMS filter |
KR100511879B1 (en) | 2003-06-26 | 2005-09-02 | 연세대학교 산학협력단 | Method and Device for Channel Equalization |
US20050201457A1 (en) | 2004-03-10 | 2005-09-15 | Allred Daniel J. | Distributed arithmetic adaptive filter and method |
US7986931B2 (en) | 2006-12-12 | 2011-07-26 | Industrial Technology Research Institute | RFID reader and circuit and method for echo cancellation thereof |
US8014517B2 (en) | 2007-04-18 | 2011-09-06 | Gas Technology Institute | Method and apparatus for enhanced convergence of the normalized LMS algorithm |
CN101252559A (en) | 2007-10-12 | 2008-08-27 | 电子科技大学中山学院 | Training sequence time-varying step length least mean square method |
2010-10-27 | US12/913,482, patent US8547854B2 (en), not active, Expired - Fee Related
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120135691A1 (en) * | 2010-11-29 | 2012-05-31 | King Fahd University Of Petroleum And Minerals | Noise-constrained diffusion least mean square method for estimation in adaptive networks |
US8462892B2 (en) * | 2010-11-29 | 2013-06-11 | King Fahd University Of Petroleum And Minerals | Noise-constrained diffusion least mean square method for estimation in adaptive networks |
US9837991B2 (en) | 2013-04-10 | 2017-12-05 | King Fahd University Of Petroleum And Minerals | Adaptive filter for system identification |
US20160249158A1 (en) * | 2015-02-19 | 2016-08-25 | Xerox Corporation | System and method for flexibly pairing devices using adaptive variable thresholding |
US9654904B2 (en) * | 2015-02-19 | 2017-05-16 | Xerox Corporation | System and method for flexibly pairing devices using adaptive variable thresholding |
CN105871762A (en) * | 2016-05-23 | 2016-08-17 | 苏州大学 | Adaptive network used for estimation of sparse parameter vector |
CN105871762B (en) * | 2016-05-23 | 2018-10-12 | 苏州大学 | A kind of adaptive network for the estimation of Sparse parameter vector |
US11450335B2 (en) * | 2018-03-09 | 2022-09-20 | Datang Mobile Communications Equipment Co., Ltd. | Method and device for updating coefficient vector of finite impulse response filter |
CN112039498A (en) * | 2020-08-27 | 2020-12-04 | 重庆邮电大学 | Adaptive Signal Processing Method and Medium Based on Polymorphic Variable Step Least Mean Square |
Also Published As
Publication number | Publication date |
---|---|
US8547854B2 (en) | 2013-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8903685B2 (en) | Variable step-size least mean square method for estimation in adaptive networks | |
US8547854B2 (en) | Variable step-size least mean square method for estimation in adaptive networks | |
US8462892B2 (en) | Noise-constrained diffusion least mean square method for estimation in adaptive networks | |
US8244787B2 (en) | Optimum nonlinear correntropy filter | |
Bin Saeed et al. | A variable step-size strategy for distributed estimation over adaptive networks | |
US20130246006A1 (en) | Method for kalman filter state estimation in bilinear systems | |
CN113726301A (en) | Method and equipment for regulating and controlling dynamic gain of optical fiber Raman amplifier | |
US9622265B2 (en) | System and method for a CSMA-CA half window scheme | |
Roonizi | A new approach to ARMAX signals smoothing: Application to variable-Q ARMA filter design | |
US20130110478A1 (en) | Apparatus and method for blind block recursive estimation in adaptive networks | |
CN112153564B (en) | Efficient multi-hop positioning method based on combination of centralized and distributed computing | |
CN105871762B (en) | A kind of adaptive network for the estimation of Sparse parameter vector | |
Abdolee et al. | Tracking performance and optimal adaptation step-sizes of diffusion-LMS networks | |
CN114449584B (en) | Distributed computing unloading method and device based on deep reinforcement learning | |
Aduroja et al. | Distributed principal components analysis in sensor networks | |
US20150074161A1 (en) | Least mean square method for estimation in sparse adaptive networks | |
CN114938232A (en) | LSTM-based simultaneous co-frequency full-duplex digital domain self-interference suppression method | |
US7424546B1 (en) | Rate-based proportional-integral control scheme for active queue management | |
Dias et al. | Cooperative particle filtering for emitter tracking with unknown noise variance | |
JP2009111988A (en) | Transceiver apparatus for communicating with communication partner in repetitive radio frames | |
CN106950537A (en) | A kind of Distributed localization method based on UWB | |
Wang et al. | Knowledge distillation based cooperative reinforcement learning for connectivity preservation in uav networks | |
JP2000242624A (en) | Signal separation device | |
CN118367967A (en) | Beam forming method, device, equipment and computer readable storage medium | |
US11270720B2 (en) | Background noise estimation and voice activity detection system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: KING FAHD UNIVERSITY OF PETROLEUM AND MINERALS, SA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: ZUMMO, SALAM A., DR.; BIN SAEED, MUHAMMAD OMER, MR.; ZERGUINE, AZZEDINE, DR.; REEL/FRAME: 025206/0021. Effective date: 20101016 |
| STCF | Information on status: patent grant | PATENTED CASE |
| FPAY | Fee payment | Year of fee payment: 4 |
| FEPP | Fee payment procedure | MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| LAPS | Lapse for failure to pay maintenance fees | PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| STCH | Information on status: patent discontinuation | PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
20211001 | FP | Lapsed due to failure to pay maintenance fee | Effective date: 20211001 |