With the exception of [ 26 ], none of the approaches described above is decentralized. However, as we have seen, the authors of [ 26 ] include several events for regions that are not directly relevant to surface networks. Further, most of the approaches above rely on geometric information about the surface, and so ultimately depend on access to coordinate information about the location of critical points and their associated regions. This paper does, however, substantially revise and extend our previous work in [ 7 ].
Our previous work defines and evaluates a decentralized and coordinate-free algorithm to identify critical points and surface networks in a static field. Based on the extended definitions of discrete surface networks, this paper not only defines basic spatial events occurring on surface networks, but also provides a decentralized algorithm to detect these events in a dynamic field. Based on our review of the existing literature relevant to events on surface networks, we now proceed with the design of an algorithm capable of detecting our four primitive surface network events. The algorithm is amenable to decentralized computation and capable of operating within the constraints of the limited spatial granularity of a sensor network.
The formal model of a geosensor network in this paper follows the approach of [ 27 ]. Each node has a unique identity, modeled as a function id; the set of a node's one-hop neighbors is given by a function nbr; and each node's ability to sense its changing environment is modeled by a sense function. Note that although we allow time-varying sensed data, we assume that the structure of the communication graph and the locations of the nodes are static. Using this foundational model of a geosensor network, the algorithm definitions in subsequent sections follow the decentralized algorithm design and specification style of [ 27 , 28 ].
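As a concrete illustration, the three functions above can be expressed as a minimal node model. This is a sketch under our own naming assumptions, not the notation of [ 27 ]:

```python
# Minimal sketch of the geosensor network model: each node has a
# unique identity (id), a set of one-hop neighbors (nbr) and a
# time-varying sensed value (sense). Names are illustrative only.

class Node:
    def __init__(self, node_id, sensed_value):
        self.id = node_id          # id(v): unique identity
        self.value = sensed_value  # sense(v): current sensed value
        self.nbrs = set()          # nbr(v): one-hop neighbors

def link(a, b):
    """Create a static, bidirectional communication link."""
    a.nbrs.add(b)
    b.nbrs.add(a)

# A tiny static network: a -- b -- c
a, b, c = Node("a", 3.0), Node("b", 5.0), Node("c", 4.0)
link(a, b)
link(b, c)
```

Because the communication graph is static, links are built once at deployment; only the sensed values change over time.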
In brief, there are four key components of decentralized algorithms: restrictions, system events, actions and system states. Restrictions concern the assumptions made about the environment in which an algorithm will operate. For example, our algorithm places no restrictions on the structure of the communication network, and we do not require spatial information, such as coordinates. However, we do assume that the sensor network is static and that communication is reliable (see Algorithm 1, Line 1). System events define the external stimuli that nodes can respond to, such as receiving a message from another node or sensing a change to a monitored environmental variable (see Algorithm 1, Lines 4 and 7).
When a system event occurs, a node reacts by initiating an atomic, terminating sequence of operations, called an action (see Algorithm 1, Line 6). System states allow nodes to respond to the same events with different actions, based on the effects of previous system events and actions (see Algorithm 1, Lines 3 and 6). This section explains the definitions of critical points for the finite spatial granularity of the discrete point data generated by a geosensor network.
These definitions will be used in Section 4 to explain the algorithm. The ascent (descent) vector of a node is defined as the unique directed edge from that node to the one-hop neighbor with the highest (lowest) sensed value of all its neighbors. For example, the weak peak in Figure 1 has communication links with four neighboring nodes.
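A minimal sketch of this definition, assuming sensed values and neighbor lists are available locally as plain dictionaries (the function names are ours):

```python
# Ascent (descent) vector: the directed edge from a node to its
# one-hop neighbor with the highest (lowest) sensed value. A node
# with no higher-valued neighbor is a peak.

def ascent_vector(values, nbrs, v):
    """Return the neighbor that the ascent vector of v points to,
    or None if v is a peak."""
    best = max(nbrs[v], key=lambda u: values[u])
    return best if values[best] > values[v] else None

def descent_vector(values, nbrs, v):
    """Symmetric definition: None means v is a pit."""
    best = min(nbrs[v], key=lambda u: values[u])
    return best if values[best] < values[v] else None

values = {"a": 3.0, "b": 5.0, "c": 4.0}
nbrs = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}

print(ascent_vector(values, nbrs, "a"))  # b
print(ascent_vector(values, nbrs, "b"))  # None: b is a peak
```

Note that each node can evaluate this using only one-hop information, which is what makes the definition amenable to decentralized computation.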
Figure 1. Identification of a strong and a weak peak, ascent vectors and an ascent bridge. Contour lines describe the scalar field, showing the difference in elevation between consecutive contour lines.
The sensed values can be estimated using a contour map. In addition, a weak peak can be connected to a strong peak via an ascent bridge: an edge from a weak peak v to a neighboring node whose ascent vector points away from v (or, symmetrically, for a weak pit, a strong pit and a descent bridge).
These basic structures are illustrated in Figure 1. Using this information, it is then possible to design decentralized algorithms to identify, for each node, the strong peak and pit associated with that node (i.e., its catchment area). Examples of pass-edges are shown in Figure 2; the two thick black lines are pass-edges whose endpoints are associated with different peaks and pits.
Figure 2. Representative pass. Contour lines describe the scalar field, showing the difference in elevation between consecutive contour lines.
Multiple pass-edges are grouped together as a pass-cluster.
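The pass-edge test can be sketched as follows, assuming each node stores the (peak, pit) identifier pair assigned during initialization; the dictionary layout is our assumption:

```python
# A pass-edge is an edge whose two endpoints are associated with
# different (peak, pit) identifier pairs, i.e., the edge crosses
# the boundary between two regions of the surface network.

def is_pass_edge(region, u, v):
    """region maps a node to its (peak_id, pit_id) pair."""
    return region[u] != region[v]

region = {"a": ("p1", "q1"), "b": ("p1", "q1"), "c": ("p2", "q1")}
print(is_pass_edge(region, "a", "b"))  # False: same region
print(is_pass_edge(region, "b", "c"))  # True: different peaks
```

Again, the test is purely local: the two endpoint nodes can decide between themselves whether their shared edge is a pass-edge.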
Moving to a dynamic scenario, however, it becomes highly inefficient to continually monitor events occurring on a whole group of pass-edges. Therefore, in this paper, we add a further definition: the representative pass of a pass-cluster. For example, let R be the set of pass-edges that connect two specified pairs of peaks and pits in the surface network; a single representative pass is selected from R. By focusing on a representative pass, rather than a potentially large set of pass-edges, it becomes easier and more efficient to monitor events occurring on passes.
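The selection rule for the representative pass is not spelled out above, so the sketch below uses an assumed, deterministic stand-in (lexicographically smallest edge); the paper's actual criterion may differ, but any rule on which all nodes in the cluster agree would serve the same purpose:

```python
# Sketch: selecting one representative pass from a pass-cluster R,
# the set of pass-edges linking the same pair of peaks and pits.
# The tie-break rule here is an illustrative assumption.

def representative_pass(pass_cluster):
    """Return one canonical edge from a set of pass-edges (u, v)."""
    return min(tuple(sorted(e)) for e in pass_cluster)

cluster = {("g", "f"), ("d", "e"), ("e", "c")}
print(representative_pass(cluster))  # ('c', 'e')
```

The important property is determinism: every node that knows the cluster membership computes the same representative, without any central coordination.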
It is possible that two representative passes occur as one-hop neighbors, akin to a monkey saddle in a continuous surface. In practice, such monkey saddles do occur in our sensor networks, but due to network granularity effects, rather than being a true reflection of the topography of the underlying surface. In other words, monkey saddles typically occur as a result of adverse network connectivity leading to certain spatially nearby nodes not being one-hop network neighbors.
Thus, in our algorithms, we also include procedures for coordination amongst neighboring representative passes to account for such granularity effects. In addition to monkey saddles, it is important to note that degenerate critical points can occur when neighboring nodes sense identical values. Such a plateau is mainly generated by discrete quantization when extracting surface networks. The authors of [ 4 , 29 ] deal with degenerate critical points using perturbation. However, real data from geosensor networks are unlikely to contain identical sensed values.
This paper assumes that there are no plateaus between one-hop neighbor nodes. As argued in Section 2, there are four primitive events occurring on surface networks. Previous approaches to monitoring such events rely on centralized computation and coordinate information. In keeping with the resource constraints imposed by sensor networks, in this paper we develop a decentralized algorithm that can monitor surface events without coordinate information. For ease of explanation, we present first the monitoring of events on peaks and pits and then the monitoring of events on passes.
This section examines the design of a decentralized algorithm for monitoring events occurring on peaks and pits. The following subsection addresses the problem of monitoring events on passes. The network is initialized by decentrally identifying strong peaks and pits. In brief, each node broadcasts its sensed value.
Nodes can then locally determine their ascent and descent vectors, and whether they are a peak or a pit. Flooding a single initialization message from each identified peak and pit then enables every node in the network to be informed of its unique strong peak and pit, to distinguish weak from strong peaks, and to build gradient ascent and descent bridges. The result of initialization is to partition the nodes into regions.
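A synchronous, global sketch of this initialization is given below for intuition; a real deployment performs the same propagation by asynchronous message passing, and all names are our own:

```python
# Initialization sketch: each node is assigned the identifier of
# the strong peak its gradient (chain of ascent vectors) leads to,
# mimicking the flooding of a message from each identified peak.

def init_peak_ids(values, nbrs):
    peak_id = {}
    for v in values:                       # local peak test
        if all(values[u] < values[v] for u in nbrs[v]):
            peak_id[v] = v
    while len(peak_id) < len(values):      # propagate downhill
        for v in values:
            if v not in peak_id:
                up = max(nbrs[v], key=lambda u: values[u])
                if up in peak_id:
                    peak_id[v] = peak_id[up]
    return peak_id

# Line network a--b--c--d--e with two peaks, b and d
values = {"a": 1, "b": 4, "c": 2, "d": 5, "e": 3}
nbrs = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
        "d": ["c", "e"], "e": ["d"]}
result = init_peak_ids(values, nbrs)
print(result["a"], result["e"])  # b d
```

In the example, nodes a and b fall in the catchment of peak b, while c, d and e fall in the catchment of peak d, giving exactly the partition into regions described above.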
Nodes in each region, termed a catchment area, are associated with a unique pair of peak and pit identifiers (figure adapted from [ 30 ]). As the dynamic field evolves, our algorithm operates by inspecting these catchment areas for changes that indicate events occurring on the surface network.
If one catchment area divides into two between consecutive time steps, this indicates that a new peak has appeared on the surface network. Conversely, if two catchment areas merge into one between consecutive time steps, this indicates that one of the peaks has disappeared from the surface network. Based on catchment areas, it is now possible to specify a decentralized spatial algorithm to monitor all of the events occurring on peaks and pits.
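For intuition, the split/merge test can be expressed globally as a comparison of catchment partitions between time steps; the decentralized algorithm infers the same thing from local information, and the data layout here is an assumed sketch:

```python
# Comparing catchment partitions across consecutive time steps:
# more catchments than before implies a peak appeared; fewer
# implies a peak disappeared.

def catchments(peak_id):
    """Group nodes by their strong-peak identifier."""
    groups = {}
    for node, pk in peak_id.items():
        groups.setdefault(pk, set()).add(node)
    return groups

before = {"a": "b", "b": "b", "c": "b", "d": "b"}   # one catchment
after  = {"a": "a", "b": "a", "c": "d", "d": "d"}   # split in two

delta = len(catchments(after)) - len(catchments(before))
print("peak appeared" if delta > 0 else
      "peak disappeared" if delta < 0 else "no appearance event")
```

Running the example prints "peak appeared", since the single catchment at the earlier step splits into two.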
For ease of explanation, this part of the algorithm is split into four components (Algorithms 1-4), based on the types of events. In the sequel, we discuss only the case for peaks; events for pits are identified in a symmetric fashion. Algorithm 1 responds to changes in the dynamic field: each node locally monitors any changes in its sensed value.
When a change is detected, a node broadcasts an update message, upd8, to its neighbors (Algorithm 1, Line 6). If a node needs to update its peak identifier, pkid, following a change in its ascent vector, it must then initiate a cascade of notifications about this change to its neighbors (Algorithm 1, Lines 12 onward). Detecting changes in gradient vectors in this way provides the basis for all higher-level monitoring of events occurring on critical points (Algorithm 1, Lines 19 onward). A node that transitions from state peak (a strong peak) to state idle (a non-peak) indicates that a peak has moved.
Such transitions are detected in Algorithm 2. It is similarly straightforward to detect peak disappearance (Algorithm 3). If a wipk message reaches a node that has a different peak identifier, that node can infer that the peak represented by the node that initiated the wipk message has disappeared. Interestingly, when a node that was previously a peak receives a rwpk message, it may already have changed its peak identifier; in this case, the message simply confirms that a peak disappeared.
Lastly, Algorithm 4 presents a mechanism to monitor the appearance of peaks. As is common in decentralized algorithm design, we make no assumptions in our algorithm about network synchronization such as message ordering or bounded communication delays.
Combined with the lack of centralized control inherent in decentralized algorithms, this lack of coordination makes it more challenging to monitor a peak appearance than events such as peak movement or disappearance. When monitoring peak movement or disappearance, the node that was previously a peak can assist in the event detection (e.g., by initiating wipk messages). However, there are no such triggers for inferring peak appearance. Each node could locally deduce whether it is a peak by comparing its sensed value with those of its neighbors.
The lack of synchronization frequently leads to a node incorrectly inferring that it is a peak, based only on partial information about its neighbors. Furthermore, without assuming bounded communication delays, there are no guarantees as to how long each node must wait for update messages from its neighbors. Therefore, a different approach is taken to infer peak appearance in Algorithm 4.
Based on Algorithms 1-4, Figure 4 summarizes the mechanisms for initializing and monitoring peak movement and appearance events, with an associated pass appearance, over three consecutive time steps. Figure 4a shows the initial identification of a peak in the field: the ascent vectors of all of the nodes flow into the single peak. Next, as the scalar field evolves at t1, a different node becomes a peak.
Its peak identifier, however, remains unchanged (see Algorithm 2). A dramatic change of the scalar field leads to the appearance of a peak in Figure 4c: the previous catchment area at t1 is divided into two catchment areas at t2, and the ascent vectors of the nodes partition the network into two groups (see Algorithm 4). The appearance of a new peak also entails the appearance of a new pass, shown in Figure 4d; passes cannot appear independently of peaks and pits.
This fact forms the basis of our pass monitoring mechanism, explained in the following section. Peak (pit) appearance and disappearance events lead to the detection of pass appearance and disappearance events. As illustrated in Figure 4, the appearance and disappearance of a pass is entirely dependent on the appearance and disappearance of peaks or pits. When a peak disappears (Algorithm 3, Line 16) or appears (Algorithm 4, Line 39), the affected node will update its peak identifier and inform its neighbors of the change.
Each node can also update its pit identifier using a similar pattern for pit events. By broadcasting those events using, for example, swpk messages, nodes can recognize whether they are involved in pass-edges. For example, assume one of two peaks connected by a pass disappears between consecutive time steps.
If there is a representative pass between the two peaks, that node is no longer a pass, because one of its associated peaks has disappeared; a representative pass cannot preserve the pass-edge property (i.e., connecting distinct peak and pit pairs) once an associated peak has gone. Thus, it is possible to infer a pass disappearance event after receiving a swpk message. Passes can also move and switch: for example, in Figure 5, a pass moves and switches between two consecutive time steps, even though no events occur on the associated peaks and pits.
Algorithm 5 highlights the main features of pass switch and movement monitoring.
When a pass-edge node receives notification of a sensed-value or peak-identifier change from a neighbor Algorithm 1, Line 19 , this system event triggers a refresh of the pass-cluster and representative pass. If there is a change of members in a pass-cluster in addition to a change in the representative pass, the pass-edge node broadcasts a uppc message to its neighbors.
This message reconciles the pass-cluster and representative pass of neighbors that are all associated with the same peaks and pits.
For example, when a representative pass node becomes a regular node, it sends a wirp message to monitor events occurring on new passes (Algorithm 5, Line 7). If a new representative node receives a wirp message, it can infer what event occurred on its associated pass, such as a movement or a switch (Algorithm 5). If the associated peaks and pits are different, a switch event has occurred.
Conversely, if the associated peaks and pits are the same, a pass movement event is confirmed. During ongoing monitoring, sensed-value changes at a node trigger an upd8 message to its neighbors. This message may in turn trigger a finite number of further messages for monitoring events occurring on the surface network (i.e., the messages of Algorithms 2-5). However, as surface events are expected to be relatively rare in comparison to changes in the state of the field, such messages are expected to have a much smaller effect on scalability. In other words, the worst case is that all surface events occur simultaneously.
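The movement-versus-switch test used when a new representative pass receives a wirp message can be sketched as follows; the data layout is an assumption for illustration:

```python
# When a new representative pass learns of the old one (wirp), it
# compares the peaks and pits associated with the old and new pass.
# Same association: the pass moved; different association: a switch.

def classify_pass_event(old_assoc, new_assoc):
    """assoc: frozenset of the peak and pit identifiers a pass
    connects, e.g. frozenset({'p1', 'p2', 'q1', 'q2'})."""
    return "movement" if old_assoc == new_assoc else "switch"

old = frozenset({"p1", "p2", "q1", "q2"})
print(classify_pass_event(old, frozenset({"p1", "p2", "q1", "q2"})))  # movement
print(classify_pass_event(old, frozenset({"p1", "p3", "q1", "q2"})))  # switch
```

Using an unordered set for the association means the comparison does not depend on the order in which a node learned of its peaks and pits, which matters under asynchronous messaging.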
In reality, this rarely happens, and messages for monitoring events are therefore not necessary at every time step. The exact number of event-monitoring messages, k, will depend strongly on the specific types of changes occurring, although given the expected sparsity of events, k should be much smaller than |V|. As a result, the overall communication complexity of the algorithm is expected to be linear in the number of nodes, O(n).
However, due to the dependence on the events that occur, this expectation must be tested experimentally, as in the following section. The algorithm described in the previous section was evaluated with respect to four key features: overall scalability, latency, load balancing and accuracy. The algorithm for monitoring spatial events occurring on surface networks was implemented in the agent-based simulation system NetLogo [ 32 ].
A randomized scalar field was generated and evolved continuously in the NetLogo system. The randomized field was constructed from kernel density smoothing applied to randomly-moving particles. By varying the kernel density smoothing parameters, the approach allowed the generation of evolving randomized surfaces across a range of surface roughness levels.
For ease of comparison, surface roughness was classified at four levels: Level 1 surfaces had, on average, 6 critical points; Level 2, on average, 8 critical points; Level 3, on average, 14 critical points; and Level 4, on average, 26 critical points. There are no special thresholds differentiating the roughness levels; the classification is based on the computational complexity. Each generated surface was allowed to evolve for ten simulation time steps, inclusive of the initial step. Geosensor networks were also simulated at five sizes, ranging up to 16,000 nodes.
The network was connected as a unit disk graph (UDG), with node locations randomly distributed; the level of network connectivity (i.e., the average number of one-hop neighbors per node) was determined by the unit disk radius.
The performance of the algorithm was documented for each simulation scenario. The efficiency of the algorithm was evaluated with regard to overall scalability: as the network size increased across the five different sizes, the number of messages sent was measured. A strong linear relationship between network size and the number of messages sent was observed, as shown in Figure 6.
A linear regression over the different surface roughness levels showed a close fit, in accordance with our expectation of overall O(n) scalability (see Section 4).
Figure 6. Overall scalability for monitoring events, averaged over 10 randomized networks and 10 consecutive time steps in each evolution.
Next, we explored the variability in the number of messages generated by the algorithm during ongoing changes. At each evolution time step, the number of messages required to monitor any events that occurred was recorded. Figure 7 presents the number of messages generated by spatial events at each evolution time step.
Figure 7. Number of messages generated by spatial events at each evolution step.
A mixed-factorial ANOVA was used to evaluate the significance of any differences in messages generated due to surface roughness level, evolution time step, and the interaction of surface roughness level and evolution time step.
Accordingly, there were three null hypotheses: H1, there is no difference between surface roughness levels; H2, there is no difference between evolution time steps; and H3, there is no interaction of surface levels and evolution time steps considered together. The effect sizes for surface level ranged from medium to large across the network sizes tested; thus, the effect between surface levels is meaningful, both statistically and practically. There was, however, no significant difference between evolution time steps (no evidence to reject H2) and no significant interaction between surface levels and evolution time steps (no evidence to reject H3). Other network sizes exhibited the same trends.
The operational latency of our algorithm is the time delay between an event occurring and that event being detected. Even an efficient algorithm may suffer from long latencies.
In terms of key future work, it would be interesting to investigate how the algorithm behaves in the presence of noise. The accuracy of the identification of critical points is high, comparable to that of centralized algorithms. Although the specific terms used to name these six events vary across papers, others have similarly arrived at the same six events, with applications in disciplines such as meteorology [ 21 , 22 ] and tracking the evolution of social groups [ 23 , 24 ].
Figure 8 presents the results of an experiment measuring the latency of the algorithm. Latency was measured during each evolution time step over the five different network sizes and four different surface roughness levels, with ten randomized replications at each surface level. There is broadly a trend of increasing latency with both network size and surface roughness, but the trend is weak: a simple power regression reveals only a moderate correlation between latency and network size.
Further, non-parametric tests confirmed the initial inference that latency is not strongly associated with network size. In fact, latency is more closely related to the types of events resulting from network asynchronicity. As this paper makes no assumptions about message ordering or communication delays, messages are only assumed to be reliably delivered in some finite amount of time. Such minimal assumptions help to increase the robustness of decentralized algorithms. However, when decentralized spatial algorithms monitor events in a dynamic field, asynchrony makes it difficult to coordinate nodes in terms of data consistency and efficiency.
For example, Figure 9 illustrates one such problem of network asynchronicity. Node a is a peak at time t1; at time t2, this peak moves to node c. In order to monitor this peak movement, node a should send a wipk message via its ascent vector. Unfortunately, due to asynchronous communication, a peak movement event can be misidentified as a peak disappearance together with a new peak appearance.
For example, assume that at time t2 the ascent vector of node a points to d as soon as information is received from node d. In that case, a may send the wipk message to node d before receiving all of the necessary information from its neighbors to correctly update its ascent vector to point to e. Node a will then detect a peak disappearance event, and node c will detect a new peak appearance. These misidentified events cause the replacement of peak identifiers and can result in longer latency. Load balance is a vital factor in network longevity.
Resource-constrained geosensor networks are vulnerable to uneven load balance, which can cause holes in network coverage. In this study, each generated surface evolved for ten simulation time steps, and uneven load balance is expected to be more likely on rougher surfaces. Figure 10 presents the load balance results. While a considerable number of nodes sent fewer than 30 messages, a few nodes transmitted substantially more in the worst case.
One-way ANOVA was used to analyze the difference between the four surface levels in the proportion of nodes with a load of fewer than 30 messages. No significant differences were found among Level 1, 2 and 3 surfaces. Thus, there is evidence that only the roughest surfaces lead to significantly different load balances; the algorithm is relatively tolerant to moderate changes in surface roughness, which lead to no significant change in load balance.
The same results were obtained for all of the different network sizes.
Figure 10. Load balance for communication messages, averaged over 10 networks and 10 consecutive evolution time steps.
An efficient algorithm is only useful if it can accurately identify the events occurring. The accuracy of the algorithm was measured using standard information retrieval measures. To provide a comparison in assessing accuracy, two standard centralized algorithms [ 29 , 31 ] were combined to generate the ground truth for each simulated surface.
Each algorithm has its advantages and disadvantages. The algorithm of [ 29 ] can identify critical points using a simple local comparison of neighbors. It is, however, well known that this local approach is sensitive to minor, small-scale variations in the surface (see [ 31 ]), and such minor variations can produce spurious critical points. In order to minimize spurious critical points, [ 31 ] models the surface using a quadratic equation.