Algorithm to calculate the most energy efficient ad-hoc network

I have a (theoretical) network with N nodes, each with their own fixed location. Each node sends one message per cycle, which needs to reach the root either directly or via other nodes.

The energy cost of sending a message from node A to node B is the distance between them, squared.

The challenge is how to link these nodes, in a tree format, to produce the most energy efficient network.

E.g. Here are 2 possible ways to link these nodes, the left one being more energy efficient.

I'm working on a Genetic algorithm to solve the problem, but I was wondering if anyone had any other ideas, or is aware of any relevant open-source code.

Edit: Another aspect of the network, which I forgot to mention, is that each node is battery powered. So if we have too many messages being routed via the same node, then that node's battery will become depleted, causing the network to fail. The network's energy efficiency is measured by how many messages can be successfully transmitted from each node to the root before any node runs out of battery.

Edit #2: I'm sorry about the omission in the original text of the question. Because of it, some of your earlier answers aren't quite what I'm looking for, but I wasn't familiar with the MST algorithms, so thanks for telling me about them.

In the hope of making things clearer let me add this:

All nodes send one message of their own per cycle, including inner nodes. Inner nodes are also responsible for relaying any messages that they receive. This adds to the strain on their battery, as if they were sending an additional message of their own. The aim is to maximise the number of cycles achieved before any node's battery dies.
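To make the objective concrete, here's roughly how I'm evaluating a candidate tree in my genetic algorithm experiments (a rough Python sketch; the coordinates and battery value are just placeholders, and parent maps each node to the node it transmits to):

    # Placeholder data: node 0 is the root; every node starts with the same
    # battery budget, in the same arbitrary units as squared distance.
    positions = [(0, 0), (0, 1), (1, 2), (2, 2), (3, 1)]
    battery = 100.0
    root = 0

    def cost(i, j):
        # Energy to send one message from node i to node j: squared distance.
        (xi, yi), (xj, yj) = positions[i], positions[j]
        return (xi - xj) ** 2 + (yi - yj) ** 2

    def cycles_until_failure(parent):
        # parent[i] is the node that i transmits to; parent[root] is None.
        # Count how many messages each node transmits per cycle: one of its
        # own plus one for every message routed through it.
        sends = [0] * len(positions)
        for i in range(len(positions)):
            if i == root:
                continue
            node = i
            while node != root:        # i's message is relayed all the way up
                sends[node] += 1
                node = parent[node]
        # Per-cycle drain of a node = messages sent * cost of its uplink.
        drain = [sends[i] * cost(i, parent[i])
                 for i in range(len(positions)) if i != root]
        return battery / max(drain)    # (possibly fractional) cycles before the worst node dies

    # Example: the star network where everyone transmits directly to the root.
    print(cycles_until_failure([None, 0, 0, 0, 0]))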


I would think that you can construct the complete graph and then use Dijkstra's shortest-path algorithm on each node to find the least-cost route from that node to the root. The union of these paths should form the minimum-cost tree.

Note that with Dijkstra's algorithm it is also possible to calculate the entire tree in one pass.
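For example, a one-pass sketch with a plain heapq-based Dijkstra run from the root (the coordinates are placeholders, and the edge cost is the squared distance from the question):

    import heapq

    positions = [(0, 0), (0, 1), (1, 2), (2, 2), (3, 1)]   # placeholder layout
    root = 0

    def cost(i, j):
        (xi, yi), (xj, yj) = positions[i], positions[j]
        return (xi - xj) ** 2 + (yi - yj) ** 2

    def shortest_path_tree(root):
        n = len(positions)
        dist = [float('inf')] * n
        parent = [None] * n
        dist[root] = 0
        heap = [(0, root)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue                   # stale queue entry
            for v in range(n):             # complete graph: try every other node
                if v == u:
                    continue
                nd = d + cost(u, v)
                if nd < dist[v]:
                    dist[v] = nd
                    parent[v] = u
                    heapq.heappush(heap, (nd, v))
        return parent                      # parent[v] = v's next hop toward the root

    print(shortest_path_tree(root))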


Without taking the battery constraint into account, what you're looking for is the Shortest Path Spanning Tree, which is similar to the Minimum Spanning Tree except with a different "cost" function. (You can just run Dijkstra's Algorithm to calculate the Shortest Path Spanning Tree, since the cost seems to always be positive.)

This does not take the battery constraint into account, though. In that case (and I'm not quite sure what it is that you're trying to minimize first), you might want to look into min-cost max-flow. However, this will optimize (maximize) the "flow" first, before minimizing the cost. This might or might not be ideal.

To create the vertex constraint (each node can handle at most k relayed messages on top of its own), you just need to create a second graph G_1 where you add an additional vertex u_i for each v_i, with the capacity of the edge from v_i to u_i set to whatever your constraint is, in this case k+1, at cost 0. Then, for each edge (a,b) in the original graph G, there will be an edge (u_a,v_b) in G_1. In effect, you're creating a second layer of vertices that constrains the flow through each node to k. (Special-case the root, unless you also want a message constraint on the root.)

The standard max-flow solution on G_1 should suffice: a source that points to each vertex with an edge of capacity 1, and a sink connected to the root. There is a solution for G_1 (depending on k) if the max flow of G_1 is N, the number of vertices.
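If I'm reading the construction right, a sketch of G_1 with networkx (assuming networkx is available; k, the coordinates, and the decision to count only the non-root senders in the feasibility check are my own choices for the example):

    import networkx as nx

    positions = [(0, 0), (0, 1), (1, 2), (2, 2), (3, 1)]   # placeholder layout
    root, k = 0, 2                       # k = relayed messages allowed per node
    n = len(positions)

    def cost(i, j):
        (xi, yi), (xj, yj) = positions[i], positions[j]
        return (xi - xj) ** 2 + (yi - yj) ** 2

    G1 = nx.DiGraph()
    for i in range(n):
        if i == root:
            continue
        # Split non-root vertex i into an "in" copy v_i and an "out" copy u_i;
        # the internal edge caps i's transmissions at k + 1 (its own + k relayed).
        G1.add_edge(('v', i), ('u', i), capacity=k + 1, weight=0)
        # The source injects each node's own message, one per cycle.
        G1.add_edge('source', ('v', i), capacity=1, weight=0)

    for a in range(n):
        for b in range(n):
            if a == b or a == root:
                continue
            head = 'sink' if b == root else ('v', b)
            # Forwarding edge (u_a, v_b), costed by squared distance.
            G1.add_edge(('u', a), head, capacity=n, weight=cost(a, b))

    flow_value, flow = nx.maximum_flow(G1, 'source', 'sink')
    print(flow_value == n - 1)           # every non-root node got its message out
    # nx.max_flow_min_cost(G1, 'source', 'sink') would additionally minimize the
    # total squared-distance cost among all feasible routings.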


This is not just a minimum spanning tree, because the weight of each edge depends on how the other edges are chosen. Also, you need to minimize not the sum of the weights but the maximum load on a single node, which is the cost of its outgoing edge multiplied by the number of messages it relays plus one.

Each node will have to transmit a number of messages, but if you route messages from outer nodes through inner nodes, the inner nodes will transmit a higher number of messages. In order to equalize the battery drain over all nodes, the inner nodes will have to use much shorter connections than the outer nodes; I suspect that this dependency on the distance from the root is exponential.

In your examples, it is not so clear whether the left one is more efficient by the measure you gave (maximum number of messages), because while the node at (1,2) does have less energy consumption, the one at (0,1) doubles its output.

I believe that you have to start with some tree (e.g. the one formed by having each node transmit directly to the root node) and then do a number of optimization steps.

The optimization might be possible deterministically or through a stochastic method like simulated annealing or a genetic algorithm.

A deterministic method would perhaps try to find an improvement for the current worst node, such that the new node weights are smaller than the current maximum node weight. It is difficult to do this in such a way that the result is the global optimum.

Simulated annealing would mean to change a number of nodes' targets at each step, but this might be hampered by the fact that the "goodness" of a configuration is determined by its worst node. You would need to make sure that the worst node is sufficiently often affected in the candidate children, which might be difficult when the temperature drops.

In a genetic algorithm, you would design the "genome" as a mapping of each node to its target node. A point mutation would consist of changing one node's target to a random node (but only the root and nodes that are closer to the sender than the root is should be considered).
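A rough sketch of that genome and mutation, using plain hill climbing rather than a full genetic algorithm (the coordinates, battery budget, and iteration count are placeholders; the fitness is the cycles-until-first-failure measure from the question):

    import random

    positions = [(0, 0), (0, 1), (1, 2), (2, 2), (3, 1)]   # placeholder layout
    battery, root = 100.0, 0
    n = len(positions)

    def cost(i, j):
        (xi, yi), (xj, yj) = positions[i], positions[j]
        return (xi - xj) ** 2 + (yi - yj) ** 2

    def lifetime(parent):
        # Cycles until the most heavily drained node dies; 0 for cyclic genomes.
        sends = [0] * n
        for i in range(n):
            if i == root:
                continue
            node, hops = i, 0
            while node != root:
                sends[node] += 1
                node = parent[node]
                hops += 1
                if hops > n:             # not a tree: reject
                    return 0.0
        drain = max(sends[i] * cost(i, parent[i]) for i in range(n) if i != root)
        return battery / drain

    def mutate(parent):
        child = list(parent)
        i = random.choice([v for v in range(n) if v != root])
        # Retarget i to the root or to a node closer to i than the root is.
        candidates = [root] + [v for v in range(n)
                               if v not in (i, root) and cost(i, v) < cost(i, root)]
        child[i] = random.choice(candidates)
        return child

    # (1+1)-style search, starting from "everyone transmits directly to the root".
    best = [None if i == root else root for i in range(n)]
    best_fit = lifetime(best)
    for _ in range(5000):
        cand = mutate(best)
        fit = lifetime(cand)
        if fit >= best_fit:
            best, best_fit = cand, fit
    print(best, best_fit)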


You can try formulating the problem as a minimum-cost maximum-flow problem. Just some ideas:

Create an additional dummy node as the source, and connect edges of zero cost and capacity 1 from this node to every non-root node. Then set the root as the sink, and set all edge costs as you want (the square of the Euclidean distance, in this case).

If you want to also account for energy efficiency, you can try to add a weight for it into the edge costs going into each node. I'm not sure how else you can do it, since you're trying to optimize two objectives (cost of message sending and energy efficiency) at the same time.
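For what it's worth, a sketch of that construction with networkx's max_flow_min_cost (assuming networkx is available and integer coordinates, so the squared-distance weights stay integral; the layout is made up):

    import networkx as nx

    positions = [(0, 0), (0, 1), (1, 2), (2, 2), (3, 1)]   # placeholder layout
    root = 0
    n = len(positions)

    def cost(i, j):
        (xi, yi), (xj, yj) = positions[i], positions[j]
        return (xi - xj) ** 2 + (yi - yj) ** 2

    G = nx.DiGraph()
    for i in range(n):
        if i == root:
            continue
        # Dummy source injects one message per non-root node, at zero cost.
        G.add_edge('source', i, capacity=1, weight=0)
        for j in range(n):
            if j != i:
                # Forwarding edges, costed by squared distance.
                G.add_edge(i, j, capacity=n, weight=cost(i, j))

    flow = nx.max_flow_min_cost(G, 'source', root)
    print(nx.cost_of_flow(G, flow))      # total squared-distance cost of the routing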


I'm wondering if you are using a dynamic wireless sensor network (composed of Telos sensors, for instance)? If this is the case, you're going to want to develop a distributed min-distance protocol rather than something more monolithic like Dijkstra.

I believe you can use some principles from an AODV protocol (http://moment.cs.ucsb.edu/AODV/aodv.html), but keep in mind that you'll need to augment the metric somehow. Hop count has a lot to do with energy consumption, but at the same time, you need to keep in mind how much power is used to transmit a message. A starting point for a metric might be the sum of the power used at each node along a given path. When your code is setting the network up, you simply keep track of the path cost for a given "direction" of routing and let the distributed protocol do the rest at each node.

This gives you the flexibility to toss a bunch of sensors in the air and wherever they land the network will converge on the optimal energy configuration for message passing.
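As a toy illustration of that metric, each route request could carry the energy cost accumulated so far instead of a hop count (the message fields and callbacks below are invented for the sketch):

    # A node re-broadcasts a route request only if the accumulated energy cost
    # beats the best route it has seen so far for that origin.
    best_cost = {}   # (node_id, origin) -> cheapest accumulated cost seen

    def handle_route_request(node_id, rreq, transmit_cost, rebroadcast):
        # rreq is a dict like {'origin': ..., 'cost_so_far': ...};
        # transmit_cost(node_id) is this node's per-message energy cost.
        new_cost = rreq['cost_so_far'] + transmit_cost(node_id)
        key = (node_id, rreq['origin'])
        if new_cost < best_cost.get(key, float('inf')):
            best_cost[key] = new_cost
            rebroadcast(node_id, {'origin': rreq['origin'], 'cost_so_far': new_cost})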


Have you considered using a directed acyclic graph instead of a tree? In other words, each node has multiple "parents" that it can send messages to -- the acyclic requirement ensures that all messages eventually arrive. I ask because it sounds like you have a wireless network and because there's an efficient approach to computing the optimum solution.

The approach is linear programming. Let r be the index of the root node. For nodes i, j, let cij be the energy cost of sending a message from i to j. Let xij be a variable that will represent the number of messages sent by node i to node j in each time step. Let z be a variable that will represent the maximum rate of energy consumption at each node.

The linear program is

minimize z
subject to
# the right hand side is the rate of energy consumption by i
(for all i) z >= sum over all j of cij * xij
# every node other than the root sends one more message than it receives
(for all i != r) sum over all j of xij == 1 + sum over all j of xji
# every link has nonnegative utilization
(for all i, j) xij >= 0

You can write code that generates this LP in something very much like this format, whereupon it can be solved by an off-the-shelf LP solver (e.g., the free GLPK).
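For instance, a sketch that builds the LP above and hands it to scipy's linprog instead of GLPK (assuming scipy and numpy are available; the node layout is a placeholder):

    import numpy as np
    from scipy.optimize import linprog

    positions = [(0, 0), (0, 1), (1, 2), (2, 2), (3, 1)]   # placeholder layout
    r = 0                                                  # root index
    n = len(positions)

    def c(i, j):
        (xi, yi), (xj, yj) = positions[i], positions[j]
        return (xi - xj) ** 2 + (yi - yj) ** 2

    # Variables: x_ij for all ordered pairs i != j, plus z at the end.
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    idx = {p: k for k, p in enumerate(pairs)}
    nvar = len(pairs) + 1
    z = nvar - 1                          # index of the variable z

    obj = np.zeros(nvar)                  # minimize z
    obj[z] = 1.0

    # z >= sum over j of c_ij * x_ij, written as sum_j c_ij x_ij - z <= 0
    A_ub = np.zeros((n, nvar))
    for (i, j), k in idx.items():
        A_ub[i, k] = c(i, j)
    A_ub[:, z] = -1.0
    b_ub = np.zeros(n)

    # sum_j x_ij - sum_j x_ji == 1 for all i != r
    A_eq = np.zeros((n - 1, nvar))
    b_eq = np.ones(n - 1)
    row = 0
    for i in range(n):
        if i == r:
            continue
        for (a, b), k in idx.items():
            if a == i:
                A_eq[row, k] += 1.0
            if b == i:
                A_eq[row, k] -= 1.0
        row += 1

    # x_ij >= 0 and z >= 0 are linprog's default variable bounds.
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method='highs')
    print(res.fun)                        # optimal worst-case energy drain per cycle
    for (i, j), k in idx.items():
        if res.x[k] > 1e-9:
            print(f'x[{i}][{j}] = {res.x[k]:.3f}')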

There are a couple of features of the LP worth mentioning. First, it may seem odd that we haven't "forced" messages to go to the root. It turns out that as long as the cij constants are positive, it just wastes energy to send messages in cycles, so there's no point. This also ensures that the directed graph we've constructed is acyclic.

Second, the xij variables are in general not integers. How do we send half a message? One possible solution is randomization: if M is the total rate of messages sent by node i, then node i sends each message to node j with probability xij/M. This ensures that the averages work out over time. Another alternative is to use some sort of weighted round-robin scheme.
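As a small illustration of the randomized option (x here is a made-up fractional solution in the shape the LP would produce):

    import random

    # x[i][j] is node i's rate of messages to j from the LP; each outgoing
    # message picks its next hop j with probability x[i][j] / M, where M is
    # i's total outgoing rate. random.choices normalizes the weights itself.
    x = {1: {0: 0.5, 2: 0.5}, 2: {0: 1.5}}   # hypothetical fractional solution

    def pick_next_hop(i):
        targets = list(x[i].keys())
        rates = list(x[i].values())
        return random.choices(targets, weights=rates, k=1)[0]

    print(pick_next_hop(1))   # node 0 or node 2, each with probability 1/2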


minimum spanning tree? http://en.wikipedia.org/wiki/Minimum_spanning_tree


I worked on a similar problem, but with wireless sensors. We used PEGASIS (Power-Efficient Gathering in Sensor Information Systems), which is an energy-efficient protocol. http://www.mast.queensu.ca/~math484/projects/pastprojects/2005/presentation05_Yi_Wei.ppt http://www.technion.ac.il/~es/Professional/Routing_Protocols_for_Sensor_Networks.ppt
