Notes on Analytical Framework for 21st Century Strategy, Networks
The next few posts will consist of notes from various disciplines, along with links to primary sources.
First up, Emergence of Scaling in Random Networks. Notes:
On usage: Nodes=players=vertices.
New vertices (nodes) attach preferentially to sites that are already well connected.
Implication: An aspiring terrorist will “connect” to a node that is already well-established and widely used – e.g. a popular website, a noted Imam, etc. This means our best technique would be to not destroy the well-connected nodes, but to appropriate them and use them to attract and collect would-be additions to the network. Only the business end can hurt us.
Development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of individual systems.
How do we analyze networks?
1. Connectivity in real networks is not random and does not follow a Poisson distribution.
2. The expectation when studying the WWW was a random network of links. What was found instead was a power-law, not a Poisson, distribution: the degree histogram falls off roughly as k^-2.
3. The US highway system is close to a random, relatively uniform network: you won’t find a major city with 200 highways connected to it, and you won’t find one with zero.
4. The power-law distribution of the WWW is much closer to the airline network: many very small nodes and a few major hubs.
Hubs hold the network together and “dominate the way you navigate.” You never see hubs in random networks.
This is a scale-free network. In a random network there is a typical node; in a scale-free network there are no typical nodes. Even the average node is not typical: there is no intrinsic scale.
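To make the contrast concrete, a rough sketch (my own illustration, not from the source) in Python using the networkx library; the graph sizes and parameters are arbitrary assumptions. Both graphs have roughly the same average degree, but only the scale-free one grows a dominant hub.

import networkx as nx

n = 10_000
random_g = nx.erdos_renyi_graph(n, p=4 / n)       # random graph, average degree ~4
scale_free_g = nx.barabasi_albert_graph(n, m=2)   # scale-free graph, average degree ~4

for name, g in [("random", random_g), ("scale-free", scale_free_g)]:
    degrees = [d for _, d in g.degree()]
    print(f"{name}: mean degree = {sum(degrees) / len(degrees):.1f}, "
          f"max degree = {max(degrees)}")

# The mean degrees match, but the largest hub in the scale-free graph is
# typically an order of magnitude larger than anything in the random graph.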
A map of the Internet: every node corresponds to a router, every link to a physical line. Most nodes sit on the periphery, but a few major hubs hold the network together.
Social network: a Swedish sexual network, sexual connections within the society. This was also scale-free: most people had 1-10 partners, but a few had thousands. The people with thousands of connections were the hubs, and they sustained the network along which STDs spread.
In the cell, most molecules participate in only one or two reactions (connections). However, some, like water and ATP, participate in a large number, and those hold the entire network together.
Why? Why do all these networks have the same properties even when their elements are drastically different?
The number of nodes in a network is never fixed; networks continuously add vertices. The Internet didn’t pop into existence fully formed, it grew organically. (Is it possible for an original hub to decay and die out? Not if the rich-get-richer rule below is satisfied. A hub dies only through system criticality or environmental change; left in homeostasis, the hub will continue to expand.)
New nodes do not connect at random; they prefer to connect to highly connected nodes. Knowledge of the network is biased toward the well-connected. Preferential attachment: the probability that a new node connects to an existing node with ‘k’ links grows with ‘k’. If we want to model a network, we have to incorporate both the growth rule and preferential attachment.
Why scale-free, why hubs? The more-connected node grows faster than the less-connected one: as each new node enters the network, the more-connected nodes exert a stronger probabilistic gravity on it.
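A minimal from-scratch sketch of that growth-plus-preferential-attachment rule; the function name and parameters are my own, and the repeated-node list is simply one convenient way to make the attachment probability proportional to degree.

import random

def grow_network(n_nodes, links_per_new_node=2):
    # Each new node attaches to existing nodes with probability
    # proportional to their current degree (rich get richer).
    edges = [(0, 1)]             # seed network: two connected nodes
    pool = [0, 1]                # each node appears once per link it holds
    for new in range(2, n_nodes):
        targets = set()
        while len(targets) < min(links_per_new_node, new):
            targets.add(random.choice(pool))   # degree-biased choice
        for t in targets:
            edges.append((new, t))
            pool.extend([new, t])
    return edges

degree = {}
for a, b in grow_network(10_000):
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
print("max degree:", max(degree.values()))   # a handful of hubs dominate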
Gene duplication is an error in copying: occasionally a gene will be copied twice.
If you are a very highly connected protein, there is a good chance that one of the proteins you are connected to will be duplicated. When that happens, you are connected to two copies instead of one, and your connectivity grows.
This is how scale-free networks emerge in all organisms.
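A bare-bones sketch of that duplication argument (my own model and parameters, offered only as an illustration): each step copies a randomly chosen node along with its links, so an existing node’s chance of gaining a link is proportional to how many neighbors it already has, which is preferential attachment in disguise.

import random

graph = {0: {1}, 1: {0}}                   # adjacency: node -> set of neighbors
for new in range(2, 5_000):
    parent = random.choice(list(graph))    # pick a random node to "duplicate"
    graph[new] = set(graph[parent])        # the copy inherits the parent's links
    for neighbor in graph[parent]:
        graph[neighbor].add(new)           # each of those neighbors gains a link

print("max degree:", max(len(nbrs) for nbrs in graph.values()))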
What are the consequences?
One place where it matters is robustness. Complex systems are very good at maintaining basic functions when one of their components breaks down; they have an amazing capacity to keep working in the face of many errors. A random network has a critical point past which it breaks apart. A very large scale-free network, by contrast, can lose 80 percent of its nodes at random and the remaining 20 percent still talk to each other.
However, it has an Achilles heel. It is very robust against random error, but very fragile to attack, where the largest node is taken out, then the second largest, and so on. That is how you destroy them.
Highly connected nodes are more essential, and therefore their removal is more lethal.
Implication: Killing terrorists is like a random removal of nodes. The network will be very robust and be able to handle a large-scale removal of this type (up to about 80 percent). However, if a targeted attack on the top nodes began, the network would fall apart very quickly.
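A rough simulation of that difference (my own sketch, assuming networkx and a Barabasi-Albert graph as a stand-in for the real network): remove 5 percent of nodes at random versus the top 5 percent of hubs, and compare how much of the giant component survives.

import random
import networkx as nx

g = nx.barabasi_albert_graph(5_000, 2)
n_total = g.number_of_nodes()

def giant_component_fraction(graph):
    # Fraction of the original nodes still in the largest connected piece.
    if graph.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(graph)) / n_total

k = int(0.05 * n_total)                              # 5 percent of the nodes

random_failure = g.copy()
random_failure.remove_nodes_from(random.sample(list(g.nodes()), k))

targeted_attack = g.copy()
hubs = sorted(g.nodes(), key=lambda v: g.degree(v), reverse=True)[:k]
targeted_attack.remove_nodes_from(hubs)

print("after random failure: ", giant_component_fraction(random_failure))
print("after targeted attack:", giant_component_fraction(targeted_attack))
# Random removal barely dents the giant component; removing the same number
# of top hubs typically shrinks it far more.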
Networks are fundamentally formed of modules: groups of nodes that connect very tightly to one another, with a few links between groups. Social networks are great examples of this. Links within modules are much denser than links between modules. How can you have hubs that connect to everybody and still have modularity?
The clustering coefficient tells how well the nodes connected to ‘me’ know each other. For a hub, the clustering coefficient is small, because it connects groups that do not know one another. A hierarchy of clustering coefficients is present in biological and social networks.
Small nodes have high clustering coefficients; hubs have low ones. Communications patterns should tell you which nodes are hubs and which are modules with high clustering coefficients.
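A toy example of the clustering coefficient (mine, not from the notes), using networkx: a node inside a tight module scores high because its neighbors all know each other, while a hub bridging two modules scores low.

import networkx as nx

g = nx.Graph()
g.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3")])      # tight module A
g.add_edges_from([("b1", "b2"), ("b2", "b3"), ("b1", "b3")])      # tight module B
g.add_edges_from([("hub", n) for n in ["a1", "a2", "b1", "b2"]])  # hub bridging both

print(nx.clustering(g, "a3"))    # 1.0   -- its neighbors all know each other
print(nx.clustering(g, "hub"))   # ~0.33 -- it links groups that don't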
What about the strength of connections? How are they distributed, and how do they affect network structure? In the cell, the strength of a reaction is its flux: how many molecules are produced. (In the flux maps, blue is cold, red is hot.) In a social network, strength is a function of how many messages are sent, how many times a connection is used. The majority of connections are weak; a few are strong. You have a few friends you see often and many friends you see rarely. If you want information about job opportunities, you have to reach out through weak links, because they connect you to nodes you don’t know about.
How do we look at the way the cell adapts to different environments? You have a different connectivity pattern during the summer than during the school year. If you change the environment, the fluxes change. Flux elasticity: certain reactions turn on and certain reactions turn off. So we must take connective potential into account in addition to real-time connections.
Is there any reaction that has to be active no matter what environment the network is in? There is indeed a “core”: a group of reactions that are always active no matter where you put the cell. In a random network, the number of reactions that are always on should approach zero as you increase the number of environments. In reality there is saturation, which means that a core group develops.
The larger the network, the smaller the core, a collective network effect.
Universality of networks at highest organizational abstraction -- topologically they behave the same.
Universality decreases as organism specificity increases (the differences between WWW, Cell, etc. -- properties that are specific to an organism cannot be universal). The way information is stored is specific (DNA, bits, etc.), the way it is processed is shared (topological network properties are universal).