Abstract
We explore the block nature of the matrix representation of multiplex networks, introducing a new formalism to deal with its spectral properties as a function of the inter-layer coupling parameter. This approach allows us to derive interesting results based on an interpretation of the traditional eigenvalue problem. Specifically, our formalism is based on reducing the dimensionality of the matrix of interest while increasing the power of the characteristic polynomial, i.e., a polynomial eigenvalue problem. This approach may sound counterintuitive at first, but it enables us to relate the quadratic eigenvalue problem for a 2-layer multiplex network with the spectrum of its respective aggregated network. Additionally, it allows us to derive bounds for the spectrum, among many other analytical insights, and to directly obtain analytical and numerical insights on the eigenvalue behavior as a function of the coupling between layers. Our study includes the supra-adjacency, supra-Laplacian and probability transition matrices, which enables us to put our results under the perspective of structural phases in multiplex networks. We believe that this formalism and the results reported here will make it possible to derive new results for multiplex networks in the future.

Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
1. Introduction
Complex network theory has become one of the main tools for the analysis of complex systems, allowing the representation of a wide range of systems composed of interacting discrete elements [1]. However, real-life systems are often also organized in layers, which represent different channels of interaction. In order to incorporate these characteristics, one should work with multilayer networks, which allow for a proper representation of multiplex and interconnected systems [2–4]. The introduction of this extra level of complexity also imposes new challenges on the analysis of structural and dynamical properties. Furthermore, a key element in the analysis of networks is their spectral properties [5]. In fact, they play an important role in explaining the connection between structure and dynamics. For instance, in epidemic spreading, the critical point below which the infection prevalence is null is predicted to be the inverse of the leading eigenvalue of the adjacency matrix, in both single [6] and multiplex [7] networks. Additionally, the nature of the spectrum also seems to be connected to those properties [7–9]. Although the literature on the spectra of single-layer networks is well developed [5], the theory of spectral properties of multiplex networks is still in its infancy. This motivates us to propose a different formalism aimed at filling this gap.
In this paper, we will consider the matrix representation of multiplex networks, constraining ourselves to finite matrices. First of all, we are interested in weighting differently inter and intra-layer edges. This implies that those matrices will also be a function of the inter-layer coupling parameter, here called p. Consequently, the associated eigensystem will be a function of that same parameter. Additionally, the matrix approach is especially interesting in this context since it allows us to directly use linear algebra and spectral graph results already available.
When varying the coupling parameter, a multiplex system might present different structural phases, which are characterized in terms of eigenvalue crossings and eigengaps and are intuitively defined as: (i) decoupled phase, for small values of p, where the layers are virtually decoupled and act by themselves, with a negligible interaction between layers, (ii) multiplex/multilayer phase, where the system is coupled and the intra-layer edges play an important role and (iii) the aggregate network phase, where the system behaves as the superposition of all layers. It is clear that a good understanding of the eigenvalues' behavior might be useful since we could move our system into different structural regimes, aiming at different goals such as improved robustness, better performance regarding diffusion or spreading, among many other possible applications.
The different structural phases above are related through the interlacing properties of quotient graphs [10, 11]. More specifically, in [10] the authors showed that the spectra of the different scales of a multiplex network (the aggregated network, the network of layers and the individual layers) characterize the three phases. In practical terms, the interlacing provides us with bounds for the spectra [10, 11], but it also emphasizes that the different scales are intrinsically connected. Indeed, it is impossible to tune the leading eigenvalue of the network of layers without also increasing the leading eigenvalue of the whole multiplex. Furthermore, in [12] the authors characterized multiple topological scales using the supra-Laplacian matrix, more specifically by analyzing eigengaps.
Following a different approach, in [13] the author evaluated the normalized Laplacian matrix, which, in fact, shares the same set of eigenvalues as the probability transition matrix (see section 4.3), and proposed a similar classification. However, it is worth mentioning that in [13] a different nomenclature was used and a fourth phase was defined. Namely, the proposed structural regimes were: (i) a bipartite phase, (ii) a decoupled phase, (iii) an indistinguishable phase, where the author argues that the system is topologically and dynamically indistinguishable [13], and (iv) a mixed phase (called BD in [13], for bipartite and decoupled), where the layers are structurally and dynamically distinguishable. Although here we do not make a distinction between regimes (iii) and (iv), considering both as multiplex regimes, we acknowledge the differences pointed out in [13]. It is also noteworthy that [13] considered structural correlations in the analysis, which is a key ingredient for the reported results. Here, on the other hand, we focus on uncorrelated networks and a multiplex structure.
The paper is organized as follows. In section 2, we present the polynomial eigenvalue formalism, giving its general definitions and properties in section 2.1. Next, we formalize the 2-layer problem as a quadratic eigenvalue problem in section 3, analytically exploring its behavior as a function of the coupling parameter in section 3.1, obtaining some bounds in section 3.2, and discussing the simplified symmetric problem in section 3.3. We present our main applications in section 4, where we explore the supra-Laplacian matrix in section 4.1, the supra-adjacency matrix in section 4.2 and the probability transition matrix in section 4.3. To round off the paper, we discuss the physical consequences of our findings and summarize our main results and perspectives in section 5.
2. Polynomial eigenvalue problem
In this section, we formally define the polynomial eigenvalue problem and present some of its fundamental properties. The aim of this section is to generically define our main mathematical object, establishing its basic properties. Thus, this will allow us to properly study the matrices associated with 2-Layer multiplex networks, which will naturally appear as a consequence of a simple manipulation of a linear system describing the network. From this simple approach, we expect to provide a different perspective on the spectral properties of multiplex networks.
2.1. General definition and properties
A matrix polynomial of order $l$ is a matrix-valued function of a complex variable of the form [14]

$$P(\lambda) = \lambda^l A_l + \lambda^{l-1} A_{l-1} + \cdots + \lambda A_1 + A_0,$$

where the $A_i$ are $n \times n$ matrices, called the coefficient matrices. If $A_l = I$, the identity matrix, the polynomial is said to be monic. The eigenvalues of $P(\lambda)$ are the solutions of the characteristic equation

$$\det\left( P(\lambda) \right) = 0.$$

Therefore, right and left eigenvectors are defined as

$$P(\lambda) x = 0, \qquad y^{*} P(\lambda) = 0,$$

where $x$ and $y$ are the right and left eigenvectors associated to the eigenvalue $\lambda$. The problem reduces to the standard eigenvalue problem when $P(\lambda)$ is a monic matrix polynomial with $l = 1$ and $A_0 = -A$, for any matrix $A$. A generalization of the Jordan form theory to a general matrix polynomial is possible and is briefly presented in the appendix.
A particular case of general interest is the quadratic eigenvalue problem (QEP), which is directly related to the 2-layer case of interest in this work. A quadratic matrix polynomial can be written as [14–16]

$$Q(\lambda) = \lambda^2 A_2 + \lambda A_1 + A_0.$$

Besides, without loss of generality, we assume that the eigenvectors have unit norm. However, observe that if the right eigenvector $x$ is normalized, the left eigenvector $y$ in general is not. Therefore, here we assume that $x$ is normalized, which simplifies the equations.

Furthermore, if the coefficient matrices $A_2$, $A_1$ and $A_0$ are Hermitian (together with additional definiteness conditions), a special class of problems, called the hyperbolic quadratic eigenvalue problem (HQEP) [16], is obtained. Unfortunately, most of the problems in our context do not fall in this class.
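As a concrete complement to these definitions, the following sketch (our own illustration, not part of the original formulation; all matrix names are arbitrary examples) solves a monic QEP numerically via the standard companion linearization, which converts the $n$-dimensional quadratic problem into a $2n$-dimensional linear one.

```python
import numpy as np

# Solve a monic QEP  Q(lam) x = (lam^2 I + lam A1 + A0) x = 0  via the
# standard companion linearization: the n-dimensional quadratic problem
# becomes a 2n-dimensional linear eigenvalue problem.
rng = np.random.default_rng(0)
n = 4
A1 = rng.standard_normal((n, n))  # arbitrary example coefficients
A0 = rng.standard_normal((n, n))

# Companion matrix acting on the stacked vector (x, lam * x).
C = np.block([[np.zeros((n, n)), np.eye(n)],
              [-A0, -A1]])
eigvals = np.linalg.eigvals(C)  # the 2n QEP eigenvalues

# Sanity check: each eigenvalue makes Q(lam) singular, i.e. the smallest
# singular value of Q(lam) is numerically zero.
for lam in eigvals:
    Q = lam**2 * np.eye(n) + lam * A1 + A0
    assert np.linalg.svd(Q, compute_uv=False)[-1] < 1e-8
```

The same linearization device underlies general-purpose polynomial eigensolvers; for $l > 2$ the companion matrix simply gains more block rows.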
3. A general 2-Layer case: a block matricial problem
The general form of any matrix associated with a multiplex network composed of two layers (supra-adjacency, supra-Laplacian and transition matrices, for example) can be written as a block matrix. The resulting eigenvalue problem is the following

$$\begin{pmatrix} M_1 & C \\ C & M_2 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \lambda \begin{pmatrix} v_1 \\ v_2 \end{pmatrix},$$

where $M_1$ and $M_2$ are the intra-layer blocks and $C$ is the inter-layer coupling matrix. Recognizing it as a system of equations and isolating $v_1$ in the second row, we have

$$v_1 = C^{-1} \left( \lambda I - M_2 \right) v_2.$$

Thus, inserting it in the first row we have

$$\left[ \lambda^2 C^{-1} - \lambda \left( C^{-1} M_2 + M_1 C^{-1} \right) + M_1 C^{-1} M_2 - C \right] v_2 = 0.$$

This expression defines a QEP, whose coefficient matrices are

$$A_2 = C^{-1}, \qquad A_1 = -\left( C^{-1} M_2 + M_1 C^{-1} \right), \qquad A_0 = M_1 C^{-1} M_2 - C,$$

which poses a restriction on the inter-layer coupling matrix $C$, i.e., it must be invertible. Additionally, in our context, exchanging $M_1$ and $M_2$ does not change the system, nor its solutions. Note that this operation is equivalent to relabeling the layers. However, denoting by $Q(\lambda)$ the polynomial of the first case, for the second case (with symmetric blocks) the polynomial is simply $Q(\lambda)^T$. In this way we also establish a relationship between the right and left eigenvectors and these two possible configurations. Formally, it implies that $x = v_2$ and $y = v_1$. As usual, we consider coupling matrices that are functions of a coupling parameter, $p$, i.e., $C = C(p)$. In fact, throughout this paper we explore how the spectral properties of our network evolve as we change such a coupling parameter. As a constraint, we should mention that we only consider finite matrices.
Furthermore, note that $A_1$ in equation (9) is intimately related to the aggregated and the loop-less aggregated networks of the original multiplex network (for more, see [10] or section 2.3.2 of [17]). More specifically, if the coupling matrix is the identity matrix (or proportional to it), then $A_1$ is proportional to $-(M_1 + M_2)$, i.e., to the loop-less aggregated network. Besides, note that the network of layers of a two-layer multiplex is a simple path graph with two nodes.
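The elimination step described above can be checked numerically. The following sketch (our own illustration, assuming symmetric layer blocks M1, M2 and homogeneous coupling pI, for which the QEP becomes monic) verifies that the 2n roots of the resulting QEP coincide with the spectrum of the original supra matrix.

```python
import numpy as np

# For a two-layer supra matrix with symmetric blocks and coupling p*I,
# eliminating v1 yields the monic QEP
#   Q(lam) v2 = (lam^2 I - lam (M1 + M2) + M1 @ M2 - p^2 I) v2 = 0,
# whose 2n roots are exactly the eigenvalues of the 2n x 2n supra matrix.
rng = np.random.default_rng(1)
n, p = 5, 0.7
M1 = rng.standard_normal((n, n)); M1 = (M1 + M1.T) / 2
M2 = rng.standard_normal((n, n)); M2 = (M2 + M2.T) / 2

I = np.eye(n)
supra = np.block([[M1, p * I], [p * I, M2]])
supra_eigs = np.sort(np.linalg.eigvalsh(supra))

# Solve the QEP via companion linearization of the monic polynomial.
B1, B0 = -(M1 + M2), M1 @ M2 - p**2 * I
comp = np.block([[np.zeros((n, n)), I], [-B0, -B1]])
qep_eigs = np.sort(np.linalg.eigvals(comp).real)

assert np.allclose(supra_eigs, qep_eigs, atol=1e-7)
```

The agreement holds because the coupling $pI$ commutes with everything, so the block determinant factorizes exactly into the QEP determinant.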
3.1. Spectral analysis as a function of p
So far, we have made as few assumptions as possible. Now we restrict ourselves to multiplex networks, i.e. diagonal coupling matrices, and, additionally, we assume a linear function of the parameter p > 0, $C = p \hat{C}$, where $\hat{C}$ is a diagonal invertible matrix (this constraint will be relaxed later). Then, we can describe each eigenvalue of $Q(\lambda)$ by a scalar equation, defined as the product of $Q(\lambda)$ by its left and right eigenvectors. Formally,

$$y^T Q(\lambda) x = a \lambda^2 + b \lambda + c = 0,$$

where $a = y^T A_2 x$, $b = y^T A_1 x$ and $c = y^T A_0 x$. The solution of this equation is given by

$$\lambda = \frac{-b \pm \sqrt{\Delta}}{2a},$$
where $\Delta = b^2 - 4ac$. Note that for each pair of right and left eigenvectors we have two possible solutions, but just one of them is an eigenvalue of $Q(\lambda)$. Furthermore, differentiating equation (12) with respect to p, we obtain information on how the eigenvalues change as a function of p. Since $Q(\lambda) x = 0$ and $y^T Q(\lambda) = 0$, the terms involving the derivatives of the eigenvectors vanish, and implicit differentiation yields

$$\frac{d\lambda}{dp} = - \frac{y^T \frac{\partial Q(\lambda)}{\partial p} x}{y^T \left( 2\lambda A_2 + A_1 \right) x}.$$

Note that the eigenvalues and eigenvectors are also functions of p. By continuity, two different eigenvalues may cross each other when varying p. Observe that at non-crossing points the denominator does not vanish and the derivative is bounded. On the eigenvalue crossings, we have two eigenvectors associated to the same eigenvalue and, thus, two solutions for the derivatives.
In practical terms, this relation can be applied to drive a system through different structural or dynamical regimes. For example, considering the adjacency matrix, one can use this equation in order to choose an edge or set of edges to be removed (or re-weighted) so as to optimally reduce or increase the leading eigenvalue. As a consequence, the critical point of spreading processes, such as epidemic spreading, would also change. Obviously, the matrix under study depends on the process. Another application is to design a numerical method to follow the correct eigenvalues as a function of p in a problem that might present eigenvalue crossings [7, 18].
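The derivative relation can also be validated numerically. The sketch below (our own illustration, for symmetric layer blocks and homogeneous coupling pI, in which case only the coefficient c depends on p) compares the closed-form derivative of the leading eigenvalue with a finite-difference estimate.

```python
import numpy as np

# Check of the derivative formula for a QEP eigenvalue: with the scalar
# coefficients a = y.x, b = -y.(M1+M2).x, c = y.(M1 M2 - p^2 I).x, only c
# depends on p, and implicit differentiation of a*lam^2 + b*lam + c = 0
# gives  dlam/dp = -(dc/dp) / (2*lam*a + b).
rng = np.random.default_rng(2)
n, p = 4, 0.9
M1 = rng.standard_normal((n, n)); M1 = (M1 + M1.T) / 2  # symmetric layers
M2 = rng.standard_normal((n, n)); M2 = (M2 + M2.T) / 2

def supra(p):
    I = np.eye(n)
    return np.block([[M1, p * I], [p * I, M2]])

w, V = np.linalg.eigh(supra(p))
lam = w[-1]                        # track the leading eigenvalue
x, y = V[n:, -1], V[:n, -1]        # right (v2) and left (v1) QEP eigenvectors

num = 2 * p * (y @ x)              # -(dc/dp) = 2p * y.x
den = 2 * lam * (y @ x) - y @ (M1 + M2) @ x
deriv = num / den

# Compare with a central finite difference of the supra spectrum.
h = 1e-6
fd = (np.linalg.eigvalsh(supra(p + h))[-1]
      - np.linalg.eigvalsh(supra(p - h))[-1]) / (2 * h)
assert abs(deriv - fd) < 1e-4
```

The eigenvector-derivative terms drop out exactly because x and y annihilate Q(λ) from the right and left, which is what makes the closed form usable in practice.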
3.2. Bounds
If $x$ is an eigenvector (left or right), then $\|x\| = 1$ holds and we can use it to find bounds for equation (13). Dividing by the leading scalar coefficient, we get a monic polynomial; then, if we bound the remaining coefficients, both solutions of equation (13) will also be bounded. Thus, to bound those terms, we can use the numerical range of the matrices to which they are related. The numerical range is formally defined for any matrix $A$ as $F(A) = \{ x^{*} A x : \|x\| = 1 \}$. Additionally, $\sigma(A) \subseteq F(A)$, where $\sigma(A)$ is the set of eigenvalues of $A$. Additionally, if $A$ is a Hermitian matrix, $x^{*} A x$ is the Rayleigh quotient of $A$, which implies $\lambda_{\min}(A) \leq x^{*} A x \leq \lambda_{\max}(A)$. Finally, to bound non-Hermitian matrices we use the relation between the spectral norm and the numerical range, given as $r(A) \leq \|A\|_2 \leq 2 r(A)$, where $r(A)$ is the numerical radius, defined as $r(A) = \max \{ |z| : z \in F(A) \}$.
First, the term $b(x^T, x)$ is bounded, in general, by the numerical radius, $|b| \leq r(A_1)$; however, in many cases $A_1$ is a Hermitian matrix, which allows us to improve the bound to

$$\lambda_{\min}(A_1) \leq b(x^T, x) \leq \lambda_{\max}(A_1).$$

Interestingly, we remark that $A_1$ is often related to the aggregated network, connecting both scales of the same structure.
Next, we evaluate the discriminant $\Delta$. Firstly, we analyze the term $c$, observing that bounding it is equivalent to bounding the matrix that defines it. Secondly, we can factorize that matrix and, by defining an auxiliary matrix $\tilde{B}$, focus on the associated quadratic form instead of the initial definition, since both have the same bounds. In addition, since we are frequently interested in undirected networks and, consequently, symmetric matrices, we can also constrain the solutions, since we already know that the spectrum is real in this case. Therefore, we have
Observe that those bounds can be further improved when applied to the analysis of particular matrices (supra-adjacency, supra-Laplacian, and probability transition), since their particular properties also impose constraints on the solutions and can be exploited to tighten the bounds.
3.3. Comments on symmetric problems: HQEP
As previously mentioned, if the coefficient matrices are Hermitian, we have a special class of problems called HQEP [16]. The HQEP has interesting properties; for instance, if $x$ is a right eigenvector associated with the eigenvalue $\lambda$, then it is also a left eigenvector of the same eigenvalue [16]. In order to take advantage of those properties, one can interpret the original problem as an HQEP plus an asymmetric perturbation. The matrix polynomial defined by the matrix coefficients in equation (9) is, in general, asymmetric. However, a class of problems that arises naturally is defined by undirected layers and diagonal homogeneous coupling. In this case, the matrices $A_2$ and $A_1$ are Hermitian. Observe that $A_0$ still might be asymmetric. Nonetheless, we can use the Toeplitz decomposition [19] in order to analyze a symmetric and simplified problem. This decomposition states that any square matrix $A$ can be uniquely written as the sum of a Hermitian matrix, $H = (A + A^{*})/2$, and a skew-Hermitian matrix, $S = (A - A^{*})/2$, as $A = H + S$. This allows us to decompose $A_0 = H_0 + S_0$. In this way, we can re-write our original QEP into two parts: an HQEP, composed only of Hermitian matrices, and a skew-Hermitian matrix that can be interpreted as a perturbation. As a consequence of perturbation theory, the matrix $H_0$ of the HQEP is perturbed by $S_0$, and the norm of this perturbation goes to zero as the layers become more similar. From the Bauer–Fike theorem [19], we can write a quality function for the approximation of the perturbed matrix as

$$|\mu - \lambda| \leq \kappa(V) \, \|S_0\|,$$
where $\mu$ is an eigenvalue of the perturbed matrix $A_0 = H_0 + S_0$ (Hermitian, $H_0$, plus skew-Hermitian, $S_0$), $\lambda$ is an eigenvalue of $H_0$, and $\kappa(V)$ is the condition number of the eigenvector matrix $V$ of $H_0$ with respect to the matrix norm used. Considering the spectral norm, we have $\kappa_2(V) = \|V\|_2 \|V^{-1}\|_2$. If $\kappa(V)$ is near 1, small perturbations imply small changes in the eigenvalues. Conversely, large values suggest a poor approximation. Note that, although this analysis concerns only the matrix $A_0$ and not the whole QEP, it can serve as an estimate of the quality of the approximation.
In addition to the HQEP properties, the perturbation analysis also emphasizes an important multiplex property: the more similar the layers are, the closer to zero the norm of the perturbation is. Moreover, we also have another criterion, which is based on the commutativity of the layer matrices. Finally, observe the role of correlations in this approximation: if both layers are identical, they are obviously correlated and the problem is symmetric.
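A minimal sketch of this decomposition (with generic stand-in matrices, not the paper's data) is given below: the skew part of the product of two nearly identical layer matrices is small, exactly as the perturbation argument requires.

```python
import numpy as np

# Toeplitz decomposition of the (generally asymmetric) product term:
# A0 = H + S with Hermitian H and skew-Hermitian S; S vanishes when the
# two layer matrices commute (e.g. identical layers).
rng = np.random.default_rng(3)
n = 5
L1 = rng.random((n, n)); L1 = L1 @ L1.T        # stand-in layer matrix
delta = 0.05 * rng.standard_normal((n, n))
L2 = L1 + (delta + delta.T) / 2                 # a nearly identical layer

A0 = L1 @ L2                                    # product term of the QEP
H = (A0 + A0.T) / 2                             # Hermitian (here, symmetric) part
S = (A0 - A0.T) / 2                             # skew part: the perturbation
assert np.allclose(H + S, A0)

# The relative size of the perturbation is small because L1 ~ L2:
# A0 - A0.T equals the commutator [L1, L2], which is O(delta).
ratio = np.linalg.norm(S, 2) / np.linalg.norm(A0, 2)
assert ratio < 0.5
```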
3.4. Singular coupling matrices and the limit of sparse inter-layer coupling
So far we have assumed a node-aligned multiplex, i.e., a multiplex network in which each node has a counterpart in every layer [2], fulfilling the invertibility of the coupling matrix, which is a necessary condition to formally define the problem. Nevertheless, we can use a limiting procedure to obtain an approximation of the sparse coupling. Thus, note that equation (5) can be analyzed in two different steps, first calculating the limit of the decoupled edges and then the rest of the system. The first limit is analyzed as follows. From equation (5), the absent edges are factorized as
where the vanishing coupling entries have been factored out. Multiplying equation (21) by this factor and using the following limit
we have
where the higher-order term vanishes in the limit. Note that if all inter-layer edges are absent, both layers are decoupled. Therefore, the factorization of the polynomial equation yields the product of the characteristic polynomials of the layers, whose solutions are the union of the solutions of the standard eigenvalue problem of each layer. An important consequence is that the number of eigenvalues that do not change as a function of p is twice the number of nodes that do not have a counterpart in the other layer.
Equation (23) gives us the solutions for nodes without a counterpart in the other layer. To calculate the remaining solutions, we have to redefine the original problem in terms of the Moore–Penrose pseudoinverse, denoted by $A^{+}$ for a general matrix $A$. Thus, denoting the diagonal coupling matrix by $D$, we have $(D^{+})_{ii} = 1/D_{ii}$ if $D_{ii} \neq 0$ and $(D^{+})_{ii} = 0$ otherwise. Observe that the zeros of $D$ correspond to ones in $I - D D^{+}$, the projector onto the uncoupled coordinates. For simplicity, in the rest of the paper we assume that the coupling matrix is invertible; however, the strategy presented above can be applied in case it is not. From the computational viewpoint, the cost of calculating the whole spectrum over a range of p might be reduced, since we can separate it into two parts: (i) a constant subset and (ii) the remaining subset, which varies as a function of p.
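For illustration, the pseudoinverse of a singular diagonal coupling matrix behaves as described (a minimal sketch; the numerical values are arbitrary):

```python
import numpy as np

# Moore-Penrose pseudoinverse of a singular diagonal coupling matrix:
# nonzero weights are inverted, zero entries (uncoupled nodes) stay zero.
D = np.diag([2.0, 0.5, 0.0, 1.0])   # third node has no inter-layer edge
D_pinv = np.linalg.pinv(D)
assert np.allclose(np.diag(D_pinv), [0.5, 2.0, 0.0, 1.0])

# D @ D_pinv projects onto the coupled coordinates (it is not the identity).
assert np.allclose(np.diag(D @ D_pinv), [1.0, 1.0, 0.0, 1.0])
```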
4. Applications
We next apply our proposed formalism to study the supra-Laplacian and the supra-adjacency matrices. For the sake of completeness, let us explicitly define the supra-adjacency matrix in terms of its block matrices (the adjacency matrices of the individual layers). Formally, the supra-adjacency matrix is defined as

$$\bar{A} = \begin{pmatrix} A^{(1)} & p I \\ p I & A^{(2)} \end{pmatrix},$$

where we weight differently the intra- and inter-layer edges. The definition of the supra-Laplacian matrix is

$$\bar{L} = \begin{pmatrix} L^{(1)} + p I & -p I \\ -p I & L^{(2)} + p I \end{pmatrix},$$

where $L^{(1)}$ and $L^{(2)}$ are the Laplacian matrices of the individual layers, $L^{(k)} = D^{(k)} - A^{(k)}$, with $D^{(k)}$ a diagonal matrix whose elements are the node degrees. Many dynamical processes are described using those matrices, for instance synchronization of coupled oscillators and diffusion processes, while the supra-adjacency matrix is intimately related to epidemic and information spreading. It is also noteworthy that many structural metrics are directly extracted from the spectral properties of those matrices, for instance the communicability, which can be written as a matrix function, more specifically as the exponential of the adjacency or supra-adjacency matrix. Here we focus on the spectral properties of these matrices and on their behavior as a function of the coupling parameter p under different conditions.
In addition to the supra-adjacency and supra-Laplacian matrices, we also analyze the probability transition matrix in section 4.3, which can be used to describe classical random walks on networks. The analysis of such a matrix is left to the last section since it is mainly numerical. Note that the probability transition matrix has a well-bounded spectrum, with eigenvalues in $[-1, 1]$ [20]. This characteristic imposes an extra challenge on the derivation of the bounds. Although we could not improve those bounds, we report an interesting spectral behavior found numerically.
4.1. Supra-Laplacian matrix
The simplest supra-Laplacian matrix can be built considering a diagonal homogeneous coupling matrix, $pI$, where each node has a counterpart in the other layer. This implies that the QEP is defined with the following coefficient matrices

$$A_2 = I, \qquad A_1 = -\left( L^{(1)} + L^{(2)} + 2 p I \right), \qquad A_0 = L^{(1)} L^{(2)} + p \left( L^{(1)} + L^{(2)} \right).$$
It is noteworthy that the aggregated network, through $L^{(1)} + L^{(2)}$, appears naturally under this formalism. This is interesting since it is physically understandable. On the other hand, the term $L^{(1)} L^{(2)}$, in the definition of $A_0$, does not have such a direct interpretation. This system presents a structural transition, which can be directly derived from our formalism; the derivation is presented in section 4.1.1. Additionally, we can also obtain bounds for the spectra using the ideas discussed in section 3.2. Those improved bounds are derived in section 4.1.2, where we use the particular properties of a Laplacian matrix to improve our previous results. In section 4.1.3, we evaluate the spectra of the supra-Laplacian matrix as a function of p and also compare our previous results with sparse and heterogeneous couplings. Specifically, in the heterogeneous case we consider a diagonal coupling matrix with non-uniform weights. The analysis of the resulting QEP is not trivial, since its coefficient matrices are asymmetric. Instead, we can explore it numerically and compare it with the homogeneous case.
4.1.1. Structural transitions
First of all, we discuss the structural transition of the supra-Laplacian matrix, initially presented in [21]. Here we calculate the exact transition points using the QEP formulation, from which they can be easily derived. We remark that those transition points were also calculated in [18], however using two different methods: (i) eigenvalue sensitivity analysis and (ii) the Schur complement approach. Both derivations presented in [18] are quite involved, unlike our approach, where the solutions are obtained using simple arguments. Note that our approach yields a different expression from the one in [18]; although their equivalence has not been mathematically proven, it has been numerically verified.
To begin with, it is well known that $2p$ is an eigenvalue of the supra-Laplacian. Moreover, the crossing points are a consequence of the crossing between this eigenvalue and the bounded part of the supra-Laplacian spectrum, yielding the so-called structural transitions. Thus, from our defined QEP, we have that

$$\det\left( Q(2p) \right) = \det\left( L^{(1)} L^{(2)} - p \left( L^{(1)} + L^{(2)} \right) \right) = 0,$$

which has two possible families of solutions: (i) the direction of the constant vector, for which the equation is always satisfied, since the sum of the Laplacian matrices of the layers is the Laplacian of the aggregated network and, hence, also has a null determinant, and (ii) the solutions of $L^{(1)} L^{(2)} x = p \left( L^{(1)} + L^{(2)} \right) x$, which are the crossing points, or eigenvalues of multiplicity larger than one. Since this expression is also an eigenvalue problem, but in terms of p, we can express the crossing points as generalized eigenvalues of the pair $\left( L^{(1)} L^{(2)}, L^{(1)} + L^{(2)} \right)$. Note that there are N possible values of p that solve equation (29), each one representing one crossing. The first solution is trivial, at p = 0; the second is the so-called structural transition [21]. We also remark its relevance in some dynamical processes, as presented in [22]. As said before, this expression differs from the one previously presented in the literature; however, both give the same result.
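The crossing points can be computed numerically from this condition. The sketch below (our own illustration on two small random layers, using the generalized eigenvalue formulation stated above) finds the smallest positive crossing p* and verifies that 2p* is then an eigenvalue of the supra-Laplacian with multiplicity at least two.

```python
import numpy as np
from scipy.linalg import eig  # generalized eigenvalue solver

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(5)
n = 8
A1 = (rng.random((n, n)) < 0.4).astype(float)
A1 = np.triu(A1, 1); A1 = A1 + A1.T
A2 = (rng.random((n, n)) < 0.4).astype(float)
A2 = np.triu(A2, 1); A2 = A2 + A2.T
L1, L2 = laplacian(A1), laplacian(A2)

# Candidate crossings: generalized eigenvalues p of the pencil
# (L1 @ L2, L1 + L2); indeterminate (nan/inf) values coming from the
# shared null direction are discarded, as are complex pairs.
pv = eig(L1 @ L2, L1 + L2, right=False)
pv = pv[np.isfinite(pv)]
pv = np.sort(pv[np.abs(pv.imag) < 1e-8].real)
pstar = pv[pv > 1e-9][0]  # smallest positive crossing

# At p = pstar, 2 * pstar must be an eigenvalue of the supra-Laplacian
# with multiplicity at least two (the crossing itself).
I = np.eye(n)
supra = np.block([[L1 + pstar * I, -pstar * I],
                  [-pstar * I, L2 + pstar * I]])
w = np.linalg.eigvalsh(supra)
assert np.sum(np.isclose(w, 2 * pstar, atol=1e-6)) >= 2
```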
4.1.2. Bounds
Here, exploiting the ideas presented in section 3.2, we improve the former bounds using specific Laplacian properties, such as positive semi-definiteness. Thus, the QEP of the supra-Laplacian can be bounded using the individual bounds of the aggregated Laplacian $L^{(1)} + L^{(2)}$, which is a positive semi-definite Hermitian matrix, leading to
Additionally, the discriminant function can also be bounded, with the upper bound defined as a function of the spectral properties of the individual layers. On the other hand, the lower bound can be further improved by realizing that the matrix defined in section 3.2 is positive semi-definite for undirected networks. From these properties, we can establish the lower bound as $4p^2$, i.e., $\Delta \geq 4p^2$.
The previously obtained bounds imply that, in the asymptotic analysis, we have $\sqrt{\Delta} \approx 2p$. Moreover, note that the lower and upper bounds converge to each other as the layers become similar; in the extreme case of identical layers we have $\Delta = 4p^2$. Finally, combining the previously obtained bounds we have,
and
Particularly, we can analyze these bounds in terms of their asymptotic behavior. Thus, for a sufficiently large value of p, the two solutions behave as $\lambda_+ \approx 2p + \mathcal{O}(1)$ and $\lambda_- \approx \mathcal{O}(1)$. Therefore, from the asymptotic point of view, half of the spectrum grows as $2p$, while the other half remains bounded.
Figure 1 shows an example of the eigenvalues as a function of the coupling parameter p for a multiplex network composed of two Erdős–Rényi layers with different average degrees and the same number of nodes.
Figure 1. The supra-Laplacian eigenvalues as a function of the coupling parameter p for a multiplex network composed of two Erdős–Rényi layers with $n = 10^3$ nodes each and different average degrees. The upper and lower bounds are shown as dashed and continuous lines, respectively.
4.1.3. Spectral properties as a function of the coupling parameter p
In this section, we focus on the behavior of the eigenvalues as a function of the coupling parameter. First of all, we apply the concepts of section 3.1 regarding the derivative of Q(λ). Consider the simplest case, the homogeneous coupling $pI$. In this case, we have a monic matrix polynomial, where $A_1$ depends on the aggregated network, which is positive semi-definite. Moreover, $A_0$ is a matrix that contains the product of the Laplacians of both layers, accounting for the similarities between them. In this way, equation (16) can be expressed as

$$\frac{d\lambda}{dp} = \frac{2\lambda \langle y, x \rangle - y^T \left( L^{(1)} + L^{(2)} \right) x}{2\lambda \langle y, x \rangle - y^T \left( L^{(1)} + L^{(2)} \right) x - 2p \langle y, x \rangle},$$

where $\langle y, x \rangle = y^T x$ is the cosine of the angle between the left and right eigenvectors of our QEP. Observe that, as p increases, part of the spectrum has $\frac{d\lambda}{dp} \to 0$, while the other part has $\frac{d\lambda}{dp} \to 2$, which can be proved as follows. First of all, suppose that λ is a constant function of p; then $\frac{d\lambda}{dp} \to 0$, since the denominator grows as a function of p and the numerator is bounded, by supposition. Second, suppose that λ grows as $p^r$ with r < 1; then, by the same arguments, $\frac{d\lambda}{dp} \to 0$, since the linear term of the denominator dominates. On the other hand, if r = 1 we have $\frac{d\lambda}{dp} \to 2$, since both the numerator and the denominator grow linearly. Finally, for $p^r$ with r > 1, both the numerator and the denominator are dominated by the terms in λ, implying that, for large p, the derivative tends to a constant, which is a contradiction, since the derivative of a super-linearly growing function must diverge. Thus, for large values of p, λ cannot grow faster than linearly, and the derivatives approach one of two values, 0 or 2. Interestingly, note that these results are in alignment with the previously obtained bounds. In addition, such behavior is exemplified in figure 1.
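This dichotomy is easy to verify numerically. The following sketch (an illustration with small random layers, not the paper's experiment) estimates the eigenvalue slopes at a large coupling value and checks that they split into the two predicted groups.

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(6)
n = 10
A1 = (rng.random((n, n)) < 0.3).astype(float)
A1 = np.triu(A1, 1); A1 = A1 + A1.T
A2 = (rng.random((n, n)) < 0.3).astype(float)
A2 = np.triu(A2, 1); A2 = A2 + A2.T
L1, L2 = laplacian(A1), laplacian(A2)

def eigs(p):
    I = np.eye(n)
    S = np.block([[L1 + p * I, -p * I], [-p * I, L2 + p * I]])
    return np.sort(np.linalg.eigvalsh(S))

# Finite-difference slopes of the sorted spectrum at a large coupling.
p, h = 200.0, 1.0
slopes = (eigs(p + h) - eigs(p)) / h
# Half of the eigenvalues are essentially flat, half grow as 2p.
assert np.sum(np.isclose(slopes, 0.0, atol=0.05)) == n
assert np.sum(np.isclose(slopes, 2.0, atol=0.05)) == n
```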
Although for the simplified case we have two possible slopes at large p, observe that the above arguments fail for a general coupling matrix. From equation (16) and the definition of the Laplacian QEP, we conclude that only the denominator of equation (16) changes for a different choice of coupling, since the terms that depend on the coupling vanish in the partial derivatives of the numerator. In this way, different coupling weights can change the behavior of each eigenvalue differently for large p. For instance, if the coupling is homogeneous, the spectral distribution for large p is bimodal; however, if the coupling weights are heterogeneous, this behavior changes completely and the eigenvalues grow at different rates, presenting a 'continuous' bulk. This argument is valid for infinite-size networks, since for finite-size networks at large p gaps between eigenvalues may appear due to the different growth rates (as functions of p). An example of this is shown in figure 2. Furthermore, we also found an empirical function that seems to bound the spectra as a function of p in this experiment. The lower bound is trivial, since the supra-Laplacian is a positive semi-definite matrix. The upper bound can be obtained by correcting the homogeneous upper bound with the largest coupling weight, hence
From figure 2 we observe that such a bound is not as close to the largest eigenvalue as in the homogeneous case.
Figure 2. The supra-Laplacian eigenvalues as a function of the coupling parameter p, considering a heterogeneous coupling matrix. The dashed line is the adapted upper bound. The network considered in this figure has the same layers as used in figure 1.
Figure 3. The supra-adjacency eigenvalues as a function of the coupling parameter p. The upper and lower bounds are dashed and continuous lines, respectively. The network considered is the same as in figure 1.
In addition to a non-homogeneous coupling matrix, the last case studied is the sparse coupling. The analytical part of this study was presented in section 3.4. As predicted, each uncoupled node implies a pair of eigenvalues that does not vary as a function of p. Due to the nature of the Laplacian matrix, where just half of the eigenvalues vary with p while the other half remain bounded, each uncoupled node enlarges the set of bounded eigenvalues by one: with a given number of uncoupled nodes, the bounded part gains that many eigenvalues, while the 'unbounded' part loses the same number. Note that the upper bound for the bounded set of eigenvalues no longer holds exactly. Nevertheless, the general upper bound seems to also be an upper bound for the sparse problem, as numerically verified. The figures for these experiments are not shown, as they are visually similar to figure 1.
4.2. Supra-adjacency matrix
Similarly to the supra-Laplacian case, here we also begin with the simplest case, i.e., the diagonal homogeneous coupling, and increase the level of complexity considering heterogeneous inter-layer weights and sparsity. Thus, in the simplest case the coupling is $pI$ and, therefore, the QEP, equation (5), is defined by the following coefficient matrices

$$A_2 = I, \qquad A_1 = -\left( A^{(1)} + A^{(2)} \right), \qquad A_0 = A^{(1)} A^{(2)} - p^2 I.$$
Note that, in a way similar to the Laplacian, $A_1$ is also defined in terms of the aggregated network. On the other hand, the physical interpretation of $A_0$ is still difficult due to the product $A^{(1)} A^{(2)}$.
In section 4.2.1 we improve the bounds proposed in section 3.2. Then, in section 4.2.2, we evaluate the spectral properties of the supra-adjacency matrix as a function of p in three different contexts: (i) diagonal homogeneous coupling, (ii) diagonal heterogeneous coupling and (iii) sparse diagonal homogeneous coupling. Note that, in order to analyze the heterogeneous coupling, we must consider the general QEP with the coefficient matrices of equation (9).
4.2.1. Bounds
Similarly to the analysis performed for the supra-Laplacian, here we also extend the ideas presented in section 3.2 to the supra-adjacency matrix. First of all, regarding the diagonal homogeneous coupling case, we can also characterize the spectral distribution of the supra-adjacency matrix. Beginning with the matrix $A_1$, we can bound the associated term using its eigenvalues.
Interestingly, those are the eigenvalues of the aggregated network, which have a clear physical meaning. Similarly, for the discriminant we have
Finally, combining those bounds we can bound both solutions by
and
which asymptotically converge (as an approximation) to
In other words, we have a bimodal spectral distribution for the supra-adjacency matrix, where half of the eigenvalues grow linearly with p, while the other half decrease at the same rate.
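The bimodal splitting can be checked directly. The sketch below (small random layers, our own illustration rather than the paper's experiment) estimates the slopes of the supra-adjacency eigenvalues at large p and verifies that half approach +1 and half approach -1.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10
A1 = (rng.random((n, n)) < 0.3).astype(float)
A1 = np.triu(A1, 1); A1 = A1 + A1.T
A2 = (rng.random((n, n)) < 0.3).astype(float)
A2 = np.triu(A2, 1); A2 = A2 + A2.T

def eigs(p):
    I = np.eye(n)
    S = np.block([[A1, p * I], [p * I, A2]])
    return np.sort(np.linalg.eigvalsh(S))

# Finite-difference slopes of the sorted spectrum at a large coupling.
p, h = 200.0, 1.0
slopes = (eigs(p + h) - eigs(p)) / h
# Half of the eigenvalues move as +p, the other half as -p.
assert np.sum(np.isclose(slopes, 1.0, atol=0.05)) == n
assert np.sum(np.isclose(slopes, -1.0, atol=0.05)) == n
```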
Figure 3 shows an example of the eigenvalues of the supra-adjacency matrix and the calculated bounds as a function of the coupling parameter p for a multiplex network composed of two Erdős–Rényi layers. The network used in this example is the same as that used in the supra-Laplacian case.
4.2.2. Spectral properties as a function of the coupling parameter p
In its general form, the first derivative is given as
where x and yT are, respectively, the right and left eigenvectors associated with the eigenvalue λ. First, regarding , we use an approach similar to the one previously applied to the supra-Laplacian case. Suppose that λ is a constant function of p or a function of p^r with r < 1; this would give us , yielding a contradiction. Then, suppose that it is a linear function of p, which implies , depending on the sign of the linear coefficient. Finally, supposing that it is a function of p^r with r > 1, we obtain , since the denominator grows faster than the numerator, which again is a contradiction. Based on this analysis, we conclude that the first derivative of λ can only assume .
Secondly, for the general case, observe that both the numerator and the denominator of equation (46) vary as functions of . Additionally, weights the product of the components of the eigenvectors, which allows the derivatives to assume more values, even a 'continuous bulk' instead of the bimodal distribution of the diagonal homogeneous case, similarly to the case discussed for the supra-Laplacian matrix. Here we also use the coupling matrix . We show the spectral evolution as a function of p for the non-homogeneous case in figure 4. Similarly to the Laplacian case, there is evidence that the bounds can be corrected using , hence
Here we can draw a conclusion similar to that of the supra-Laplacian case. From figure 4 we observe that the corrected bounds are not as tight as in the homogeneous case.
Figure 4. The supra-adjacency eigenvalues, , varying the coupling parameter p, considering the coupling matrix , which is a heterogeneous coupling. The dashed and continuous red lines are the adapted upper and lower bounds, respectively. The network considered in this figure has the same layers as used in figure 1.
Finally, we study the sparse coupling case, whose analytical evaluation was presented in section 3.4. As predicted, each uncoupled node implies a pair of eigenvalues that do not vary as a function of p. In this way, for uncoupled nodes we have eigenvalues in the central part of the spectrum that do not vary as a function of p. Next, grows linearly with p, while the other eigenvalues decrease as −p. This is shown in figure 5. Note that in this figure the horizontal lines bounding the central part of the spectrum are not analytically calculated but numerically obtained, and are shown only as a reference.
Figure 5. The supra-adjacency eigenvalues, , varying the coupling parameter p, considering a sparse coupling matrix. The dashed and continuous red lines are the adapted upper and lower bounds, respectively. The dotted line was obtained numerically considering the largest value of p and is shown just as a reference. The network considered in this figure has the same layers as used in figure 1.
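As a minimal numerical sketch of this sparse-coupling behavior (our own illustration; the coupling pattern and network parameters are arbitrary), the following code couples only half of the node pairs and verifies that the extreme eigenvalues shift by roughly the change in p while the central bulk barely moves:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50  # nodes per layer (arbitrary)

def er_adjacency(n, prob, rng):
    """Symmetric Erdos-Renyi adjacency matrix without self-loops."""
    upper = np.triu(rng.random((n, n)) < prob, k=1).astype(float)
    return upper + upper.T

A1 = er_adjacency(n, 0.1, rng)
A2 = er_adjacency(n, 0.1, rng)

# Sparse diagonal coupling: only the first half of the node pairs are coupled
c = np.zeros(n)
c[: n // 2] = 1.0
C = np.diag(c)

def supra_adjacency(p):
    return np.block([[A1, p * C], [p * C, A2]])

lam_a = np.sort(np.linalg.eigvalsh(supra_adjacency(50.0)))
lam_b = np.sort(np.linalg.eigvalsh(supra_adjacency(100.0)))

# Coupled pairs shift by roughly +-(100 - 50); the central bulk stays put
print(lam_b[-1] - lam_a[-1])      # close to 50
print(abs(lam_b[n] - lam_a[n]))   # close to 0
```

The three-bulk structure (a central p-independent bulk flanked by two bulks moving as ±p) appears exactly as described for figure 5.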
4.3. Probability transition matrix
In this section, we evaluate the probability transition matrix, mainly focusing on its spectral properties as a function of the coupling parameter p. Due to the probabilistic nature of this matrix, we were not able to improve its bounds. Therefore, we mainly report numerical results.
Formally, the probability transition matrix is defined as
where , and X = {A, B} represents the label of each layer. It is known that this matrix models the classical random walk, where the walker chooses a neighbor based on the weights of its surrounding edges.
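A minimal sketch of this construction (our own illustration; the network parameters are arbitrary) builds the supra-adjacency matrix of a two-layer multiplex, normalizes each row by the node strength to obtain the transition matrix, and checks the two defining properties used below: rows sum to one, and the spectral radius equals one.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50  # nodes per layer (arbitrary)

def er_adjacency(n, prob, rng):
    """Symmetric Erdos-Renyi adjacency matrix without self-loops."""
    upper = np.triu(rng.random((n, n)) < prob, k=1).astype(float)
    return upper + upper.T

A1 = er_adjacency(n, 0.15, rng)
A2 = er_adjacency(n, 0.15, rng)

def transition_matrix(p):
    """Row-stochastic P = D^{-1} A_sup for diagonal homogeneous coupling p."""
    A = np.block([[A1, p * np.eye(n)],
                  [p * np.eye(n), A2]])
    strength = A.sum(axis=1, keepdims=True)  # positive, since p > 0
    return A / strength

P = transition_matrix(1.0)
print(np.allclose(P.sum(axis=1), 1.0))       # rows sum to one
print(np.max(np.abs(np.linalg.eigvals(P))))  # spectral radius is 1
```

Note that the inter-layer edge guarantees every row sum is positive for p > 0, so the normalization is always well defined.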
It is important to mention that in [23] the authors studied random walks on top of multiplex networks and analyzed them in terms of the normalized supra-Laplacian matrix. This matrix is defined as
where is the supra-Laplacian and is the probability transition matrix. Note that the normalized supra-Laplacian matrix is intimately related to the probability transition matrix. In fact, their spectra are trivially related. Furthermore, we can also relate the spectra of the normalized Laplacian as follows
where has the same set of eigenvalues as and if v is an eigenvector of , then is an eigenvector of associated with the same eigenvalue [20]. Note, however, that is symmetric [20]. In the context of random walks in multiplex networks, in [13] the author used the normalized supra-Laplacian matrix. Here, in this section, we will study , defined in equation (49).
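The similarity relation above can be verified numerically. In this sketch (our own illustration, on a single-layer graph for simplicity), we check that the non-symmetric transition matrix D^{-1}A and the symmetric normalization D^{-1/2}A D^{-1/2} share the same spectrum:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40  # arbitrary size

# ER graph plus a ring, so every node has degree >= 2 (no empty rows)
upper = np.triu(rng.random((n, n)) < 0.2, k=1).astype(float)
ring = np.eye(n, k=1) + np.eye(n, k=-1)
ring[0, -1] = ring[-1, 0] = 1.0
A = np.clip(upper + upper.T + ring, 0.0, 1.0)

d = A.sum(axis=1)
P = A / d[:, None]                # D^{-1} A, non-symmetric
S = A / np.sqrt(np.outer(d, d))   # D^{-1/2} A D^{-1/2}, symmetric

lam_P = np.sort(np.linalg.eigvals(P).real)
lam_S = np.sort(np.linalg.eigvalsh(S))
print(np.allclose(lam_P, lam_S))  # True: similar matrices, same spectrum
```

Since P is similar to the symmetric matrix S, its eigenvalues are real even though P itself is not symmetric, which is the property exploited in the text.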
Next, following our formalism, from equation (49), we can define our QEP in its monic form as
Note that this quadratic polynomial presents some similarities with the one for the supra-adjacency matrix; however, the probability transition matrix is not symmetric and the coefficient matrices depend on p. This fact, together with the natural bound for stochastic matrices, makes the derivation of spectral bounds more complicated than in the previous cases. Here we focus on the spectral properties of the probability transition matrix as a function of the coupling parameter p.
4.3.1. Spectral properties as a function of the coupling parameter p
For the sake of completeness, let us study the spectral properties of the transition matrix as a function of the coupling strength p. This exercise is more of an example than a practical application, since we already know that the spectra of stochastic matrices are bounded, which does not allow unbounded growth. In figure 6 we present the spectra as a function of the coupling parameter p. The first observation is that, aside from being bounded, the growth rate of the eigenvalues is quite different from what was observed in the Laplacian and adjacency cases.
Figure 6. The probability transition matrix eigenvalues, , varying the coupling parameter p. The network considered is the same as that considered in figure 1.
First, let us proceed with the analysis of equation (16), aiming for an approximation that qualitatively describes . The partial derivative of can be expressed as
where the term
Next, expanding the partial derivative of we have
All the expressions obtained so far are quite complicated to analyze in their exact form. Thus, we proceed with an asymptotic analysis, aiming at a hypothesis for a formula that qualitatively describes the behavior of as a function of p. In other words, we propose a formula that fits the expected asymptotic behavior, but we also expect it to work for smaller values of p. We must remark that this analysis is an approximation and, in order to verify its validity, we perform numerical fittings and evaluate the obtained errors.
From the previously mentioned perspectives, the asymptotic behaviors are and, obviously, (see footnote 8). First of all, in figure 7 we present some examples of functions with different asymptotic behaviors. In (a) we present a function in O(1), showing that it can be bounded by a constant; in (b) we show some functions in O(p^−1); while in (c) we show two functions, one in O(p^−2) and the other in O(p^−3). Note that we can approximate some of the terms in equations (55)–(57) by the functions in figure 7. In figure 7 we also show examples of our guessed asymptotic behavior for
Next, in figure 7(c) we show examples for
Note that we considered a single value of kx, without considering products between different constants. Besides, just one term is considered. We remark that the main goal of this exercise is to gain insight into the qualitative behavior of more complicated functions, such as equation (57). In other words, the performed approximations are not expected to quantitatively predict those terms, but to qualitatively represent and 'capture' the main behavior of those functions.
Figure 7. Examples of functions with different asymptotic behaviors. Functions in O(1) in (a), O(p^−1) in (b), and O(p^−2) and O(p^−3) in (c). In all plots we consider ka = 1 and kb = 100. Note that the asymptotic class does not change when we multiply the function by a constant.
First, recall that the spectrum of a stochastic matrix is bounded, which consequently restricts its derivatives. In other words, λi ∈ O(1). However, for the sake of the argument, let us suppose that , hence , where r is an integer. Thus, comparing with equation (16), we have
that can be rewritten as
which simplifies to
which implies that and r ≤ 0, since on the left-hand side we have a function in , while on the right-hand side we have a function in . This simple analysis suggests that , for consistency. Note that we are not inferring anything regarding its 'velocity' (how fast it goes to zero). Such arguments reinforce that λi ∈ O(1), as previously mentioned. However, there is a huge class of functions that satisfy this restriction. In order to satisfy the restrictions established so far, let us suppose that
which is a function that satisfies our previous analysis. Thus, it also implies that
which yields . Next, from equation (16) we have
Note that it allows a set of possible solutions and, among them, our initial supposition, equation (61), i.e., . Observe that and , which is also in O(p^−1), as expected from equation (64). Besides, for the sake of visualization, in figure 7(a) we show two examples of the leading term of equation (61).
Next, we proceed with a numerical experiment: we extract some of the eigenvalues presented in figure 6 and perform a fitting aiming to reproduce the same curves. We chose five eigenvalues: (i) the leading eigenvalue, λ1 = 1, just as a reference and to emphasize that our proposed equation also works in that case; (ii) λ3, the first eigenvalue in the bulk (note that there can be a crossing between λ3 and λ2, which would change the index of the eigenvalue; here we do not enter into the details of this possible crossing and, in order to avoid it, we chose to follow the third eigenvalue); (iii) λN, the smallest eigenvalue; and (iv) the two intermediate eigenvalues and , where we considered their values only after the spectrum divides into two parts. Note that the error of these two curves is expected to be larger than in the previous cases. As previously mentioned, there can be a crossing between λ3 and λ2, but here we are looking at the main global behavior, and such a change would not be a large source of error. In this way, we show that there is a set of parameters that approximates the spectra using the proposed equations. This experiment does not serve as a proof, but it does serve as evidence of such, or similar, behavior.
Following the proposed pipeline, in order to obtain the fittings we used the nonlinear least squares method with the Levenberg–Marquardt algorithm [24–26] and least absolute residual robust regression. Additionally, all the initial conditions were set to one. In figure 8 we show the obtained fittings and the numerically obtained eigenvalues. Complementarily, in table 1 we present the fitted parameters. Interestingly, we observe that the proposed approximation fits the observed curves very well, which can be objectively measured by means of the sum of squares due to error, whose values are also reported in table 1. Thus, the behavior of λi assumed in equation (61) seems to be a very good guess. Besides, we also observe that there seems to be a symmetry in the parameters obtained for λ3 and λN, which are close. The only exception is k0, since both have the same modulus but different signs, as expected. The last important observation regards the parameter . Note that such a parameter is very close to one in all the fittings, suggesting some underlying property of our formulation.
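The fitting procedure can be reproduced in spirit with standard tools. The sketch below is our own illustration: it fits synthetic data with a hypothetical rational form λ(p) = k0 + c0/(p + c1), loosely inspired by the scale of the parameters in table 1; it is not the paper's equation (61), whose exact terms are not reproduced here. SciPy's method='lm' invokes the Levenberg–Marquardt algorithm and, as in the text, all initial conditions are set to one.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model (an assumption for illustration, not equation (61)):
# a constant plus an O(p^-1) correction, consistent with the asymptotic analysis
def model(p, k0, c0, c1):
    return k0 + c0 / (p + c1)

rng = np.random.default_rng(0)
p = np.logspace(-1, 3, 200)
true_params = (0.999, 5.5, 8.6)  # loosely inspired by the scale of table 1
lam = model(p, *true_params) + 1e-4 * rng.standard_normal(p.size)

# method='lm' is the Levenberg-Marquardt algorithm; initial conditions set to one
popt, _ = curve_fit(model, p, lam, p0=(1.0, 1.0, 1.0), method='lm')
sse = float(np.sum((model(p, *popt) - lam) ** 2))
print(popt, sse)
```

The recovered parameters match the generating ones, and the sum of squares due to error (SSE) plays the same role as the goodness-of-fit measure reported in table 1.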
Figure 8. The probability transition matrix eigenvalues, λi(p), varying the coupling parameter p, for i = N and i = 3 (or i = 2 after the crossing, in order to have a continuous curve). The network considered is the same as that considered in figure 1. The dots are the obtained eigenvalues from eigendecomposition of , while the continuous blue lines are fitted curves from equation (61), where we used just the first term on the summation, i.e., K = 1.
Table 1. Parameter values of the network reported in figure 8. The confidence intervals are given in parentheses and the goodness of fit is measured by the sum of squares due to error (SSE).
Eigenvalue | k0 | c0 | ck | | SSE
---|---|---|---|---|---
λ1 = 1 | 1 (1, 1) | <10^−5 | <10^−5 | <10^−5 | <10^−6
λ3 | 0.9999 (0.9999, 1) | 5.514 (5.502, 5.526) | 8.618 (8.579, 8.657) | 4.732 (4.712, 4.753) | <10^−4
λn−1 | 0.9985 (0.9975, 0.9996) | 11.27 (11.18, 11.36) | −2.141 (−2.629, −1.653) | −0.1541 (−0.1839, −0.1242) | <10^−2
λn+1 | −0.9997 (−0.9999, −0.9995) | 0.1273 (0.1186, 0.1361) | 20.73 (20.61, 20.85) | 23.58 (23.53, 23.63) | <10^−4
λN | −1 (−1, −1) | 5.344 (5.334, 5.353) | 8.444 (8.412, 8.477) | −4.615 (−4.632, −4.598) | <10^−4
Finally, for the sake of completeness and for comparison, we numerically evaluate the spectra of the probability transition matrix for the sparse and heterogeneous coupling cases. Regarding sparsity, in figure 9 we present an experiment similar to those performed for the supra-Laplacian and supra-adjacency cases. As in those experiments, here we also observe a group of eigenvalues that do not change as a function of p. Moreover, we also verified that for decoupled nodes we have eigenvalues that remain constant, validating the insights obtained in section 3.4. Although the behavior observed is similar to that of the previously studied matrices, for the probability transition matrix we observe a slightly different behavior for intermediate values of p (here 1 < p < 10), where the intermediate eigenvalues change, forming the 'central bulk'.
Figure 9. The probability transition matrix eigenvalues, , varying the coupling parameter p. The coupling matrix is sparse. The network considered in this figure has the same layers as used in figure 1.
Furthermore, we remark that for the heterogeneous coupling (), a completely different behavior emerges compared with the supra-Laplacian and supra-adjacency cases. In the probability transition matrix case, the spectra always seem to be bimodal. This effect is shown in figure 10, where, for a large enough value of p, the eigenvalues tend to a constant. It is also noteworthy that the rate at which this phenomenon takes place is much slower than in the homogeneous case, shown in figure 6.
Figure 10. The probability transition matrix eigenvalues, , varying the coupling parameter p, considering the coupling matrix , i.e. a heterogeneous coupling. The network considered in this figure has the same layers as used in figure 1.
5. Discussion and conclusions
From the developed theory, we applied and analyzed three different matrices: (i) the supra-Laplacian, (ii) the supra-adjacency and (iii) the probability transition matrix. In all these cases we considered three different coupling schemes: (a) diagonal homogeneous coupling, , (b) diagonal homogeneous sparse coupling and (c) diagonal heterogeneous coupling, . Regarding the supra-Laplacian and supra-adjacency matrices, in the first scenario, (a), we were able to extract some analytical results regarding the derivatives of the eigenvalues, which suggested a different behavior for the other two cases, (b) and (c). On the other hand, regarding the probability transition matrix, due to its stochastic nature, we were not able to go further with the analytical analysis. However, we followed an asymptotic analysis, proposing a function that describes the eigenvalue behavior. This function was validated with numerical fittings of the original spectra. Although it is just an approximation, it also helps us understand the nature of the phenomena behind this structure. Furthermore, we also reported the differences between the spectral distributions for large p, where we can have bimodal, multi-modal or even a continuous bulk for the adjacency and Laplacian cases, just by changing the coupling matrices. In the sparse case, this analysis was analytically supported, while the other cases were explored numerically.
Our analysis pointed out some important features of multiplex systems. As a general observation, as we increase p we find (roughly) three different structural phases, which might take place at different points for each structure and matrix. Thus, the structural phases of a multiplex network can be defined as: (i) the decoupled phase, for small values of p, where the layers are virtually decoupled and act by themselves, with negligible interaction; (ii) the multiplex/multilayer phase, where the system is coupled and the intra-layer edges play an important role; and (iii) the network-of-layers phase, where the structure of the network of layers plays the major role. Note that, from the perturbation theory point of view, in the decoupled phase the eigenvalues are basically the union of the eigenvalues of the individual layers plus some perturbation. At the other extreme, in the network-of-layers phase, the intra-layer edges might be understood as the perturbation, since p ≫ 1. In this case, we can interpret the system as a set of n virtually disconnected small networks, whose structure is given by the network of layers (considering a multiplex case, where each node has a counterpart in the other layers). Finally, the most interesting scenario is the multiplex phase, where the inter- and intra-layer topologies play a fundamental role in the dynamics.
Throughout our analysis, we were able to verify these regimes in the different matrices we evaluated. Although all of them showed this behavior, the differences between those matrices are also evident. Considering the supra-Laplacian and supra-adjacency matrices with diagonal homogeneous coupling, we observe that, for large enough p, the spectral distribution is bimodal, while for the sparse case we have three bulks, where the central one results from the nodes that do not have an inter-layer edge. Finally, in the diagonal heterogeneous coupling case, we observe a completely different behavior, where the eigenvalues are distributed into a single bulk. We remark that, although we were not able to analytically quantify this last phenomenon, our analysis suggested such behavior.
Furthermore, comparing those results with the ones obtained using the probability transition matrix, we observed a completely different behavior. In the diagonal homogeneous and heterogeneous cases, the spectra seem to be bimodal for sufficiently large p. Note that in the homogeneous case this convergence to the bulks is much faster than in the heterogeneous case. This is an interesting phenomenon, since it contrasts with the supra-adjacency and supra-Laplacian cases, where the heterogeneous coupling implies a 'continuous' bulk. It is noteworthy that our prediction of a central bulk for the uncoupled nodes is also fulfilled for the probability transition matrix.
The analysis performed here emphasizes the importance of a proper study of the structural phases in different contexts. The sparse and heterogeneous cases might completely change the spectra (depending on p). Obviously, the analysis should also take into account the correct matrix, since the structural changes differ from case to case. In other words, different matrices present their phases in different intervals (values of p). In this context, dynamical insights can also be useful for a better understanding.
Since all the analyzed matrices are also related to dynamical processes, the results reported here will directly impact those processes as well. Note that in [22] the authors analyzed diffusion processes in multiplex networks and found so-called superdiffusion. This process is described by the supra-Laplacian matrix and is intrinsically connected to the so-called structural transition of this matrix, as pointed out in [22] and later discussed in [18], where the authors found the exact structural transition point. Furthermore, in [7], while studying epidemic spreading in multiplex networks, the authors verified this structural behavior in the analysis of the supra-adjacency matrix. Besides, it was also shown that it is intimately related to spreading processes and the layer-localization phenomena [7]. Thus, in the mentioned cases, the dynamical regimes can be understood as a consequence of the structural changes.
In summary, we have proposed a new mathematical formalism for the analysis of spectral properties in multiplex networks using the polynomial eigenvalue problem. This approach reduces the dimensionality of our matrices (coefficient matrices) at the cost of a higher order of the characteristic polynomial. This technique might seem counterintuitive at first glance, but it reveals an underlying relationship between the eigenvalues of the matrices associated with multiplex structures. In contrast to single-layer networks, multiplex networks are defined by matrix functions, since we are interested in weighting inter- and intra-layer edges differently. Thus, they depend on the coupling parameter, here denoted by p. Therefore, it is of utmost importance to derive spectral bounds as a function of p and to obtain insights into their asymptotic behavior.
In addition to the results discussed so far, we must also comment on the implications of our results for single-layer networks and other contexts. We remark that, aside from multiplex networks, other types of regularities also allow the polynomial to be reduced and analytically evaluated; for instance, more general multilayer networks or some specific community-structured single-layer networks. In fact, this applies to any problem where the structure can be divided into groups with some regularity, allowing the matrices and to be calculated. Note that one might even argue that the groups do not need to be of the same size, since one can add phantom nodes in order to mathematically allow the groups to have the same size. Thus, from the results of section 3.4, the analysis would follow naturally. We remark that in this scenario the analysis should be carried out with extra care regarding the physical meaning of the eigenvalues newly added due to the phantom nodes. Furthermore, in the community-structured single-layer case, note that some of our results are conditioned on the scenario where intra- and inter-community edges are weighted differently, playing the role of our coupling parameter p. We might also suppose that our formalism can be applied in dynamical contexts where we have two different dynamical sets of nodes; for instance, the chimera states in arrays of identical oscillators, where we have two groups of nodes: coherent and incoherent [27]. Finally, the numerically obtained results might also be of interest, offering insights in many different fields, ranging from the structural analysis of community-structured single-layer networks to the study of dynamical processes.
We hope this work motivates the community to study in further detail the structural behavior of multiplex and multilayer systems and their dynamical consequences. Besides, other matrices and processes might also be studied and evaluated; in this case, we believe that our formalism might also be helpful. Finally, we also hope to motivate studies on the analysis of eigenvectors, which is still lacking in the literature and is of great importance, as eigenvectors have been shown to play a major role in dynamical processes [7, 8].
Acknowledgments
FAR acknowledges CNPq (grant 307974/2013-8) and Fapesp (grant 2013/26416-9) for financial support. GFA acknowledges Fapesp for the sponsorship provided (grant 2012/25219-2). YM acknowledges partial support from the Government of Aragón, Spain, through grant E36-17R, and by MINECO and FEDER funds (grant FIS2017-87519-P).
Appendix. Jordan triple
The definitions presented in this section are not studied in detail here, since our goal is focused on applications; however, we refer the reader to [14, 15] for more information on this class of problems. Here we reproduce some important definitions that might be useful for some readers, allowing them to extend the results presented in this work to other contexts.
Definition A.1. Jordan triple [15]: Denoting by the Jordan matrix, is the Jordan triple of , where and are the right and left eigenvectors. We also have that is the Jordan triple of . Note that in the Jordan matrix, , the diagonal blocks are the Jordan blocks. Besides, if all the eigenvalues are simple, is a diagonal matrix, where . Additionally, observe that , where is composed of the right eigenvectors, while , where is composed of its left eigenvectors. Finally, .
Definition A.2. The left set of eigenvectors [15]: The left eigenvectors can be defined in terms of the right eigenvectors and the matrix as
Definition A.3. Useful relations for the order two polynomial eigenvalue problem:
Footnotes
- 5
Note that it is necessary to restrict our coupling parameter to p > 0 since and the problem would not be well defined otherwise. Negative coupling parameters would also be possible, but they make less physical sense.
- 6
In addition, let us recall that if , and and are positive semi-definite matrices, with , then , where the eigenvalues are in descending order.
- 7
has equally spaced numbers and unit average, due to the term , allowing the comparison with any other figure in this paper.
- 8
Observe that, formally, if f(x) = O(x^r), then , where K is a constant and x ≥ x0. Thus, multiplying f(x) by a constant does not change its class. Besides, note that .