Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences

Fractal dimensions for iterated graph systems

Ziyu Neroli

Department of Mathematics, Huxley Building, Imperial College London, London SW7 2AZ, UK

z5222549@zmail.unsw.edu.au; ziyu.li21@imperial.ac.uk

Contribution: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review and editing

    Abstract

    Building upon Li & Britz (Li NZ, Britz T. 2024 On the scale-freeness of random colored substitution networks. Proc. Amer. Math. Soc. 152, 1377–1389 (doi: 10.1090/proc/16604)), this study aims to introduce fractal geometry into graph theory and to establish a potential theoretical foundation for complex networks. Specifically, we employ the method of substitution to create and explore fractal-like graphs, introducing what we term iterated graph systems (IGS). While the concept of substitution is commonplace in fractal geometry and dynamical systems, its analysis in the context of graph theory remains a nascent field. By delving into the properties of these systems, including distance and diameter, we derive two primary outcomes. Firstly, for deterministic IGS, we establish that the Minkowski dimension and Hausdorff dimension coincide, via explicit formulae. Secondly, for random IGS, we demonstrate that almost every Gromov–Hausdorff scaling limit has identical Minkowski and Hausdorff dimensions, given analytically by Lyapunov exponents. The exploration of IGS holds the potential to unveil novel research directions. These findings not only contribute mathematically to our understanding of the interplay between fractals and graphs, but also suggest promising avenues for applications to complex networks.

    1. Introduction and main results

    (a) Introduction

    Graph theory originated in the eighteenth century and developed rapidly throughout the twentieth century, while fractal geometry was proposed and explored during the twentieth century. Both fields have undergone rigorous mathematical development to this day. The integration of the two fields is generally associated with the study of complex networks [1]: this is primarily due to the work of Watts on small-world networks [2], Barabási’s discovery of scale-free networks [3] and Song’s exploration of fractal networks [4]. These remarkable works demonstrate certain distinctive patterns of real-world complex networks.

    However, the underlying mathematical principles of complex networks remain unclear. Few attempts have been made to develop the theory: most relevant studies are based on real-world data and finite networks, rather than on mathematical objects. This gap is to be filled, and it is of great interest to explore the behaviour of infinite graph fractals.

    To establish fractals for graphs, we utilize the ideas of substitution and iterated function systems (IFS), two concepts that are commonplace in dynamical systems and fractal geometry. It is also natural to relate substitutions to dynamics, because a natural source of substitutions is tiling and substitutive dynamical systems [5–8]. Since Hutchinson [9] rigorously set up IFS in a general framework, a large number of notable works have contributed to the study of self-similar sets, for instance graph-directed systems [10,11], IFS with overlaps [12–14] and random IFS [15,16]; see also [17–21].

    Building on these foundations, we rigorously define the iterated graph system (IGS) by iteratively substituting the (coloured) arcs in a graph (randomly) with certain fixed graphs. The concept of the IGS is not entirely new, however; it evolves from the predecessor known as substitution networks, which were first introduced by Xi et al. [22], where fractality and scale-freeness are obtained. Li et al. [23,24] brought coloured graphs into this model, and the average distance was studied by Ye et al. [25,26], who exhibited its existence under certain circumstances. IGS can also be viewed both as the graph version of graph-directed fractals [10,27] and as a generalization of them, because the substitution process inherent to IGS includes a graph-directed methodology. We will discuss this issue in another work.

    Therefore, from a mathematical perspective, studying IGS is intriguing: we anticipate that fractals on graphs will exhibit unique behaviours not seen in Euclidean spaces. This is indeed the case, as evidenced by the specially defined degree dimension [1] for graph fractals. Physically, IGS provide a solid mathematical foundation for the study of complex networks, especially those with scale-freeness and fractality. Potential applications include considerations of fractal dimensions, degree distributions, average distance, random walks, diffusion processes and percolation, among others; see also a recent interesting work by Eriksson-Bique et al. [28]. For instance, a quick corollary of this paper is that for any network characterized by a specific fractal dimension, it is possible to design a corresponding iterated graph system that mirrors this dimension.

    (b) Main results

    The deterministic (or random) IGS roughly refers to a system that generates infinite graphs (randomly) iteratively by independent coloured arc substitutions. See figure 1 for an example. Rigorous definitions will be given in §2b and §4a.

    Figure 1. An example of a deterministic iterated graph system.

    This paper mainly contains two conclusions, the first of which solves a problem left open in Li et al. [24]:

    Theorem 1.1. Given any irreducible deterministic IGS and its Gromov–Hausdorff scaling limit , we have:

    Here, represents the spectral radius; relevant definitions of and will be given in §2a,b, and the proof will be presented in §3.

    Another result is to construct random IGS and prove that the associated graph limits satisfy the following property.

    Theorem 1.2. Given any irreducible random IGS, for -almost all or equivalently:

    Here, is the Gromov–Hausdorff scaling limit of a stochastic process of graphs , and related definitions of , , , , and will be addressed in §4a, and the proof will be shown in §4e.

    The concept of IGS is straightforward and simple. However, the difficulty and novelty of this paper lie in the solution of a combinatorial matrix problem (§3a), the estimation of diameter and distance growth (§3b) and the analysis of random substitution (§4).

    2. Deterministic iterated graph systems

    (a) Minkowski dimension for graphs

    In this subsection, we will define the limit of a graph sequence and the Minkowski dimension, also known as the box-counting dimension, for graphs to facilitate the study of fractal-like graphs. With these foundational concepts established, we can proceed to analyse IGS.

    Definition 2.1. Let be a weakly connected graph containing no self-loops or multiple edges but with directions and uniform weights. To be specific, we say is a triple . is the set of vertices, and is the set of edges if they are undirected, or arcs if directed. is the weight or measure of arcs. In this paper, the weight of arcs in a graph is always uniform. Unless otherwise specified, the weight of an unscaled graph is always , that is given .

    In this paper, the distance between vertices and on a directed graph is also defined as

    where is the underlying undirected graph of . That is, regarding the distance we always omit the direction.

    Denote a simple path, omitting the direction of arcs, from vertex to vertex in by and denote the collection of all simple paths between and in by . If there is no ambiguity, we may abbreviate as and as in our notation.

    Define

    Let the diameter of be

    Definition 2.2. We now define the scaling graph of by

    Define the metric of by

    naturally constructs a compact metric space.

    Definition 2.3. Let be the Hausdorff distance between two subsets and of a metric space [29]. For any two metric spaces and , we say is an admissible metric on if is a metric space and and . Let be the Hausdorff metric induced by on . Define

    to be the Gromov–Hausdorff distance of Gromov [30].

    Let be a graph sequence of finite graphs. is said to have a Gromov–Hausdorff scaling limit if there exists a metric space such that:

    It is evident that the Gromov–Hausdorff scaling limit associated with the metric space can contain infinitely many vertices and arcs.

    Example 2.1. An example of scaling graphs is given in figure 2, which shows the first four scaling graphs associated to the iterated graph system with as initial graph and with as the substitution rule graph. For simplicity, arcs are here drawn as undirected edges. As , the measure of each arc in is scaled to .

    Figure 2. An example of scaling graphs: Xi fractal in Xi et al. [22].

    Also refer to figure 3 for an illustration of represented as a complex network. This figure is generated using Python NetworkX.

    Figure 3. Xi fractal as a complex network.

    Definition 2.4. Let be a metric space. A family of sets , where is an index set, is said to be a -covering of if:

    Define the smallest number of -covering boxes for by:

    In this paper, we stipulate that -covering applies to , while -covering applies to .

    Definition 2.5. Let be a Gromov–Hausdorff scaling limit of some graph sequence. The Minkowski dimension is defined as:

    if such a limit exists. We say that a graph has the fractality property if and only if the Minkowski dimension of exists and is positive. In this case, we call a graph fractal.
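    As a quick numerical sanity check on definition 2.5 (an illustrative sketch only: the box-count function below is a toy model with a prescribed scaling law, not one of the paper's graphs), the dimension can be read off as the limit of log N_δ / log(1/δ):

```python
import math

# Minkowski (box-counting) dimension estimate: log N_delta / log(1/delta)
# along a sequence of scales delta -> 0. "N_of" is a hypothetical
# box-count function; here N_delta = delta**(-s) is a toy model whose
# dimension is exactly s = log 3 / log 2.
def minkowski_estimates(N_of, deltas):
    return [math.log(N_of(d)) / math.log(1.0 / d) for d in deltas]

s = math.log(3) / math.log(2)
ests = minkowski_estimates(lambda d: d ** (-s),
                           [2.0 ** -k for k in range(2, 10)])
# Each estimate recovers the prescribed dimension s.
```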

    (b) Deterministic iterated graph systems

    Definition 2.6. See figure 4 for an example.

    Figure 4. An example of deterministic IGS.

    Let the number of colours be . The Iterated Graph System is a triple such that:

    is a finite directed graph;

    , a family of directed graphs, is called the seed graphs. Each seed graph must contain two planting vertices .

    The colour function is described by representing the colour of some arc in . The substitution operator is defined as:

    fixing

    That is, for an arc linking vertex to , we substitute the arc by , with and coinciding with and . In this paper, we require .

    We now define a discrete time dynamical system by

    satisfying

    • for any .

    We call such an Iterated Graph System and the graph sequence substitution networks. Let be the Gromov–Hausdorff scaling limit of .

    Intuitively, to construct a sequence of growing graphs:

    • start from the initial graph , which is usually just a single arc (for instance a blue arc in figure 4);

    • we construct by replacing all -coloured arcs in by . To ensure the substitution is unique, we fix of at the starting vertex of the arc and fix of at the ending vertex of the arc;

    • we then iteratively replace all -coloured arcs for all in to obtain . In this way, denote the graph after iterations by ; accordingly, we have a sequence of graphs called substitution networks. We will prove in §3d that for any iterated graph system , the Gromov–Hausdorff scaling limit always exists.
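    The three steps above can be sketched in a few lines of Python (illustrative only: the arc representation, the `substitute_once` helper and the toy blue/red seed rules are assumptions, not the paper's notation):

```python
from itertools import count

fresh = count()  # generator of fresh labels for new internal vertices

def substitute_once(arcs, seeds):
    """Replace every c-coloured arc (u, w, c) by a copy of the seed
    graph seeds[c], gluing its planting vertices onto u and w."""
    new_arcs = []
    for (u, w, c) in arcs:
        seed_arcs, v_in, v_out = seeds[c]
        # Planting vertices glue onto the arc's endpoints; every other
        # seed vertex gets a fresh label, one copy per substituted arc.
        relabel = {v_in: u, v_out: w}
        for (a, b, col) in seed_arcs:
            for v in (a, b):
                if v not in relabel:
                    relabel[v] = ('new', next(fresh))
            new_arcs.append((relabel[a], relabel[b], col))
    return new_arcs

# Toy rules: a blue arc becomes a blue-red path; a red arc stays red.
seeds = {
    'blue': ([('i', 'm', 'blue'), ('m', 'o', 'red')], 'i', 'o'),
    'red':  ([('i', 'o', 'red')], 'i', 'o'),
}
G = [('s', 't', 'blue')]   # G_0: a single blue arc
for _ in range(3):          # G_1, G_2, G_3
    G = substitute_once(G, seeds)
```

With these toy rules, each iteration keeps one blue arc and appends one red arc, so G_3 has four arcs.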

    (c) Relevant definitions

    Before strictly presenting the main results, several definitions are to be introduced in advance.

    Notation 2.1. For any -dimensional vector , write for each . Define to be the vector whose -th entry is the number of -coloured arcs in graph . For instance, in figure 1. Let denote that for all .

    Definition 2.7. Define the matrix with entries:

    Definition 2.8. Recall that a path in a graph is simple if it has no repeating vertices. In this paper, we assume that all paths are simple.

    Denote a path between vertices and in the underlying (undirected) graph by , and:

    Consider the Cartesian product:

    Each element (or choice) is a vector of length , each of whose entries lies in .

    Define, for each ,

    Let be the set of all matrices

    where for each or, equivalently, . Let denote the spectral radius, and denote the smallest spectral radius of all matrices in :

    Also, define:

    If and all matrices in are irreducible, we say the iterated graph system is irreducible.
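    As a small computational sketch of picking the smallest spectral radius over a finite matrix family (the two irreducible non-negative matrices below are illustrative stand-ins, not the matrices of this paper), power iteration suffices:

```python
def spectral_radius(M, iters=200):
    """Power iteration for the spectral radius of a non-negative,
    irreducible square matrix given as a list of rows."""
    n = len(M)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)                    # max-norm of the iterate
        v = [x / lam for x in w]        # renormalize
    return lam

# Two illustrative irreducible candidates standing in for the family.
candidates = [[[1.0, 2.0], [1.0, 1.0]],   # spectral radius 1 + sqrt(2)
              [[1.0, 1.0], [1.0, 1.0]]]   # spectral radius 2
rho_min = min(spectral_radius(M) for M in candidates)
```

In the paper's setting one would enumerate all path choices (one matrix per element of the Cartesian product in definition 2.8) and take the minimum in the same way.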

    Theorem 2.1. (main theorem). Given any deterministic irreducible IGS and the associated Gromov–Hausdorff scaling limit , we have:

    Remark 2.1. For any network with a given algebraic number fractal dimension, we can always directly construct a deterministic iterated graph system that possesses the same box dimension.

    In the future, we may define an approximate dimension for finite graphs to address the convergence rate of dimensions.

    Example 2.2. Recall the iterated graph system of figure 4. For definition 2.7, we have

    with .

    For definition 2.8, there are paths in , say and , where path contains two blue edges and one red edge; that is , and where path contains zero blue and two red edges; that is . In , there are also two paths and . contains one blue edge and four red edges; that is . Similarly, . Hence,

    As a result, contains matrices, which are

    The smallest spectral radius of these matrices is and a matrix in with this spectral radius is

    In conclusion, by theorem 2.1, we obtain for the demonstrated example

    Modelling the network, we obtain the simulated values given in figure 6. It shows a simulation of the Minkowski dimension of , providing an estimated Minkowski dimension .

    See figure 5 for a fractal and scale-free complex network. It illustrates the model from figure 4 in the form of a complex network, created using Python’s NetworkX. According to the previous conclusions, this complex network has an approximate Minkowski dimension of and a degree dimension of (figure 6). The degree dimension refers to the power law’s index of scale-freeness, as defined in Li & Britz [1], and it is calculated by:

    Figure 5. Iterated graph system in figure 4 as a complex network.

    Figure 6. Test of fractality when .
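    As an aside, a power-law index of a degree sequence can be estimated numerically; the least-squares log-log fit and synthetic data below are illustrative only, not the paper's exact formula for the degree dimension (which is defined in Li & Britz [1]):

```python
import math

def powerlaw_slope(degrees):
    """Least-squares slope of log(count) against log(degree); for a
    scale-free degree sequence this approximates the power-law index."""
    counts = {}
    for d in degrees:
        counts[d] = counts.get(d, 0) + 1
    pts = [(math.log(d), math.log(c)) for d, c in counts.items()]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

# Synthetic degree sequence with count(d) roughly proportional to d**-2,
# so the fitted slope should be close to -2.
degrees = []
for d in range(1, 6):
    degrees += [d] * (240 // d ** 2)
slope = powerlaw_slope(degrees)
```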

    We will later prove theorem 2.1 through the following two theorems.

    Theorem 2.2. We will prove in §3c that:

    Theorem 2.3. We will prove in §3e that:

    (d) Growth rates of arcs and vertices

    This section introduces several necessary tools and properties for the proof of fractality in §3. We omit the proofs of those results that may be found in [1,24].

    Notation 2.2. is defined as asymptotic equivalence. We write if , where and may be . We may omit if no ambiguity occurs. Let be the -row -column entry of matrix .

    Lemma 2.1. Let be an irreducible non-negative matrix with spectral radius . Then, for any non-negative vector and for any ,

    where is a constant depending on and .

    Lemma 2.2. Let be an irreducible non-negative matrix with spectral radius . Then, for any non-negative vector ,

    Definition 2.9. Let be a non-negative matrix where

    where is only an -coloured arc.

    This means that is the matrix whose entries equal the number of -coloured arcs that result from substituting arcs times: first substituting an -coloured arc, and then substituting, times, the subsequently resulting arcs of all colours.

    Lemma 2.3. For each , .

    Proof. By induction.∎

    Lemma 2.4..

    Proof. By . ∎

    Lemma 2.5. .

    Proof. Define:

    Then, . Notice all new vertices in are in fact generated by substituting arcs in . Hence, for all ,

    Given the initial graph ,

    Since the row vector only contains constants,

    Definition 2.10. The Gromov–Hausdorff scaling limit of graph sequence is a sparse graph if the graph density .

    Theorem 2.4. can only be a sparse graph.

    Proof.

    3. Proof of theorem 2.1

    This section is devoted to proving theorem 2.1. Before doing so, a few short but technical results are required, especially theorems 3.2 and 3.3.

    (a) A combinatorial matrix problem

    Lemma 3.1. Suppose that, for any non-negative matrix and positive row vector , . Then .

    Proof. See lemma 8.1 in Li & Britz [1].∎

    Recall the defined notation from definition 2.8.

    Theorem 3.1. Suppose that all matrices in are irreducible, and let and . Let be the corresponding right column Perron–Frobenius vector of . Then,

    Proof. Assume that . Define to be the matrix obtained by replacing the -th row of by , and note that . Then for each ,

    whereas . Therefore,

    By lemma 3.1,

    contradicting the definition of .∎

    Next is our second auxiliary result.

    Theorem 3.2. For any number of matrices ,

    Proof. For each and , the row vector satisfies by theorem 3.1, so

    Then,

    so

    By the Collatz–Wielandt formula,

    Remark 3.1. If is not defined in the same way as definition 2.8, then the theorem does not hold. A straightforward example is given by:

    with . Then, .

    However,

    is the family satisfying theorem 3.2.

    The inherent property of this matrix family fundamentally ensures the existence of the Gromov–Hausdorff scaling limit .

    (b) Distance growth between vertices

    In Li et al. [24], the authors provided an analysis of the distances between vertices in deterministic IGS, and it was proved that these distances grew with bounded growth rates. However, a feasible formula was not given to calculate these rates; they are indeed difficult to determine.

    To illustrate why these distances are difficult to estimate, consider figure 7 in which a deterministic iterated graph system is shown, with just two substitution rules: one for blue arcs and one for red arcs. In the shown graph sequence , the distance between the initial vertices and is 1. In , this distance is 2 and is the length of the path . However, substituting the arcs of gives paths of length 10 between and ; this is larger than 9, the distance between and . Rather, the shortest paths between and , such as , here arise disjointly and independently from the arcs of .

    Figure 7. An example of deterministic IGS.

    In general, the shortest paths between vertices and in each graph and the shortest paths between vertices and in may be largely unrelated to each other. Nevertheless, it is possible to investigate how distances develop during the substitution process. For this purpose, consider the two vertices and in and let and also denote these two fixed vertices in each subsequent graph , as indicated in figure 7. Let denote a shortest path between and in .

    Definition 3.1. In this paper, we without loss of generality let be a single arc, and . A path in is uniform if it arises by successive arc substitutions from paths in , respectively, such that, for each and each colour , all arcs of colour in when substituted are in replaced by the same path between to in each for all . For instance, the path in figure 7 is uniform.

    Lemma 3.2. For each , at least one shortest path is uniform.

    Proof. For , the lemma is trivially true, so assume that some shortest path in is uniform. Since can be obtained by substituting every -coloured arc in by some , each shortest path in is obtained by concatenating a sequence of paths in different . By choosing the same path for each -coloured arc, the resulting shortest path will be uniform. Induction completes the proof.∎

    Now, we focus on calculating the lengths of uniform paths recursively. Recall from definition 2.8 that is the set of subgraphs of whose underlying (undirected) graph is a path between and . Each element is of the Cartesian product

    Note that if is a uniform path between and , then the subsequent uniform paths arising from are obtained by choosing an element and replacing each -coloured edge of by the value of each element in . For each tuple,

    define to be the matrix,

    Note that by definition 2.8.

    Lemma 3.3.

    Proof. Each given tuple determines a unique expected uniform path based on the given uniform path . To determine the length of explicitly, recall the notation from definition 2.8. In particular, and,

    If we fix , then

    As at least one shortest path is proved to be uniform,

    Theorem 3.3.

    Proof. By lemma 3.3, lemma 2.1 and theorem 3.2,

    (c) Proof of theorem 2.2

    Definition 3.2. Let be the maximum number of node-disjoint boxes of with diameter . That is, is the packing number.

    Lemma 3.4. .

    Proof. Choose vertices whose pairwise distances are greater than . Since no subgraph of diameter at most contains more than one of the covering boxes, .∎

    Notation 3.1. Let denote for some given constant . Similarly for . Recall that denotes the diameter of graph .

    Theorem 3.4.

    Proof. Let denote:

    Note that if , then , so

    Observe that,

    Let . Then,

    Note that here, in fact refers to various shortest paths given different initial graphs. However, theorem 3.3 sheds light on this problem: all are asymptotically equivalent up to constant factors. Hence, the right-hand side above satisfies:

    By lemma 2.1, , as a result

    We now prove theorem 2.2.

    Proof. By lemma 3.4,

    Set . We can easily find a finite constant depending only on the IGS such that every arc in turns into a in . Also, since a constant number of subgraphs with diameter can cover . Therefore,

    Recall theorem 2.4, providing that for ,

    Then,

    (d) Existence of the Gromov–Hausdorff scaling limit for iterated graph systems

    This section aims to prove the existence of the Gromov–Hausdorff scaling limit for IGS.

    Theorem 3.5. The Gromov–Hausdorff scaling limit of exists.

    Proof. Let . First, we show the key fact that for any , there exists such that:

    This is because

    where is given by theorem 3.3 and is given by theorem 3.4.

    We now prove that constructs a metric space. As discussed above, for and clearly . Since is a metric space for , for ,

    Therefore, by a convenient trick, we consider the metric space and note that . We now construct a nested set by, for , , and :

    Given that is a nested set, the construction is used to show that by theorem 3.4:

    Finally,

    Writing , we obtain the desired result.∎

    A dynamical system on a metric space under mapping is said to be distal if for any , .

    Corollary 3.1. and under are distal.

    A topological space is said to be (perfectly normal Hausdorff) if it is normal and every closed subset is a subset. A straightforward fact is that every metrizable space is perfectly normal Hausdorff.

    Corollary 3.2. is a space.

    (e) Proof of theorem 2.3

    This subsection aims to demonstrate a result for the Hausdorff dimension of . This could be done by showing that is quasi-self-similar and then applying Falconer’s implicit theorem [31]. However, since this subsection defines a slightly different Hausdorff dimension for graphs, we present a full proof of the desired result and also a proof of quasi-self-similarity.

    There is a substantial amount of literature on the equivalence of box dimension and Hausdorff dimension, particularly in the context of invariant measures in various iterated systems. For instance, see [9,18–20,28].

    Federer and Rogers [32,33] thoroughly investigated Hausdorff measure and dimension on metric spaces. We now introduce a definition of Hausdorff dimension for graphs.

    Definition 3.3. Define Vertex Hausdorff measure for graph :

    Then, set . Here is a -cover for if

    Define the Edge Hausdorff measure for graphs by:

    Then, set . Here is a -cover for if

    Let the Hausdorff dimension for graphs be:

    Remark 3.2. There is no inequality between and .

    An example is given exactly by IGS. Choosing any , there is always . However, is countable while is uncountable. Hence, we have , but uncountably many arcs imply that . Though this is very much a straightforward fact, it is occasionally called the paradox of the binary tree.

    Theorem 3.6.

    Proof. By theorem 2.2,

    By remark 3.2, we have and

    It is trivial to check that

    Hence, we only need to calculate the lower bound of the Hausdorff dimension.

    Fix and consider . Let be a -cover of . We define:

    Accordingly, as theorem 3.3 points out the growth of , we can also have:

    Picking any node , let

    Then, it is clear that and thus . Given that , we can always find a constant (not depending on ) such that is a subgraph of some . This is because substitution networks can be regarded as a composition of many (if is large enough).

    As a result,

    and hence . Therefore,

    by lemma 3.3 and theorem 3.4.

    Finally, because is a covering, by lemma 2.5,

    Hence,

    which completes the proof of theorem 2.1.∎

    Definition 3.4. A metric space is said to be quasi-self-similar if there exists such that for any with being a ball, there is a mapping such that for :

    Theorem 3.7. is quasi-self-similar.

    Proof. Recall and .

    Let and . By the property of dynamical systems, any must be a subgraph generated by some arc in with , say . Hence, we only need to define the mapping from the subgraph generated by . Without loss of generality, let be the graph generated by .

    By theorem 3.4, we have:

    (3.1)

    Since has an -coloured arc, we can define a natural bijection:

    because these two graphs are exactly the same for each . As a result, for any , we have by theorem 3.3

    (3.2)

    Also by theorem 3.3,

    (3.3)

    In light of equations (3.1)–(3.3), we obtain:

    Corollary 3.3. By [31], we have .

    4. Random iterated graph systems and proof of theorem 4.1

    (a) Random iterated graph systems

    Definition 4.1. See figure 8 for an example.

    Figure 8. An example of random IGS.

    Let be the number of colours. is the number of seed graphs for colour where . The Random Iterated Graph System is a quadruple satisfying

    is a finite directed graph;

    where are called seed graphs. Each seed graph must contain two planting vertices ;

    is a discrete probability measure for each such that . Hence, is a probability vector and is a family of probability vectors.

    Let be a colour function. A random substitution operator is defined as

    fixing with probability ,

    where and coincide with the starting vertex and ending vertex of respectively. In this paper, we require .

    We now define a discrete time stochastic process of graphs by:

    with probability if such an arc is substituted by in .

    Let be the collection of all possible . The discrete probability space is given by being a filtration on sigma-algebra and being a discrete probability measure. To be specific,

    We call the stochastic process of graphs random substitution networks . Let be the Gromov–Hausdorff scaling limit of some stochastic process and denote the collection of all possible by .

    By the Kolmogorov extension theorem, there exists a unique probability measure on . In this paper, we focus on the properties of the sample space with the extended probability measure defined above.

    See figure 8 for an example. For each , set a positive integer , which is the number of possible rule graphs of colour . Each directed seed graph has a vertex and a vertex that, respectively, replace the beginning node and ending node of ; this will determine exactly how replaces . In addition, in this paper, we always require for all and .

    Intuitively, to construct a sequence of growing graphs,

    • start from the initial graph , which is usually just a single arc;

    • we construct by replacing all -coloured arcs in randomly by a graph in , say with probability . Note here, as stated above, that the substitutions regarding nodes and are unique;

    • we then iteratively replace all -coloured arcs for all to obtain . In this way, denote the graph after iterations by ; accordingly, we have a stochastic process of graphs called random substitution networks.

    (b) Relevant definitions

    Definition 4.2. A finite set containing irreducible matrices with a probability vector is called a random matrices ensemble [34].

    Definition 4.3. Let

    be with . Clearly , and hence is a random matrices ensemble with elements.

    Definition 4.4. Let be a path between and in , and be the collection of all possible in . For convenience, we will here write and simply as and .

    By the definition, is a set containing matrices with:

    In this way, is a random matrices ensemble.

    Let

    In other words, is a collection of all possible random matrices ensembles . Moreover,

    If all matrices in and are irreducible, we say the random iterated graph system is irreducible.

    Notation 4.1. Let be the (maximal) Lyapunov exponent of random matrices ensemble with probability vector. We will strictly define in §4c.

    Theorem 4.1. (main theorem). Given any random irreducible iterated graph system, for -almost all in , or equivalently:

    Remark 4.1. For any network with a given algebraic number fractal dimension, we can always directly construct a random iterated graph system that possesses the same dimension.

    Example 4.1.

    respectively, with the probability vector . The Lyapunov exponent:

    Hence, in this example, contains random matrices ensembles.

    all with the probability vector . The Lyapunov exponents of the elements shown above are , respectively. The smallest Lyapunov exponent in is .

    By theorem 4.1, we have the theoretical dimension:

    Similarly modelling the limit graph, we obtain the simulated values given in figure 9. It shows ten simulations of the Minkowski dimension of , providing an estimated Minkowski dimension .

    Figure 9. Ten simulations of fractality when n = 5.

    (c) Lyapunov exponents and stochastic substitution systems

    This subsection is devoted to studying the stochastic behaviours of random IGS. We abstract the process of random substitution on graph fractals into so-called stochastic substitution systems, and we prove some properties of the associated Lyapunov exponents. We shall start by introducing Lyapunov exponents.

    Lyapunov exponents of products of random matrices were first studied by Bellman [35]. Furstenberg and Kesten [36,37] then developed famous results on the asymptotic behaviour of products of random matrices. More recently, Pollicott [38,39] presented several impressive works on effective numerical estimates of Lyapunov exponents.

    We will continue to use the notation of Pollicott [38] for Lyapunov exponents. Let be a set of random irreducible and invertible matrices with probability vector . Consider the sequence space , and let be the corresponding Bernoulli measure on this space. Choosing , the Lyapunov exponent of products of random matrices is defined as:

    (4.1)

    Furstenberg & Kesten [36] showed that if is a stationary stochastic process, then for almost every ,

    (4.2)

    for any entry. In this subsection, is valid for any norm.
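    Formula (4.2) suggests a direct Monte Carlo estimate of the Lyapunov exponent: iterate a vector under independently chosen random matrices and average the logarithmic growth. The sketch below is illustrative only (the two shear matrices, the probabilities and the function name are assumptions, not the paper's ensemble):

```python
import math
import random

def lyapunov_estimate(matrices, probs, n=5000, seed=0):
    """Estimate the maximal Lyapunov exponent of a product of i.i.d.
    2x2 random matrices, renormalizing each step to avoid overflow."""
    rng = random.Random(seed)
    v = [1.0, 1.0]
    log_growth = 0.0
    for _ in range(n):
        A = rng.choices(matrices, weights=probs)[0]
        v = [A[0][0] * v[0] + A[0][1] * v[1],
             A[1][0] * v[0] + A[1][1] * v[1]]
        norm = max(v)                  # max-norm; entries stay positive
        log_growth += math.log(norm)
        v = [x / norm for x in v]
    return log_growth / n

# Two non-negative shear matrices chosen with equal probability; the
# resulting exponent is strictly positive.
ensemble = ([[1.0, 1.0], [0.0, 1.0]], [[1.0, 0.0], [1.0, 1.0]])
lam = lyapunov_estimate(ensemble, [0.5, 0.5])
```

For a one-matrix ensemble the estimate reduces to the logarithm of the spectral radius, which gives a quick consistency check.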

    In this paper, we will discuss a slightly different type of matrix product, but it will be proved that the Lyapunov exponent of such a system coincides with that of the associated products of random matrices.

    Definition 4.5. Let be a random matrices ensemble with probability vector . For each , let be the -th standard basis unit vector of .

    Define a random operator . Define independent and identically distributed , each with probability function:

    Here, the random operator randomly maps a basis unit vector to some vector by a certain probability distribution. Consider any non-negative vector , and define the random operator by:

    Let be a given non-negative vector with integer entries and define a sequence of stochastic vectors by the iterations:

    We call such the triple a stochastic substitution system. Let be a probability space with being a sample space.
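    To make the 'slightly different matrix product' concrete, here is a hedged numerical sketch of one plausible reading of the (partly elided) definition above: at every step, each basis direction is substituted according to an independently drawn matrix, so the step matrix has independently sampled columns. All matrices and parameter values are hypothetical illustrations, not taken from the paper.

    ```python
    import math
    import random

    def column_random_step(matrices, probs, x, rng):
        """One step of the 'column-wise random' product: the j-th column of
        the step matrix comes from an independently drawn member of
        `matrices` (an assumed reading of the elided definition)."""
        d = len(x)
        y = [0.0] * d
        for j in range(d):
            A = rng.choices(matrices, weights=probs)[0]
            for i in range(d):
                y[i] += A[i][j] * x[j]
        return y

    def system_exponent(matrices, probs, n_steps=20000, seed=0):
        """Estimate lim (1/n) log ||v_n|| with per-step renormalization."""
        rng = random.Random(seed)
        x = [1.0] * len(matrices[0])
        total = 0.0
        for _ in range(n_steps):
            y = column_random_step(matrices, probs, x, rng)
            norm = max(abs(c) for c in y)  # sup norm; telescopes as before
            total += math.log(norm)
            x = [c / norm for c in y]
        return total / n_steps

    # Diagonal sanity check: with 2I and 3I each coordinate independently
    # multiplies by 2 or 3, so the exponent is again (log 2 + log 3)/2.
    est = system_exponent([[[2.0, 0.0], [0.0, 2.0]],
                           [[3.0, 0.0], [0.0, 3.0]]], [0.5, 0.5])
    ```

    In this diagonal example the column-wise product and the ordinary random product share the same exponent, which is consistent with (though of course not a proof of) the coincidence claimed for these systems.
    
    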

    Definition 4.6. Let a stochastic substitution system be given as in definition 4.5. The Lyapunov exponent of the system is defined by:

    $$\lambda \;=\; \lim_{n\to\infty}\frac{1}{n}\,\mathbb{E}\big(\log\|v_n\|\big),$$

    where $\mathbb{E}$ is the expectation.

    Theorem 4.2. Let a stochastic substitution system be given as in definition 4.5. Then its Lyapunov exponent coincides with the Lyapunov exponent (4.1) of the associated products of random matrices.

    Proof. First, the iterated vector is a non-negative row vector. By formula (4.2), we have:

    Because the random operators at each step are all independent, we find:

    Finally, the result follows by combining the above.∎

    Theorem 4.3. For $\mathbb{P}$-almost all sample paths,

    $$\lim_{n\to\infty}\frac{1}{n}\log\|v_n\| \;=\; \lambda.$$

    Proof. This follows from formula (4.2) and theorem 4.2.∎

    (d) Growths of arcs and vertices

    Lemma 4.1. -almost surely, .

    Proof. The arc-count process is clearly a stochastic substitution system, because every arc substitution is independent and associated with some matrix in the ensemble. The desired result then follows from theorem 4.3.∎

    Lemma 4.2. -almost surely, .

    Proof. The proof takes the same form as that of lemma 2.5, with lemma 4.1 providing the required almost-sure growth rate.∎

    Theorem 4.4. is always a sparse graph.

    Proof. For all possible realizations, we always have

    Remark 4.2. Theorem 4.4 holds even if all rule graphs are complete graphs.

    (e) Proof of theorem 4.1

    For notational convenience, write

    Lemma 4.3. .

    Proof. For a fixed colour, we verify that the corresponding process is a stochastic substitution system. This is because, for any coloured arc in a rule graph, we can always fix one path, and the path generated by these fixed paths is also uniform.

    By lemma 3.2, there exists one choice that is almost surely the shortest. In other words, for every rule graph, even though there may be one or many possible paths between the two endpoints, lemma 3.2 guarantees that an optimum can be achieved if we pick only one particular path between the endpoints from each rule graph.

    Hence, the paths between the endpoints all possess the stated growth rate; among them, the shortest path follows the claimed growth rate.∎

    Lemma 4.4. .

    Proof. The proof is analogous to that of theorem 3.4, using lemma 4.3.∎

    We are now able to prove theorem 4.1.

    Proof. We start by verifying the Minkowski dimension. By lemma 4.4, the diameter of the iterated graphs almost surely grows at the stated rate. We can apply the techniques from the proof of theorem 2.2 in §3c, and similarly obtain that the dimension formula holds for almost every realization. This suffices to establish the result with probability one.

    For the Hausdorff dimension, we again adopt the definition in §3e. With the same techniques for estimating the mass distribution, the result reduces, $\mathbb{P}$-almost surely, to the desired bound.

    Consequently, for $\mathbb{P}$-almost all realizations, the Minkowski and Hausdorff dimensions coincide, which completes the final proof.∎

    Data accessibility

    This article has no additional data.

    Declaration of AI use

    I have not used AI-assisted technologies in creating this article.

    Authors’ contributions

    Z.N.: conceptualization, data curation, formal analysis, funding acquisition, investigation, methodology, project administration, resources, software, validation, visualization, writing—original draft, writing—review and editing.

    Conflict of interest declaration

    I declare I have no competing interests.

    Funding

    This work was supported by the Additional Funding Programme for Mathematical Sciences, delivered by EPSRC (EP/V521917/1) and the Heilbronn Institute for Mathematical Research. This work was also supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1).

    Acknowledgements

    The author would like to thank his mom Birong Lin for her persistent support. The author would like to thank Thomas Britz and the anonymous reviewers for their precious comments. The author would like to thank Mark Pollicott for his profound suggestions.

    Footnotes

    Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.
