Supplementary Information

Mae Woods, Miriam Leon, Ruben Perez and Chris P. Barnes
Contents

1 Supplementary Methods
  1.1 Bayesian inference and model selection
  1.2 Approximate Bayesian Computation
    1.2.1 Parameter inference
    1.2.2 Model selection
    1.2.3 Variable selection ABC
    1.2.4 Kernels for the indicator variables
  1.3 Functional regions, Bayesian integrals and Occam's razor
  1.4 Biochemical modelling
  1.5 Precomputing the prior over networks
    1.5.1 General three node network
    1.5.2 General four node network
2 Supplementary Figures

1 Supplementary Methods

1.1 Bayesian inference and model selection
Let θ ∈ Θ be a parameter vector with prior π(θ) and f(y|θ) be the likelihood of the data y ∈ D. In Bayesian inference we are interested in the posterior density

$$\pi(\theta|y) = \frac{f(y|\theta)\,\pi(\theta)}{\int_{\Theta} f(y|\theta)\,\pi(\theta)\,d\theta}.$$
Comparison of a discrete set of models can be performed using the marginal posterior. Suppose we have a set of competing models M ∈ M = {M1, M2, ..., Mq}. Consider the joint space defined by (M, θ) ∈ M × Θ_M; Bayes' theorem can then be written

$$\pi(M|y) = \frac{f(y|M)\,\pi(M)}{\int_{\mathcal{M}} f(y|M')\,\pi(M')\,dM'} = \frac{f(y|M)\,\pi(M)}{\sum_{M' \in \mathcal{M}} f(y|M')\,\pi(M')},$$
where f(y|M), the marginal likelihood, can be written

$$f(y|M) = \int_{\Theta_M} \pi(\theta|M)\,f(y|\theta, M)\,d\theta.$$
Therefore the posterior probability of a model is given by the normalized marginal likelihood which may or may
not be weighted depending on whether the prior over models is informative or uniform respectively.
1.2 Approximate Bayesian Computation

1.2.1 Parameter inference
Consider the case where we cannot write down the likelihood in closed form but we can simulate from the data generating model. We can proceed by first sampling a parameter vector from the prior, θ∗ ∼ π(θ), and then sampling a data vector, x∗, from the model conditional on θ∗, i.e. x∗ ∼ f(x|θ∗). This alone gives the joint density π(θ, x). To obtain samples from the posterior distribution we must condition on the data y, and this is done via an indicator function, i.e.

$$\pi(\theta, x|y) = \frac{\pi(\theta)\,f(x|\theta)\,\mathbb{I}_{A_y}(x)}{\int_{A_y \times \Theta} \pi(\theta)\,f(x|\theta)\,dx\,d\theta},$$
where IB (z) denotes the indicator function and is equal to 1 for z ∈ B. Here Ay = {x ∈ D : x = y}, so the
indicator is equal to one when the simulated data and the observed data are identical. This forms a rejection
algorithm, and in this instance the accepted θ∗ are from the true posterior density π(θ|y).
For most models it is impossible to achieve simulations with outputs in the subset A_y and so an approximation must be made. This is the basis of ABC. In the first instance we can replace A_y by A_{y,ε} = {x ∈ D : ρ(x, y) ≤ ε}, where ρ : D × D → R+ is a distance function comparing the simulated data to the observed data. We then have

$$\pi_{\varepsilon}(\theta, x|y) = \frac{\pi(\theta)\,f(x|\theta)\,\mathbb{I}_{A_{y,\varepsilon}}(x)}{\int_{A_{y,\varepsilon} \times \Theta} \pi(\theta)\,f(x|\theta)\,dx\,d\theta},$$

where π_ε is an approximation to the true posterior distribution. The rationale behind ABC is that if ε is small then the resulting approximate posterior, π_ε, is close to the true posterior. We often write the marginal posterior distribution as π(θ|ρ(x∗, y) ≤ ε).

Often, for complex stochastic systems, the subset A_{y,ε} is still too restrictive. In these cases we can resort to comparisons of summary statistics. We now specify the subset A_{y,η,ε} = {x ∈ D : ρ_S(η(x), η(y)) ≤ ε}, where η : D → S is a summary statistic and the distance function now takes the form ρ_S : S × S → R+.
The simplest algorithm is known as the ABC rejection algorithm [1] and proceeds as follows:

R1 Sample θ∗ from π(θ).
R2 Simulate a dataset x∗ from f(x|θ∗).
R3 If ρ(x∗, y) ≤ ε, accept θ∗; otherwise reject.
R4 Return to R1.
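The loop R1–R4 can be sketched in a few lines. This is a toy illustration only: the uniform prior, Gaussian simulator and distance below are ours, not the models used in this study.

```python
import random

def abc_rejection(y_obs, n_samples, eps):
    """ABC rejection: repeat steps R1-R4 until n_samples particles are accepted."""
    accepted = []
    while len(accepted) < n_samples:
        theta = random.uniform(-10.0, 10.0)   # R1: sample theta* from the prior
        x = random.gauss(theta, 1.0)          # R2: simulate a dataset x* ~ f(x|theta*)
        if abs(x - y_obs) <= eps:             # R3: accept if rho(x*, y) <= eps
            accepted.append(theta)            # R4: return to R1
    return accepted

random.seed(0)
posterior = abc_rejection(y_obs=2.0, n_samples=500, eps=0.5)
```

The accepted particles are draws from π_ε; shrinking eps tightens the approximation at the cost of more rejected simulations.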
This gives draws from π_ε but can be very inefficient in high dimensional models or when the overlap between the prior and posterior distributions is small. One way to improve the efficiency of the rejection algorithm is to perform sequential importance sampling (SIS) [2]. In SIS, instead of sampling directly from the posterior distribution, sampling proceeds via a series of intermediate distributions. The importance distribution at each stage is constructed from a perturbed version of the previous population. This approach can be used in ABC and the resultant algorithm is known as ABC SMC [3]. Described here is a slightly modified version that automatically calculates the ε schedule and, as such, only the final value, ε_T, need be specified. To obtain N samples {θ1, θ2, θ3, ..., θN} (known as particles) from the posterior, defined as π(θ|ρ(x∗, y) ≤ ε_T), proceed as follows:
S1   Initialize ε = ∞. Set the population indicator t = 0.
S2.0 Set the particle indicator i = 1.
S2.1 If t = 0, sample θ∗∗ independently from π(θ).
     If t > 0, sample θ∗ from the previous population {θ^i_{t−1}} with weights w_{t−1} and perturb the particle, θ∗∗ ∼ K_t(θ|θ∗), where K_t is the perturbation kernel.
     If π(θ∗∗) = 0, return to S2.1.
     Simulate a candidate dataset x∗ ∼ f(x|θ∗∗).
     If ρ(x∗, y) > ε, return to S2.1.
S2.2 Set θ^i_t = θ∗∗ and d^i_t = ρ(x∗, y), and calculate the weight as
     $$w^i_t = \begin{cases} 1 & \text{if } t = 0 \\ \dfrac{\pi(\theta^i_t)}{\sum_{j=1}^{N} w^j_{t-1}\, K_t(\theta^i_t | \theta^j_{t-1})} & \text{if } t > 0. \end{cases}$$
     If i < N, set i = i + 1, go to S2.1.
S3.0 Normalize the weights.
     Determine ε such that Pr(d_t ≤ ε) = α.
     If ε > ε_T, set t = t + 1, go to S2.0.
Here K_t(θ|θ∗) is the component-wise random walk perturbation kernel that, in this study, perturbs each component as θ∗∗ = θ∗ + U(−δ, δ), where δ = ½ range{θ_{t−1}}. The denominator in the weight calculation can be seen as the probability of observing the current particle given the previous population. The value α is the quantile of the distance distribution from which to choose the next ε. This is set conservatively (α = 0.3), since taking small steps in the distance schedule can lead to the algorithm getting stuck in local minima [4].
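The kernel and the weight denominator can be sketched as follows; a toy illustration with our own function names, using the product of component-wise U(−δ, δ) densities:

```python
import random

def perturb(theta, delta):
    """Component-wise random walk: theta** = theta* + U(-delta_k, delta_k)."""
    return [t + random.uniform(-d, d) for t, d in zip(theta, delta)]

def kernel_density(theta, theta_prev, delta):
    """Density K_t(theta | theta_prev): product of uniform densities 1/(2 delta_k);
    zero if any component moved further than delta_k."""
    dens = 1.0
    for t, tp, d in zip(theta, theta_prev, delta):
        if abs(t - tp) > d:
            return 0.0
        dens *= 1.0 / (2.0 * d)
    return dens

def particle_weight(theta, prior_density, prev_thetas, prev_weights, delta):
    """w_t^i = pi(theta) / sum_j w_{t-1}^j K_t(theta | theta_j), for t > 0."""
    denom = sum(w * kernel_density(theta, tj, delta)
                for tj, w in zip(prev_thetas, prev_weights))
    return prior_density(theta) / denom

# two previous particles with equal normalized weights, flat prior density 1
w = particle_weight([0.5], lambda th: 1.0, [[0.0], [1.0]], [0.5, 0.5], [1.0])
```

In practice delta_k would be set to half the range of component k over the previous population, as described above.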
1.2.2 Model selection
Model selection can be incorporated into the ABC framework by introducing the model indicator M and proceeding with inference on the joint space. For example, the ABC rejection algorithm with model selection [5]
proceeds as follows
MR1 Sample M∗ from π(M).
MR2 Sample θ∗ from π(θ|M∗).
MR3 Simulate a dataset x∗ from f(x|θ∗, M∗).
MR4 If ρ(x∗, y) ≤ ε, accept (M∗, θ∗); otherwise reject.
MR5 Return to MR1.
Once N samples have been accepted, an approximation to the marginal posterior probability, P(M = m|y), is given by

$$P(M = m|y) = \frac{\#\text{accepted } m}{N}.$$
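A sketch of the joint-space sampler MR1–MR5; the two toy models below (Gaussian simulators, uniform priors) are illustrative and not those of the study:

```python
import random
from collections import Counter

def abc_model_selection(y_obs, n_samples, eps, models, prior_m):
    """ABC rejection over the joint space (M, theta), steps MR1-MR5."""
    accepted = []
    names = list(models)
    while len(accepted) < n_samples:
        m = random.choices(names, weights=prior_m)[0]  # MR1: sample M* from pi(M)
        lo, hi = models[m]["prior"]
        theta = random.uniform(lo, hi)                 # MR2: sample theta* from pi(theta|M*)
        x = models[m]["simulate"](theta)               # MR3: simulate x* ~ f(x|theta*, M*)
        if abs(x - y_obs) <= eps:                      # MR4: accept (M*, theta*)
            accepted.append((m, theta))                # MR5: return to MR1
    counts = Counter(m for m, _ in accepted)
    return {m: counts[m] / n_samples for m in names}   # approximates P(M = m | y)

random.seed(0)
models = {
    "M1": {"prior": (-5.0, 5.0), "simulate": lambda t: random.gauss(t, 1.0)},
    "M2": {"prior": (-5.0, 5.0), "simulate": lambda t: random.gauss(t, 3.0)},
}
post = abc_model_selection(2.0, 1000, 0.5, models, prior_m=[0.5, 0.5])
```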
Model selection can also be incorporated into the ABC SMC algorithm described above [3, 6]. It has been noted
that using summary statistics for model selection can be problematic due to information loss [7, 8]. This can
give rise to a discrepancy between the Bayes factor calculated from the full data and the summary statistics,
unless the summaries are sufficient for the joint space, {M, θ}. We construct the problem in terms of summary
statistics of the time course data and define the posterior in terms of the summaries. How model selection on
the summaries maps to model selection on any individual time course is irrelevant here.
1.2.3 Variable selection ABC
The algorithm presented here makes a simplification to allow for the traversal of large model spaces. We assume
that all models are nested within one larger model and that the candidate models share the same parameter
space. In terms of model selection we are making the following approximation of the joint space
π(θ, M ) = π(θ|M )π(M ) ≈ π(θ)π(M ).
This allows for the separation of model and parameter space and transforms the model selection problem into
a variable selection one. The main disadvantage of this approach is that model specific parameter spaces (and also priors and perturbation kernels) cannot be constructed.
In order to implement this approach we introduce nI model indicators Ik, k ∈ {1, 2, ..., nI}, which are integer-valued parameters Ik ∈ Z. Here we will assume that each indicator takes three values, Z = {−1, 0, +1} (representing
repression, missing and activating respectively), although this is not a requirement and could be made more
general. Each Ik has a prior specified by a discrete uniform distribution (no particular interaction is favoured)
and a perturbation kernel, Kt,I (Ik∗ |Ik ) specified by a transition matrix (see next subsection).
Particular models are specified by combinations of the indicator variables. Let I = (I1, I2, ..., I_{nI})^T be the vector of indicator variables and z = (z1, z2, ..., z_{nI})^T be the vector of their integer values corresponding to a particular network structure. We can obtain the posterior probability of a particular network, Pr(z), by summing the importance weights of all those particles with indicators I1 = z1, I2 = z2, ..., I_{nI} = z_{nI}, i.e. I = z. Therefore we can write

$$\Pr(z) = \sum_{i=1}^{N} w^i(I^i_1 = z_1, I^i_2 = z_2, ..., I^i_{n_I} = z_{n_I}, \theta^i) = \sum_{i=1}^{N} w^i(I^i, \theta^i)\,\mathbb{I}_z(I^i),$$
where I denotes the indicator function. This allows the algorithm to explore large numbers of nested models. For example, if k = 6 and Z = {−1, 0, +1} the algorithm searches 3^6 = 729 nested models; if k = 9 it searches 3^9 = 19683 nested models.
The modified algorithm for large model space traversal is given below.
To obtain N samples {(I, θ)1, (I, θ)2, ..., (I, θ)N} from the posterior, defined as π(I, θ|ρ(x∗, y) ≤ ε_T), proceed as follows:
VS1    Initialize ε = ∞. Set the population indicator t = 0.
VS2.0  Set the particle indicator i = 1.
VS2.1  If t = 0, sample (I∗∗, θ∗∗) from the prior π(I, θ) = π(I)π(θ).
       If t > 0, sample (I∗, θ∗) from the previous population {(I, θ)^i_{t−1}} with weights w_{t−1} and perturb the particle, (I∗∗, θ∗∗) ∼ K_{t,θ}(θ|θ∗) K_{t,I}(I|I∗), where K_{t,θ} and K_{t,I} are the perturbation kernels.
       If π(I∗∗, θ∗∗) = 0, return to VS2.1.
       Simulate a candidate dataset x∗ ∼ f(x|I∗∗, θ∗∗).
       If ρ(x∗, y) > ε, return to VS2.1.
VS2.2  Set (I, θ)^i_t = (I∗∗, θ∗∗) and d^i_t = ρ(x∗, y), and calculate the weight as
       $$w^i_t = \begin{cases} 1 & \text{if } t = 0 \\ \dfrac{\pi(\theta^i_t)\,\pi(I^i_t)}{\sum_{j=1}^{N} w^j_{t-1}\, K_{t,\theta}(\theta^i_t | \theta^j_{t-1})\, K_{t,I}(I^i_t | I^j_{t-1})} & \text{if } t > 0. \end{cases}$$
       If i < N, set i = i + 1, go to VS2.1.
VS3.0  Normalize the weights.
       Determine ε such that Pr(d_t ≤ ε) = α.
       If ε > ε_T, set t = t + 1, go to VS2.0.
VS END Denote the final population by T and obtain the marginal model probabilities given by
       $$\Pr(z) = \sum_{i=1}^{N} w^i_T(I^i_T, \theta^i_T)\,\mathbb{I}_z(I^i_T).$$
Finally, we can also define the inclusion probability of any single edge within the network by

$$\Pr(I_k = z_k | \cdot) = \sum_{i=1}^{N} w^i(I^i, \theta^i)\,\mathbb{I}_{z_k}(I^i_k).$$
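Both quantities are simple weighted counts over the final population; a minimal sketch with a hypothetical four-particle population (all names and values ours):

```python
def network_posterior(particles, weights):
    """Pr(z): total normalized weight of particles whose indicator vector equals z."""
    total = sum(weights)
    post = {}
    for (I, _theta), w in zip(particles, weights):
        post[I] = post.get(I, 0.0) + w / total
    return post

def inclusion_probability(particles, weights, k, z_k):
    """Pr(I_k = z_k | .): total normalized weight of particles with I_k = z_k."""
    total = sum(weights)
    return sum(w / total for (I, _theta), w in zip(particles, weights) if I[k] == z_k)

# hypothetical population: (indicator vector, parameter vector) pairs with weights
pop = [((1, -1, 0), [0.2]), ((1, -1, 0), [0.3]), ((0, 0, 1), [0.1]), ((1, 0, 0), [0.4])]
w = [0.4, 0.1, 0.3, 0.2]
post = network_posterior(pop, w)               # {(1,-1,0): 0.5, (0,0,1): 0.3, (1,0,0): 0.2}
p_edge = inclusion_probability(pop, w, 0, 1)   # particles with first indicator +1: 0.7
```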
1.2.4 Kernels for the indicator variables
In principle the kernels for the indicator variables can be specified in a number of ways. A dependent kernel can be defined through a transition matrix of the form

$$\Pr(I^*_k = j | I_k = i) = \begin{pmatrix} 1-p & p/2 & p/2 \\ p/2 & 1-p & p/2 \\ p/2 & p/2 & 1-p \end{pmatrix},$$
where p is a constant, fixed heuristically to give good mixing properties. In the original ABC SMC algorithm a value of p = 0.7 was generally used. We also defined an adaptive independent kernel such that the perturbation uses information from the previous population. A simple adaptive kernel is given by the transition matrix
$$\Pr(I^*_k = j | I_k = i) = \begin{pmatrix} p_{-1} & p_0 & p_{+1} \\ p_{-1} & p_0 & p_{+1} \\ p_{-1} & p_0 & p_{+1} \end{pmatrix}.$$

The values within the transition matrix are given by

$$p_{-1} \propto \Pr(I_k = -1) + c/3, \quad p_0 \propto \Pr(I_k = 0) + c/3, \quad p_{+1} \propto \Pr(I_k = +1) + c/3,$$

where the empirical values Pr(I_k = −1), Pr(I_k = 0), Pr(I_k = +1) are calculated from the previous population.
The value c is a regularisation constant that effectively constrains the kernel to be a mixture of the empirical probability distribution and a uniform distribution, which ensures that while the algorithm adapts to the models with highest posterior probability there is always some probability of switching out of the current model. In the main study this kernel is used with c = 1, which is very conservative and ensures good mixing properties. There are alternative ways to formulate dependent and adaptive kernels, for example using multinomial regression [9]; however, since we are dealing with a small number of indicator variables here, these methods were explored but deemed unnecessary.
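The adaptive independent kernel for a single indicator can be sketched as follows (function name ours). Normalisation makes the proportionality explicit; with c = 1 the kernel is an equal mixture of the empirical distribution and the uniform distribution over {−1, 0, +1}:

```python
def adaptive_indicator_kernel(prev_indicators, prev_weights, c=1.0):
    """Transition probabilities p_z proportional to Pr(I_k = z) + c/3, where the
    empirical Pr(I_k = z) comes from the weighted previous population."""
    total_w = sum(prev_weights)
    empirical = {-1: 0.0, 0: 0.0, 1: 0.0}
    for ik, w in zip(prev_indicators, prev_weights):
        empirical[ik] += w / total_w
    unnorm = {z: empirical[z] + c / 3.0 for z in (-1, 0, 1)}
    norm = sum(unnorm.values())
    return {z: p / norm for z, p in unnorm.items()}

# previous population: I_k values with equal weights
probs = adaptive_indicator_kernel([1, 1, 0, -1], [0.25, 0.25, 0.25, 0.25], c=1.0)
# every state keeps probability at least (c/3)/(1 + c), so switching is always possible
```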
1.3 Functional regions, Bayesian integrals and Occam's razor
To more clearly explain the relationship between robustness and model evidence (marginal likelihood), and how complexity is automatically accounted for, it is instructive to consider the very simple toy scenario depicted in Supplementary Figure 1. We assume that the prior, P, is uniformly distributed and is depicted by the blue regions in Supplementary Figure 1. We also assume that a small fraction of parameter space is functional (depicted in red in Supplementary Figure 1); that is, if the parameters lie within this region then the system exhibits the desired dynamical behaviour (oscillations in this case). Within the functional region, F, the likelihood f(O|θ) is equal to one, otherwise it is zero:

$$f(O|\theta) = \begin{cases} 1 & \text{if } \theta \in \mathcal{F} \\ 0 & \text{otherwise.} \end{cases}$$
Figure 1: Examples of two (A) and three (B) dimensional parameter spaces. The prior is depicted by the blue regions and the functional region is depicted in red.
In addition we can define notation for the size of the prior and functional regions as |P| and |F|. With these simplifying assumptions, the model evidence or robustness in the two dimensional case is given by

$$p(O) = \int f(O|\theta)\,p(\theta)\,d\theta = \int f(O|\theta)\,p(\theta_1)\,p(\theta_2)\,d\theta_1\,d\theta_2 = \frac{1}{b^2}\int_{\mathcal{F}} d\theta_1\,d\theta_2 = \frac{|\mathcal{F}|}{|\mathcal{P}|},$$
which is simply the ratio of the functional region to the prior region. When examining two systems of the same complexity (equal number of parameters) and with identical prior distributions, the Bayes factor is simply the ratio of the sizes of the functional regions

$$BF_{12} = \frac{\int f(O|\theta, M_1)\,p(\theta|M_1)\,d\theta}{\int f(O|\theta, M_2)\,p(\theta|M_2)\,d\theta} = \frac{|\mathcal{F}_1|}{|\mathcal{P}_1|} \Big/ \frac{|\mathcal{F}_2|}{|\mathcal{P}_2|} = \frac{|\mathcal{F}_1|}{|\mathcal{F}_2|},$$

which makes sense intuitively. It is straightforward to see that these relationships hold in any number of dimensions.
Now imagine the case where a parameter is added and we transition from R2 to R3 space. The functional region
remains invariant in the θ1 , θ2 plane and extends an amount a in the θ3 direction (Supplementary Figure 1B).
The marginal likelihood becomes

$$p(O) = \int f(O|\theta)\,p(\theta_1)\,p(\theta_2)\,p(\theta_3)\,d\theta_1\,d\theta_2\,d\theta_3 = \frac{|\mathcal{F}^2|}{|\mathcal{P}^2|}\,\frac{a}{b},$$

where we have used the notation F² and P² to label the two dimensional region (θ1, θ2 plane) of F and P respectively. Interestingly, if θ3 is completely uninformative, that is a = b, then the robustness is identical to
the two dimensional case. This is also intuitive since adding a parameter that does nothing should not change
the calculated robustness. This does highlight a possible issue; models with uninformative parameters are not
penalised since they all receive the same robustness. This might be problematic in network inference since one
could end up with networks with spurious edges. However in our case, this is not a problem since an edge
constitutes at least two parameters (Ik , θ). The only way an edge could be added spuriously is if the dynamical
parameter(s) are exactly equal to zero, θ = 0. This has a zero probability of occurring in this analysis.
Now consider the comparison of a three and a two parameter model. The Bayes factor is given by

$$BF = \frac{|\mathcal{F}^3|}{|\mathcal{P}^3|} \Big/ \frac{|\mathcal{F}^2|}{|\mathcal{P}^2|} = \frac{|\mathcal{F}^3|}{|\mathcal{F}^2|} \Big/ \frac{|\mathcal{P}^3|}{|\mathcal{P}^2|}.$$
In order for there to be an increase in robustness (BF > 1), the functional region must increase by a proportion greater than the proportional increase in the prior region. In the simple case here, adding a parameter implies a penalty in the robustness of |P^(n+1)|/|P^n| = b, the size of the prior range. This is how the penalty for model complexity, sometimes called Occam's razor ("the simplest model is the best"), is automatically included in our definition of robustness. The results gained from this informal examination of the simple case generalise to higher dimensions, non-uniform priors and more complex likelihood functions.
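This volume-ratio picture is easy to check numerically. A Monte Carlo sketch under the toy assumptions above (uniform prior on [0, b]^n, box-shaped functional region; all numbers illustrative):

```python
import random

def robustness(n_dim, b, func_widths, n_mc=100_000):
    """Monte Carlo estimate of p(O) = |F| / |P| for a uniform prior on [0, b]^n
    and a box-shaped functional region [0, w_k] in each dimension."""
    hits = 0
    for _ in range(n_mc):
        theta = [random.uniform(0.0, b) for _ in range(n_dim)]
        if all(t <= w_k for t, w_k in zip(theta, func_widths)):
            hits += 1
    return hits / n_mc

random.seed(1)
r2 = robustness(2, b=10.0, func_widths=[2.0, 2.0])        # ~ (2*2)/(10*10) = 0.04
# adding a completely uninformative third parameter (a = b) leaves it unchanged
r3 = robustness(3, b=10.0, func_widths=[2.0, 2.0, 10.0])  # ~ 0.04 as well
```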
1.4 Biochemical modelling
An example of how the promoters are modelled in this study is given in Supplementary Figure 2. In this case,
the promoter is under the regulation of both a repressor and an activator, and thus has two operator sites.
Throughout this analysis, all repressors and activators are assumed to form dimers. The relative terms are
constructed using mass-action type arguments and the amount of the dimer is assumed to be proportional to
the square of the amount of the monomer, as described in the Materials and Methods section of the main text.
The probability of transcription, ptransc , is calculated from the ratio of states in which RNAP is bound, to
all possible states (the partition function). We do not assume a maximum limit on the number of regulators,
and have up to four (including auto-regulation) in the four gene network, although this could in principle be
restricted to a lower number.
Figure 2: An example model of a promoter under the regulation of a repressor and an activator. The promoter states and their statistical weights are: empty DNA (weight 1), RNAP bound (k1), repressor dimer bound (k2 R²/Kd = δR²) and activator dimer with RNAP bound (k3 A²/Kd = σA²), giving

$$p_{transc} = \frac{k_1 + \sigma A^2}{1 + k_1 + \delta R^2 + \sigma A^2}.$$
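The partition-function arithmetic behind ptransc is small enough to write out directly; a sketch using the lumped constants from Figure 2 (δ = k2/Kd, σ = k3/Kd; parameter values below are illustrative):

```python
def p_transcription(R, A, k1, delta, sigma):
    """Probability of transcription for a promoter with one repressor and one
    activator operator site.  Statistical weights of the promoter states:
      1              empty promoter
      k1             RNAP bound alone
      delta * R**2   repressor dimer bound (no transcription)
      sigma * A**2   activator dimer with RNAP bound
    ptransc = (RNAP-bound weights) / (partition function)."""
    rnap_bound = k1 + sigma * A**2
    partition = 1.0 + k1 + delta * R**2 + sigma * A**2
    return rnap_bound / partition

# with no regulators present ptransc reduces to the basal level k1 / (1 + k1)
basal = p_transcription(R=0.0, A=0.0, k1=0.1, delta=1e-4, sigma=1e-3)
```

As expected, activator raises ptransc above the basal level and repressor lowers it, since R² only enters the denominator while A² enters both.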
1.5 Precomputing the prior over networks
In this section we outline how the prior over model space was calculated. Essentially the entire space of models
is enumerated and then symmetrical or otherwise uninteresting networks are excluded, which is equivalent to
setting the prior probability to zero. In principle, rather than setting the prior probability of topologies to zero,
one could weight topologies according to other criteria.
The adjacency matrices for the general three and four node networks are given by

$$A_{3n} = \begin{pmatrix} I_1 & I_8 & I_4 \\ I_5 & I_2 & I_9 \\ I_7 & I_6 & I_3 \end{pmatrix}, \quad A_{4n} = \begin{pmatrix} I_1 & I_8 & I_4 & I_{11} \\ I_5 & I_2 & I_9 & I_{12} \\ I_7 & I_6 & I_3 & I_{13} \\ I_{14} & I_{15} & I_{16} & I_{10} \end{pmatrix}.$$
Two networks G1 and G2, with associated adjacency matrices A1 and A2, are isomorphic if there exists a permutation matrix P such that

$$A_1 = P A_2 P^{-1}.$$

Additionally, permutation matrices are orthogonal, so P^{−1} = P^T.
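Because P^{−1} = P^T, checking A1 = P A2 P^T amounts to relabelling indices, which avoids matrix products altogether; a small sketch (function names ours):

```python
def relabel(A, perm):
    """Apply P A P^T, where P is the permutation matrix sending node i to perm[i];
    entry (i, j) of the result is A[perm[i]][perm[j]]."""
    n = len(A)
    return tuple(tuple(A[perm[i]][perm[j]] for j in range(n)) for i in range(n))

def isomorphic(A1, A2, perms):
    """A1 and A2 are isomorphic if some allowed relabelling maps A2 onto A1."""
    A1 = tuple(map(tuple, A1))
    return any(relabel(A2, p) == A1 for p in perms)

# three node case: identity and the A<->B swap (output node C stays fixed)
perms_3n = [(0, 1, 2), (1, 0, 2)]
A1 = ((0, 1, 0), (0, 0, -1), (1, 0, 0))
A2 = ((0, 0, -1), (1, 0, 0), (0, 1, 0))   # A1 with nodes A and B swapped
```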
1.5.1 General three node network
The case of the general three node network can be handled analytically. The output node, C, is fixed and we need only concern ourselves with the symmetry in nodes A and B. We introduce the permutation matrix

$$P_{AB} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Note that in this case P_AB = P_AB^T, but in general P ≠ P^T. We divide the possible networks into two types: those that are invariant under the transformation and those that are not. The invariant topologies are obtained by solving

$$A' = P_{AB} A P_{AB}.$$

Writing out A explicitly in terms of its elements and solving gives the following requirements on the invariant topologies:

$$a'_{11} = a_{22}, \quad a'_{12} = a_{21}, \quad a'_{13} = a_{23}, \quad a'_{31} = a_{32},$$
which gives a total of 3^5 = 243 invariant topologies. Assuming only half of the remaining (3^9 − 243) topologies are necessary gives a total of 243 + 9720 = 9963 topologies. The breakdown in terms of nodes and edges is given below.
nodes   1    2    3
number  135  216  9612

edges   0  1   2   3    4     5     6     7     8     9
number  1  10  76  344  1020  2040  2704  2336  1160  272

1.5.2 General four node network
The four node network case is difficult to handle analytically since there are now three symmetry operations which do not commute. Assuming that node D is fixed, we can form the following three permutation matrices:

$$P_{AB} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad P_{AC} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad P_{BC} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
There are in total 3^16 possible topologies. For each topology, we computationally applied the three symmetry operations

$$A' = P_{AB} A P_{AB}, \quad A' = P_{AC} A P_{AC}, \quad A' = P_{BC} A P_{BC}.$$

The transformed networks were stored and duplicate topologies then removed, leaving a total of 7886403 topologies (around 18%). In this case, to reduce the model space further, we also removed the 294300 topologies in which node D was unconnected to the rest of the network (no directed connection from node A, B or C), giving a total of 7592103 topologies. The breakdown in terms of nodes and edges is given below.
nodes   1  2     3      4
number  0  2754  29889  7559460

edges   0  1  2   3    4     5      6      7       8
number  0  2  59  702  4842  23230  84694  242874  552960

edges   9       10       11       12       13      14      15      16
number  998704  1422060  1578856  1341760  846176  375488  105264  14432
The precomputation of the prior was done in Python using the SciPy package. We expect there to be further symmetries within the resultant topologies, which could be problematic if we were considering individual networks. However, since in the main analyses we calculate posterior probabilities over collections of topologies, this is not an issue.
2 Supplementary Figures
Figure 3: Two node oscillators: (top) full posterior distributions for model M1, (bottom) full posterior distribution for model M2.
Figure 4: Two node oscillators: M1 and M2 comparisons. (Top) Relative frequencies of the species with the highest average amount over 1024 simulations from the posterior. (Bottom) Average system size over 1024 simulations from the posterior.
Figure 5: Robustness of two node oscillators under the objective of regular oscillations (S2). The four networks shown have posterior probabilities 0.8076, 0.0855, 0.0458 and 0.0331, with edge signs (1, −1, −1, 1), (−1, 1, 1, −1), (−1, 0, −1, −1) and (−1, −1, 0, 1) respectively.
Figure 6: Graph of auto-regulation for robust ring oscillators. The nodes correspond to different configurations of auto-regulation on nodes A, B, C respectively. For example, −1, 0, 1 indicates negative auto-regulation on A, no auto-regulation of B and positive auto-regulation on C. The size of the nodes is proportional to the robustness of the resultant ring oscillator (averaged over all topologies containing the auto-regulatory motif).
Figure 7: Posterior distributions of the production and decay rates for the ring oscillator (top) and the ring oscillator with positive auto-regulation on A (bottom). In the former the decay rates of protein C and all mRNA species are constrained to be high. In the latter only the decay rates for C (protein and mRNA) are constrained to be high.
Figure 8: Robustness of ring oscillators under the objective of regular oscillations (objective S2). (Top) Distribution of Bayes factors. (Bottom) Most robust 12 networks (left) and top 12 interactions ranked by inclusion probability (right).
Figure 9: Comparison of models under objective S1 (fixed frequency) and objective S2 (regular oscillations). (Top) Comparisons of the relative robustness of models under both objectives (Pearson correlation: 0.807). (Bottom) Posteriors of ring oscillators under both objectives S1 (left) and S2 (right).
Figure 10: Robust three node oscillators. (Top) Total number of regulatory interactions within the top ten categories. (Bottom) The fraction of networks containing positive, negative and mixed auto-regulatory feedback loops.
Figure 11: Scatter plots of re-simulated against original posterior predictions for oscillation frequency (freq/100) and amplitude (log10(amplitude)); the Pearson correlation coefficients shown in the panels range from 0.90 to 0.999.
20
freq/100 (orig)
20
freq/100 (resim)
0
●
0
●
●
●
●●
●
●
●●
●●
●
●●
●●
●
●
●
●●
●●
●●
●
●●
●●
●●
●
●
●
●
●
●
●
●
●●
●●
●●
●
●
●●
●
●●
●
●
●
●
●
●●
●●
●
●
●
●
●●
●●
●
●●
●●
●●
●●
●
●
●
●●
●
●●
●●
●●
●●
●●
●●●
●
●●
●●
●
●
●
●
●
●
●
●
●
●●
●●
●●
●●
●●
●●●●●●●
●
●
●
●
●
●
●
●
●
●
●●
●●
●●●●● ●●
●
●● ● ●
● ●
0.986
40
log10(amplitude) (resim)
0
●●
20
●●
●●
●
●
●
●
●
●●
●●
●
●●
●
●
●
●
●
●●
●
●●
●
●●
●●
● ●●
●
●
●
●●
●●
●
●●
●●
●●
●●
●●●
●
●●
●●
●●
●●
●
●●
●●
●●
●
● ●● ●●
●
●
●●
●
●●
●●
●
●●
●
●
●●
●
●
●
●
●
●
●
●
●
●●
●●●
●●
●●●
●●
●
●
●
●●
●●●●
●●
● ●
●●
●●
●●●
●●
●●
●●
●●●
●●
●●
●●
●●
●●●●
●●●
●
●
●●
●●●
●●
●●
●●
●●●
●●
●● ● ●
● ● ●●●
●
●
●●●●
●
●
●
●●●
● ● ●
●
●●●●● ● ●●
●
●● ●●●●
●●●
●●
● ●● ●●
●● ●
●
● ●
●●
●●●
● ●
●●
●●● ●● ● ● ●
●●
●
●●
●
●●
●●
●●
●●
●●
● ●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●●
●
●
●●●● ●
●●
●
●●
●●
●●
●
●
●●
●●
●
●●
●
●
●●●●
●
●
●●
●
●●
●●●●
●●●
●
●
●
●
●
●
●
●
●
●
●
●●
●●
●●
●●
●●
●●●
●●
●
●●
●●●● ●● ● ●
●●●● ●● ●●
● ● ●●
log10(amplitude) (resim)
●
●●●●●●
● ●
●●
●●
●
●●●●
●●
●●
●●
●●
●●
●●
●
●
●
●
●
●
●●
●●
●●
●●
●●
●●
●●
●●
●●
●●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●●
●●
●● ●
●
●●
●●
●
●●
●
●●
●
●
●
●
●●●●●●●● ●
freq/100 (resim)
20
0.961
40
freq/100 (resim)
freq/100 (resim)
freq/100 (resim)
0.882
40
●●●●●
●●●●
● ●●
●
●●●
●
●●
● ● ● ● ●● ●
● ● ●●● ● ●●●
●●
● ●●●
● ●
●●
●
●●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●●
●
●
●
●●
●
●●
●
●
●
●
●●
2
3
0.998
4
3
2
4
log10(amplitude) (orig)
5
●●
●
●●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
2
3
●
4
5
log10(amplitude) (orig)
Figure 11: Resampling and re-simulation of the top ten robust three node oscillators. (Top) Frequency.
(Bottom) Amplitude.
[Figure 12 panel: boxplot of frequency/100 (0–50) for the four classes 0:0, 0:1, 1:0 and 1:1 on the axis "high deg mC,C,mB,B : S9 and S7 activating".]
Figure 12: Boxplot showing the frequency of three node oscillators classified by whether they have low/high
degradation rates and whether both I9 and I7 are positive.
[Figure 13 panels: density plots of frequency/100 and log10(amplitude); left panels grouped by network (2 node, 3 node, 4 node, ring), right panels grouped by degradation prior (deg ~ U(0,1) vs. deg ~ U(0,10)).]
Figure 13: (Left) Density plots of the marginal amplitude and frequency distributions for the two, three and four node
systems. (Right) Reanalysis of the three gene network with a reduced prior range on the degradation rates.
[Figure 14 panels: prior and posterior weights over the number of regulatory interactions (1–16) and the number of genes (2–4) (top); log10(relative robustness), roughly −1.0 to −2.5, against the number of core regulatory interactions (3–12) (bottom).]
Figure 14: (Top) Prior and posterior for the number of edges and number of nodes when examining the four-node
networks. The relative robustness is given by the ratio of posterior to prior. (Bottom) Oscillator robustness as
a function of the number of core regulatory interactions (ignoring auto-regulation).
[Figure 15 panels: f/100, orig vs. resim (sde), cor = 0.93; A, orig vs. resim (sde), cor = 0.99; f/100, orig vs. resim (mjp), cor = 0.82; A, orig vs. resim (mjp), cor = 0.98.]
Figure 15: Calculated frequency and amplitude for the ring oscillator after re-simulation, assuming either a system
of stochastic differential equations (SDE) or a Markov jump process (MJP). The same parameter sets,
representing the oscillating region of parameter space for the ring oscillator, were used in each case. The top plots
show the correlation after re-simulation using the SDE; the bottom plots show the correlation after re-simulation
using the MJP. While re-simulation with the MJP shows higher noise in the frequency, the correlation with the
original SDE output remains very high (0.82, Spearman rank).
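The Spearman rank correlations quoted for Figure 15 can be reproduced from any pair of summary vectors. The sketch below (not the authors' code) computes Spearman's rho as the Pearson correlation of rank vectors; `f_orig`, `f_resim` and the noise level are illustrative stand-ins for the original SDE frequencies and their noisier MJP re-simulations, not values from the paper.

```python
import random

def _ranks(xs):
    # Rank values by sort order (no tie-averaging; adequate for continuous data).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(x, y):
    # Spearman's rho is the Pearson correlation of the two rank vectors.
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-parameter-set frequencies: "original" SDE summaries and a
# noisier re-simulation of the same parameter sets (standing in for the MJP).
random.seed(1)
f_orig = [random.uniform(0.0, 50.0) for _ in range(200)]
f_resim = [f + random.gauss(0.0, 5.0) for f in f_orig]

rho = spearman(f_orig, f_resim)
print(f"Spearman rank correlation: {rho:.2f}")
```

Because rank correlation ignores the marginal scales, it is robust to the systematic differences between SDE and MJP trajectories and only penalises reordering of the parameter sets.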
References
[1] J. K. Pritchard, M. T. Seielstad, A. Perez-Lezaun, and M. W. Feldman. Population growth of human Y chromosomes: a study of Y chromosome microsatellites. Mol Biol Evol, 16(12):1791–8, Dec 1999.
[2] P. Del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. J Roy Stat Soc B, 68:411–436, Jan 2006.
[3] Tina Toni, David Welch, Natalja Strelkowa, Andreas Ipsen, and Michael P. H. Stumpf. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. Journal of the Royal Society Interface, 6(31):187–202, Feb 2009.
[4] Daniel Silk, Sarah Filippi, and Michael P. H. Stumpf. Optimizing threshold-schedules for sequential approximate Bayesian computation: applications to molecular systems. Statistical Applications in Genetics and Molecular Biology, 12(5):603–618, Oct 2013.
[5] Aude Grelaud, Christian P. Robert, and Jean-Michel Marin. ABC methods for model choice in Gibbs random fields. C R Math, 347(3–4):205–210, Jan 2009.
[6] Tina Toni and Michael P. H. Stumpf. Simulation-based model selection for dynamical systems in systems and population biology. Bioinformatics, 26(1):104–10, Jan 2010.
[7] Christian P. Robert, Jean-Marie Cornuet, Jean-Michel Marin, and Natesh S. Pillai. Lack of confidence in approximate Bayesian computation model choice. Proceedings of the National Academy of Sciences of the United States of America, 108(37):15112–15117, Sep 2011.
[8] Chris P. Barnes, Sarah Filippi, Michael P. H. Stumpf, and Thomas Thorne. Considerate approaches to constructing summary statistics for ABC model selection. Statistics and Computing, 22(6):1181–1197, 2012.
[9] Christian Schäfer and Nicolas Chopin. Sequential Monte Carlo on large binary sampling spaces. Statistics and Computing, 23(2):163–184, Nov 2011.